Deformable Translucent Objects


Subsurface scattering is an important aspect developers have to take into account when rendering translucent materials like marble, wax or plastic. Several methods have been proposed by researchers to render such materials, from offline to real-time applications. One I find very smart is the method proposed in [1].

The method proposed in [1] uses the dipole diffusion approximation model to compute the subsurface light scattering inside a 3D object. It uses a splatting approach that takes advantage of the GPU. I have implemented this method, with some ideas of my own, and I share it here with everyone.

Download source and executable
YouTube video


Algorithm


I will give an overview of the method, but for more details, especially about the dipole approximation theory, please refer to the paper [1]. In the remainder of this web page, I will assume that the scene is lit by a spot light.

Firstly, we render the object from the spot light's point of view. For each visible pixel, we compute its position Xi and the amount of light it receives. Using the dipole theory, these points will be considered as small light sources. In my demo, the size of this buffer is 48 by 48, so I can have a maximum of 2304 light sources! Don't worry, rendering each of these lights is not as time-consuming as a usual light; I will discuss this later.
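
As an illustration, here is a minimal GLSL sketch of what the light-view fragment shader could look like, assuming two render targets (one for the position of Xi, one for the light it receives) and a simple spot-light model; all uniform and varying names are illustrative and not taken from the demo:

    #version 120
    // Light-view pass (sketch): each visible fragment becomes a potential sample
    // point Xi. Two render targets are assumed: one for the position of Xi and
    // one for the light it receives. Albedo modulation at Xi (see the sqrt(albedo)
    // trick further below) is left out for brevity.

    varying vec3 worldPos;        // world-space position, forwarded by the vertex shader
    varying vec3 worldNormal;     // world-space normal

    uniform vec3 lightPos;        // spot light position
    uniform vec3 lightDir;        // spot light direction (normalized)
    uniform vec3 lightColor;      // spot light intensity/color
    uniform float spotCosCutoff;  // cosine of the spot cone half-angle

    void main()
    {
        vec3 N = normalize(worldNormal);
        vec3 L = normalize(lightPos - worldPos);

        // Spot cone test and diffuse term: the amount of light entering at Xi.
        float inCone = step(spotCosCutoff, dot(-L, lightDir));
        float NdotL  = max(dot(N, L), 0.0);
        vec3 irradiance = lightColor * NdotL * inCone;

        gl_FragData[0] = vec4(worldPos, 1.0);     // position of Xi
        gl_FragData[1] = vec4(irradiance, 1.0);   // light received at Xi
    }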

Secondly, we render the object from the camera view into another buffer, and for each pixel we output its position Xo. The resolution of this buffer must be the same as the back buffer.
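
A corresponding sketch for this camera-view pass, again with illustrative names:

    #version 120
    // Camera-view pass (sketch): store the world-space position Xo of every
    // visible pixel. The alpha channel marks pixels covered by the model.

    varying vec3 worldPos;   // forwarded by the vertex shader

    void main()
    {
        gl_FragColor = vec4(worldPos, 1.0);
    }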

Finally, for each point Xi, we render a view-aligned billboard with a given radius R. For each point Xo visible from the camera, the distance D between it and the current Xi is computed. The light contribution of Xi on Xo is then read from the translucency texture at coordinate (D/R, 0.5). This is how the subsurface light scattering is computed: by accumulating the contribution of every Xi at each Xo. This is based on the dipole diffusion model, which basically approximates the integral of the diffused light over the surface.
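
Below is a minimal GLSL sketch of the splatting fragment shader, under the assumption that the splats are rendered with additive blending and that the billboard's vertex shader forwards the data of Xi; names are illustrative:

    #version 120
    // Splatting pass (sketch): one view-aligned billboard is rendered per sample
    // point Xi and additively blended into the splat buffer.

    varying vec3 splatPos;     // position Xi
    varying vec3 splatLight;   // light received at Xi (from the light-view pass)

    uniform sampler2D positionBuffer;   // camera-view buffer storing Xo per pixel
    uniform sampler2D translucencyTex;  // 1D translucency profile (stored as a 2D texture)
    uniform float splatRadius;          // radius of influence R
    uniform vec2 bufferSize;            // resolution of the splat buffer in pixels

    void main()
    {
        // Fetch the visible surface point Xo under this billboard fragment.
        vec2 uv = gl_FragCoord.xy / bufferSize;
        vec4 xo = texture2D(positionBuffer, uv);
        if (xo.a == 0.0) discard;          // background pixel: nothing to light

        // Distance D between the entry point Xi and the exit point Xo.
        float d = distance(splatPos, xo.xyz);
        if (d > splatRadius) discard;      // outside the radius of influence

        // Read the contribution at (D/R, 0.5); additive blending sums all Xi.
        vec3 profile = texture2D(translucencyTex, vec2(d / splatRadius, 0.5)).rgb;
        gl_FragColor = vec4(splatLight * profile, 1.0);
    }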

Example of a translucency texture.

In [1], several parameters need to be given by the artist, such as the Fresnel term and the scattering, absorption and extinction coefficients. These parameters may not be intuitive for artists. So, I tried to make this algorithm artist-friendly by only requiring the definition of a radius of influence (R) and a 1D translucency texture (T) for each translucent material. As usual, the model can have an albedo map, a gloss map, etc. Note that because the light is multiplied by the albedo texture at the entry point Xi and at the exit point Xo, it could result in under-exposure (in fact, light is multiplied by the square of the albedo). To reduce this problem, I used the same trick as in [2]: the albedo texture in fact contains the square root of the original albedo.
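
As an illustration of the exit-point side of this trick, here is a sketch of a possible final shading pass, assuming the accumulated scattered light lives in a screen-space splat buffer and the albedo map already stores sqrt(albedo); names are illustrative:

    #version 120
    // Final shading pass (sketch): the model is rendered from the camera; the
    // accumulated scattered light is fetched in screen space and modulated by
    // the albedo at the exit point Xo. The albedo texture is assumed to store
    // sqrt(albedo) (the trick from [2]); since the light-view pass multiplies
    // the incoming light by the same texture at the entry point Xi, the full
    // albedo ends up applied exactly once overall.

    varying vec2 modelUV;               // the model's texture coordinates
    uniform sampler2D splatBuffer;      // accumulated scattered light (screen space)
    uniform sampler2D sqrtAlbedoTex;    // albedo map containing sqrt(albedo)
    uniform vec2 viewportSize;          // back buffer resolution in pixels

    void main()
    {
        vec2 screenUV  = gl_FragCoord.xy / viewportSize;
        vec3 scattered = texture2D(splatBuffer, screenUV).rgb;
        vec3 albedoOut = texture2D(sqrtAlbedoTex, modelUV).rgb;
        gl_FragColor = vec4(scattered * albedoOut, 1.0);
    }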

Some results for a full-screen translucent model on a GeForce 7900GT (for my optimizations, please read the next section):

Resolution           1280*1024    1024*768    800*600
Frames Per Second        1            9          16

Front- and back-lit wax knight.

Automatic shadows taking into account the light diffusion properties of each material.

Front- and back-lit plastic knight with a green belt in the albedo texture.
Notice the green light scattering in the medium.

On the left, sample point positions (Xi) corresponding to the scene on the right.

Rendering of a purple wax candle with a flickering light (see the video).


Optimization

The most time-consuming part of this method is the splatting of each Xi. The fill rate can be really high due to the number of view-aligned billboards. To reduce this problem, I propose to use a splatting buffer at a smaller resolution than the back buffer. The buffer containing the positions Xo must have the same size. Thus, the total number of pixels written is greatly reduced and the frame rate increases enough for a real-time application.

Using this trick, some artifacts can appear. For instance, texels can be seen on the silhouette of the model. As a solution to this problem, I propose to expand the model silhouette in the buffer containing all Xo positions. As a result, all obvious artifacts disappear at a small cost in frame rate. However, if the object has large depth discontinuities and/or highly contrasted regions in the albedo texture, some artifacts can still be visible inside the model.
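
Here is a possible GLSL sketch of such an expansion (dilation) filter, assuming the Xo buffer marks model coverage in its alpha channel; names are illustrative:

    #version 120
    // Silhouette expansion (sketch): a dilation pass over the low-resolution Xo
    // position buffer, run as a full-screen quad. Background texels that touch
    // the model take the position of a valid neighbour, so splats no longer
    // sample empty texels on the silhouette.

    varying vec2 texCoord;              // full-screen quad texture coordinates
    uniform sampler2D positionBuffer;   // Xo positions (alpha = 1 where the model is visible)
    uniform vec2 texelSize;             // 1.0 / buffer resolution

    void main()
    {
        vec4 center = texture2D(positionBuffer, texCoord);
        if (center.a > 0.0) { gl_FragColor = center; return; }

        // Background texel: copy the first valid neighbour found in a 3x3 window.
        for (int y = -1; y <= 1; ++y)
        {
            for (int x = -1; x <= 1; ++x)
            {
                vec4 n = texture2D(positionBuffer,
                                   texCoord + vec2(float(x), float(y)) * texelSize);
                if (n.a > 0.0) { gl_FragColor = n; return; }
            }
        }
        gl_FragColor = center;   // still background, leave it empty
    }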

Some results for a full-screen translucent model on a GeForce 7900GT with a splat buffer resolution of 100*75:

Resolution                                  1280*1024    1024*768    800*600
Frames Per Second without edge expansion       115          146        150
Frames Per Second with edge expansion           97          121        127

Using the low-resolution splat buffer, texels become visible on complicated models (top).
However, for simple models, the difference is not noticeable (bottom).

Left: with silhouette expansion. Right: without silhouette expansion.
Texels of the splat buffer are noticeable on the knight silhouette when the expansion filter is not applied.


Drawbacks of this method

I find this method really smart and interesting. However, while implementing it, I faced several problems:

  • Sometimes models look under-exposed, so I have to multiply the light luminance by a high factor.
  • If the texture storing all the Xi points and their luminance has a low resolution, the splat billboards may become visible: more samples must be taken to avoid this problem.
  • Visible flickering may appear because of the discrete sampling of the Xi points. To reduce this ugly visual artifact, some samples may be kept from frame to frame as proposed in [1]. The temporal coherency of each splat may be computed using the light/object relative movement and the change in color.


Future work

If you are interested in improving this algorithm, here is some possible future work:

  1. (easy) The specular component is not shadowed. A shadow algorithm must be applied.
  2. (normal) Currently, I'm reading the light-view texture back to the CPU using glGetTexImage, which stalls the rendering pipeline. Pixel buffer objects should be used instead to accelerate this method. The splatting vertex shader should then be changed accordingly.
  3. (hard) Implement environment lighting as proposed in the paper.


References:
