The method I am going to present was proposed by Jon Jansen and Louis Bavoil, both working at NVIDIA, and is called Fourier Opacity Mapping (FOM). It computes correct light attenuation for each light ray leaving a light source and passing through a medium such as smoke, steam or fog. Several earlier techniques achieved such effects using Deep Shadow Maps, Opacity Shadow Maps or Half-Angle Slicing (for an in-depth overview of previous methods, please read the paper [1]). However, those methods suffer either from a high memory cost or from artifacts due to sampling discretization.
I will not reproduce the full theory here, since it is well covered in the paper; I will only summarize the attenuation of a ray traveling through a medium. The transmittance at a given distance d along a ray r is T(r,d) = L(r,d) / L0(r), where L0 is the light emitted by the light source and L is the light remaining at depth d. According to Beer's law, for a homogeneous medium T(r,d) can be approximated as exp(-sigma * d), where sigma is the absorption coefficient. In our case, we want to render a volume of particles with correct light attenuation, so the goal is to recover the accumulated absorption along the ray for each rendered particle pixel at depth d. It is important to note that this method does not take scattering effects into account.
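To make Beer's law concrete, here is a minimal Python sketch (the function name is mine):

```python
import math

def transmittance(sigma, d):
    """Beer's law: fraction of light surviving after traveling a
    distance d through a homogeneous medium with absorption
    coefficient sigma."""
    return math.exp(-sigma * d)

# A longer path through the same medium attenuates more.
t_near = transmittance(0.5, 1.0)   # exp(-0.5), about 0.61
t_far  = transmittance(0.5, 4.0)   # exp(-2.0), about 0.14
```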
Fourier Opacity Mapping consists of two major steps:
1. Render particles from light point of view to build the Fourier opacity maps.
2. Render particles from the camera point of view. During this step, the Fourier opacity maps are used to compute the transmittance for each pixel of the volume and thus the amount of light.
Build the Fourier opacity maps
During this step, as we render the particles from the light's point of view, each pixel of the Fourier opacity maps is treated as a ray of light along which the transmittance function T(r,d) is constructed and stored. The transmittance function is stored as a truncated Fourier series. The contribution of each particle is added to the FO maps with additive blending, using the equation given in the paper [1] (the mathematical derivation there is worth reading). As in the paper, I use 7 coefficients for the truncated Fourier series (a0, an and bn with n in {1, 2, 3}).
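Here is a CPU-side Python sketch of how the map-filling pass accumulates coefficients, based on my reading of the equations in the paper (function and variable names are mine; depths are assumed normalized to [0, 1], and the additive blending on the GPU performs exactly these sums):

```python
import math

def accumulate(particles, n=3):
    """Build truncated Fourier coefficients of the absorption function.
    Each particle is a (depth, alpha) pair with depth in [0, 1].
    Returns (a, b) where a[0] is a0 and a[k], b[k] are the harmonics."""
    a = [0.0] * (n + 1)
    b = [0.0] * (n + 1)   # b[0] is unused
    for depth, alpha in particles:
        ap = -math.log(1.0 - alpha)   # per-particle absorptivity
        a[0] += 2.0 * ap
        for k in range(1, n + 1):
            a[k] += 2.0 * ap * math.cos(2.0 * math.pi * k * depth)
            b[k] += 2.0 * ap * math.sin(2.0 * math.pi * k * depth)
    return a, b

particles = [(0.2, 0.3), (0.5, 0.4), (0.7, 0.2)]
a_fwd, b_fwd = accumulate(particles)
a_rev, b_rev = accumulate(list(reversed(particles)))
# The sums commute, which is why the rendering order does not matter.
```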
Fourier series coefficients for the transmittance function for the red color component. Left: a0, a1, a2 and a3 coefficients (a3 is not visible since it is in the alpha channel). Right: b1, b2 and b3 coefficients. The final render corresponding to these coefficients can be seen in the next screenshots.
You can store the transmittance function as a single luminance attenuation, requiring only one Fourier series. Alternatively, you can use three Fourier series to compute wavelength-dependent light attenuation, at three times the memory cost. It is important to note that during this step, the rendering order of particles does not matter.
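To make the storage tradeoff concrete, here is a small back-of-the-envelope calculation (the function name is mine; the 7 coefficients of one series fit in two RGBA textures, with one channel unused, which matches the 2 and 6 RGBA16F buffers used in the measurements below):

```python
def fom_memory_bytes(width, height, series_count, bytes_per_channel=2):
    """Approximate FOM storage: each Fourier series needs 7 coefficients,
    packed into 2 RGBA textures; RGBA16F means 2 bytes per channel."""
    textures = 2 * series_count
    return width * height * 4 * bytes_per_channel * textures

lum = fom_memory_bytes(256, 256, 1)   # luminance: 2 RGBA16F maps, 1 MiB
rgb = fom_memory_bytes(256, 256, 3)   # colored:   6 RGBA16F maps, 3 MiB
```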
Render the particle volume
During this step, we render the particles as usual, but we attenuate the light using Beer's law. We evaluate the attenuation at depth d (the distance from the light to the pixel) using the truncated Fourier series coefficients stored in the Fourier opacity maps (see the paper for the reconstruction equation). The Fourier series for each pixel is fetched using standard projective texturing. During this rendering pass, particles can be sorted according to the current camera transformation matrix.
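A self-contained Python sketch of the reconstruction, again based on my reading of the paper's equations (names are mine; depth d is normalized to [0, 1]). For a single particle at depth 0.5 with alpha 0.5, the full-depth transmittance should come out to exactly 0.5:

```python
import math

def transmittance_from_coeffs(a, b, d):
    """Evaluate T(d) = exp(-integral from 0 to d of the reconstructed
    absorption function), from the truncated Fourier coefficients."""
    acc = 0.5 * a[0] * d
    for k in range(1, len(a)):
        w = 2.0 * math.pi * k
        acc += (a[k] / w) * math.sin(w * d) + (b[k] / w) * (1.0 - math.cos(w * d))
    return math.exp(-acc)

# Coefficients for one particle at depth 0.5 with alpha = 0.5,
# built the same way as in the map-filling pass.
ap = -math.log(1.0 - 0.5)
a = [2 * ap] + [2 * ap * math.cos(2 * math.pi * k * 0.5) for k in (1, 2, 3)]
b = [0.0] + [2 * ap * math.sin(2 * math.pi * k * 0.5) for k in (1, 2, 3)]

t0 = transmittance_from_coeffs(a, b, 0.0)   # 1.0: nothing in front yet
t1 = transmittance_from_coeffs(a, b, 1.0)   # 0.5: the particle blocks half
```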
More details
Because the Fourier series is truncated to a small number of coefficients, ringing can appear in some cases, especially for abrupt variations of the attenuation function; this is illustrated with several examples in the paper. As a result, the method is not suitable for computing the attenuation of light passing through sharp transparent surfaces such as glass. However, it is a perfect fit for low-opacity media that attenuate light smoothly, such as fog and smoke.
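This failure mode is easy to reproduce numerically. The sketch below (self-contained, names mine, following my reading of the paper's equations) puts a nearly opaque occluder at depth 0.5: with only 3 harmonics, the reconstruction bleeds well past the step, so the transmittance just in front of the occluder is far below its true value of 1.0:

```python
import math

def fom_transmittance(depth_alpha_pairs, d, n=3):
    """Accumulate truncated Fourier coefficients for the given particles,
    then evaluate the reconstructed transmittance at normalized depth d."""
    a = [0.0] * (n + 1)
    b = [0.0] * (n + 1)
    for di, alpha in depth_alpha_pairs:
        ap = -math.log(1.0 - alpha)
        a[0] += 2.0 * ap
        for k in range(1, n + 1):
            a[k] += 2.0 * ap * math.cos(2.0 * math.pi * k * di)
            b[k] += 2.0 * ap * math.sin(2.0 * math.pi * k * di)
    acc = 0.5 * a[0] * d
    for k in range(1, n + 1):
        w = 2.0 * math.pi * k
        acc += (a[k] / w) * math.sin(w * d) + (b[k] / w) * (1.0 - math.cos(w * d))
    return math.exp(-acc)

# A nearly opaque occluder at depth 0.5 (think: a pane of dark glass).
blocker = [(0.5, 0.99)]
t_before = fom_transmittance(blocker, 0.45)  # true value is 1.0,
                                             # reconstruction is far lower
```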
You don’t need a high-resolution buffer for the Fourier opacity maps: a 256x256 resolution can be enough for a spot light with a 90-degree aperture. Furthermore, the size of the maps can be adapted to the aperture of the spot light. It will also depend on the complexity of the particle volume.
Performance
I report some performance figures for the FOM method on my test scene: a sine wave of 1024 particles dancing around a cube. The goal is to evaluate the influence of buffer resolution and of colored versus luminance attenuation. Beyond those two factors, the final performance will depend heavily on the number of particles in your scene and on the fill rate from the light and camera views (related to the number of particles and their size). The timings reported here were measured with GPU timer queries, for a fixed view of the scene, on an NVIDIA Quadro FX3700M.
In the following table, FOM buffer filling (step 1) times are reported in milliseconds for wavelength-dependent color attenuation (6 RGBA16F buffers) and luminance attenuation (2 RGBA16F buffers), as a function of the Fourier opacity map resolution:
Resolution       1024x1024   512x512   256x256   128x128
Color Att.       11.8        3.9       1.7       0.7
Luminance Att.   6.5         2.0       0.8       0.2
In the following table, scene and particle rendering (step 2) times are reported in milliseconds for wavelength-dependent color attenuation (6 RGBA16F buffers) and luminance attenuation (2 RGBA16F buffers), as a function of the Fourier opacity map resolution:
Resolution       1024x1024   512x512   256x256   128x128
Color Att.       14.6        7.6       5.0       4.1
Luminance Att.   6.5         4.2       3.7       3.5

The next screenshots show the difference between rendering particles without (left) and with (right) FOM:
The next screenshots show the same scene with a luminance-only attenuation function (the paper presents only luminance attenuation):
Conclusion
I found this method very effective for what it was designed for: volumetric shadows for low-opacity particle volumes. Furthermore, it only requires one additional pass compared to usual particle rendering methods, and it imposes no constraints on the rendering order of particles. Its practicality is well established, since it has been used successfully in the game Batman: Arkham Asylum.
References:
[1] Jon Jansen and Louis Bavoil, "Fourier Opacity Mapping", Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2010.