[Sloan et al. 07]
Peter-Pike Sloan recently posted a new publication to his website that shows a lot of potential for real-time applications. Thanks to Chris Oat for pointing it out to me. In my humble opinion, the two most beneficial contributions of the paper are:
Image-based Occlusion and Indirect Transfer
P.P. builds atop some of the ideas presented in last year’s SIGGRAPH paper “Real-time Soft Shadows in Dynamic Scenes Using Spherical Harmonic Exponentiation” by Ren et al. (of which Sloan was a co-author). Objects are represented by spherical proxies, and their SH visibility is accumulated (in log space) at receiver points (shaded pixels) within a range of influence to produce soft shadows. This follows the Ren paper, with the key difference that it operates in image space rather than object space. That also makes it faster: z-testing can be used to skip the visibility computation for pixels outside a proxy’s possible range of influence. This is similar to the non-local occlusion technique in the “Accelerated Ambient Occlusion Techniques on GPUs” paper I’ve mentioned before, though Sloan’s technique seems to be a little more careful about conserving energy.
The other neat thing about this method is that you don’t get over-occlusion of pixels, because of the way the visibility vectors are accumulated. This is something that is often disregarded in real-time AO methods. I found this paper easier to read than the original exponentiation paper, though admittedly much of the SH material went over my head on my first reads.
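To make the over-occlusion point concrete, here is a minimal sketch of my own (not the paper’s shaders), using a single scalar visibility per blocker in place of the paper’s SH vectors: naively summing occlusions can over-darken, while summing logs and exponentiating multiplies the visibility factors and keeps the result in a sensible range.

```python
import numpy as np

# Hedged, scalar stand-in for the paper's SH log-space accumulation.
# Each blocker proxy i contributes a visibility factor v_i in (0, 1].

def naive_accumulate(visibilities):
    # Subtracting summed occlusions can go negative: over-occlusion.
    return 1.0 - sum(1.0 - v for v in visibilities)

def log_space_accumulate(visibilities):
    # Summing logs then exponentiating == multiplying visibilities,
    # which stays in (0, 1] no matter how many blockers overlap.
    return float(np.exp(sum(np.log(v) for v in visibilities)))

vis = [0.6, 0.7, 0.5]
naive = naive_accumulate(vis)       # 1 - (0.4 + 0.3 + 0.5) = -0.2
stable = log_space_accumulate(vis)  # 0.6 * 0.7 * 0.5 = 0.21
```

The real method does this accumulation on SH visibility vectors per shaded pixel, but the energy argument is the same.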
Bilateral Upsampling
Have you ever read a paper and unexpectedly found the solution to a problem you had previously considered (to no avail)? Though it occupies only a small section of the paper, I found the bilateral upsampling method to be incredibly useful. It allows you to compute some quantity at a lower resolution, then upsample to a higher resolution while respecting edges/discontinuities. So if you have an expensive pixel shader whose output is relatively low frequency (screen-space ambient occlusion anyone?), you can save significant computation.
Bilateral filtering, though around for several years (orig paper link), only recently grabbed my attention at SIGGRAPH 06, with the “Image-Based Material Editing” paper by Khan et al. Bilateral filtering is a method for edge-preserving smoothing. Filter samples are weighted along two dimensions, a spatial term (Euclidean pixel distance) and a range term (difference in luminance), so only pixels that are similar (as defined by the function’s parameters) contribute to the filtered result. In this manner, edges are preserved.
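As a concrete (if slow) illustration of those two weights, here is a hedged pure-NumPy sketch of a bilateral filter. The parameter names sigma_s (spatial) and sigma_r (range/luminance) are my own choices, and a real implementation would be vectorized or run on the GPU.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing of a 2D float image: each neighbor is
    weighted by BOTH spatial distance and luminance difference, so
    pixels across an edge contribute almost nothing."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial weight: Euclidean pixel distance
                        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        # range weight: luminance difference
                        d = img[ny, nx] - img[y, x]
                        wr = np.exp(-(d * d) / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny, nx]
                        wsum += ws * wr
            out[y, x] = acc / wsum
    return out
```

With a small sigma_r, a hard step edge survives the filter essentially untouched, while noise within each flat region would be averaged away.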
The upsampling function described in the paper follows the same concepts. By scaling the standard bilinear filtering weights by the similarity of the normals and positions to the lower-resolution data, edges that would have been violated by plain bilinear filtering are preserved. Any method that lets you get more miles (pixels!) out of your computation with little overhead is a-ok in my book. In fact, the global illumination method in this paper, impressive as it is, could scarcely be considered real-time if not for the bilateral upsampling it employs.
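Here is a rough sketch of that reweighting, under my own simplifications: I scale the four bilinear tap weights by depth similarity only (the paper also uses normals), for a fixed 2x upsample, and all names and parameters are illustrative rather than the paper’s.

```python
import numpy as np

def bilateral_upsample(lowres, low_depth, high_depth, sigma_d=0.5):
    """2x upsample of `lowres` (h, w) guided by geometry: each of the
    four bilinear taps is reweighted by how close its low-res depth is
    to the high-res pixel's depth, so values never bleed across a
    depth discontinuity. `high_depth` is (2h, 2w)."""
    H, W = high_depth.shape
    h, w = lowres.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # position of this high-res pixel in the low-res grid
            fy = (y + 0.5) / 2 - 0.5
            fx = (x + 0.5) / 2 - 0.5
            y0 = int(np.clip(np.floor(fy), 0, h - 1))
            x0 = int(np.clip(np.floor(fx), 0, w - 1))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            ty = float(np.clip(fy - y0, 0.0, 1.0))
            tx = float(np.clip(fx - x0, 0.0, 1.0))
            # standard bilinear tap weights
            taps = [(y0, x0, (1 - ty) * (1 - tx)), (y0, x1, (1 - ty) * tx),
                    (y1, x0, ty * (1 - tx)),       (y1, x1, ty * tx)]
            acc = wsum = 0.0
            for ly, lx, wb in taps:
                dd = low_depth[ly, lx] - high_depth[y, x]
                wd = np.exp(-(dd * dd) / (2 * sigma_d ** 2))  # similarity
                acc += wb * wd * lowres[ly, lx]
                wsum += wb * wd
            out[y, x] = acc / max(wsum, 1e-8)
    return out
```

On a scene with a depth discontinuity, taps from the wrong side of the edge get a near-zero similarity weight, so the upsampled value is rebuilt only from geometrically matching low-res samples.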
Image-Based Proxy Accumulation for Real-Time Soft Global Illumination
Peter-Pike Sloan, Naga K. Govindaraju, Derek Nowrouzezahrai, John Snyder
To appear in Pacific Graphics 2007, October 2007