Light Indexed Deferred Rendering

January 24, 2008 at 3:19 pm (Demos, Papers)


Damian Trebilco posted a paper and demo of an interesting approach to deferred rendering on the Beyond3D forums. Instead of rendering out several buffers of geometry information (g-buffers), the author renders the light volumes with unique IDs into a buffer. Standard forward rendering is then used, and the per-pixel light indices are used to look up light information. The great benefit of doing it this way is that you avoid the bandwidth cost of writing out all those g-buffers, which become increasingly costly once you turn on MSAA. Another benefit is that handling transparent surfaces becomes much easier.

With this approach, the worry of packing geometry and material information into as few g-buffers as possible is replaced with the worry of storing your light IDs and handling the maximum number of lights that might overlap a single pixel. There are a few other gotchas, but you should read the paper for a comparison with standard forward rendering and traditional deferred rendering. Worth a read!
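To make the idea concrete, here's a minimal CPU-side sketch of the forward shading pass under one plausible configuration: one RGBA8 light-index texel per pixel (so at most four overlapping lights), with ID 0 reserved for "no light". All of the names here (LightIndexTexel, EvaluatePointLight, and so on) are mine, not the paper's.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Hypothetical per-light record, looked up by the 8-bit ID written into the
// light-index buffer. ID 0 is reserved here to mean "no light".
struct Light { Vec3 position; Vec3 color; float radius; };

// One RGBA8 texel of the light-index buffer: up to four light IDs, which is
// the cap on how many light volumes may overlap a single pixel in this scheme.
struct LightIndexTexel { uint8_t ids[4]; };

static Light g_lights[256];  // global light table indexed by ID

// A simple attenuated N.L point-light term (a stand-in for your real BRDF).
static Vec3 EvaluatePointLight(const Light& l, Vec3 p, Vec3 n)
{
    Vec3 d = { l.position.x - p.x, l.position.y - p.y, l.position.z - p.z };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    float nDotL = std::max((n.x*d.x + n.y*d.y + n.z*d.z) / std::max(len, 1e-6f), 0.0f);
    float atten = std::max(1.0f - len / l.radius, 0.0f);
    float k = nDotL * atten;
    return { l.color.x * k, l.color.y * k, l.color.z * k };
}

// Forward-shade one pixel: unpack its light IDs and accumulate only the
// lights whose volumes actually cover this pixel.
Vec3 ShadePixel(LightIndexTexel texel, Vec3 worldPos, Vec3 normal)
{
    Vec3 result = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        uint8_t id = texel.ids[i];
        if (id == 0) break;  // no more lights stored for this pixel
        Vec3 c = EvaluatePointLight(g_lights[id], worldPos, normal);
        result.x += c.x; result.y += c.y; result.z += c.z;
    }
    return result;
}
```

The early break on ID 0 is what keeps per-pixel cost bounded; a real implementation also needs a policy for pixels where more than four light volumes overlap.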


I3D 2008 Papers and Registration

January 16, 2008 at 4:54 am (Papers)


I3D 2008 (Feb 15th-17th) is creeping up on us quickly. Early registration ends in about 15 minutes at the I3D website. Papers have begun to trickle into Ke-Sen Huang's graphics conference paper page for I3D 2008.

A few papers of interest that have jumped out at me are:

Bouthors et al. Interactive multiple anisotropic scattering in clouds (gallery/thread on gamedev.net)

Modeling anisotropic light scattering in clouds with beautiful results.

Kim Hardware-Aware Analysis and Optimization of Stable Fluids

The author analyzes the performance of Stam's Stable Fluids algorithm in terms of its load/store-to-ALU ratio and memory access patterns, reports experimental results that support the predicted performance, and offers two optimizations to alleviate the bandwidth bottleneck.

Wyman Hierarchical Caustic Maps

Haven’t had a chance to look at this one yet but caustics papers always warrant a look IMHO.

Kloetzli et al. Interactive Volume Isosurface Rendering using BT Volumes

Paper isn’t publicly available yet but I’ve seen the algorithm running and it is sweet.


Caustics Mapping: An Image-space Technique for Real-time Caustics

January 4, 2008 at 3:02 am (Papers)


I wish I hadn’t waited so long to read this paper. It’s got three of my favorite things in a graphics paper: it’s image-space, it’s got a proof rather than hand waving, and it’s clever! It contains some damned pretty images too.

The setup is pretty much the same as in every image-space rendering algorithm. You render a few buffers (3D position and normal) of the refractive object from the view of the light. Like most image-space algorithms, you want to consider each texel in these buffers to be a small surface. To calculate the caustics on the scene, you want to know where the light refracted through each of these surfaces ends up in the 3D scene. This is the hard part about rendering caustics, because you need to know where the refracted ray intersects the scene. Ray-scene intersection on the GPU = impractical (though obviously not impossible). This is the problem that this paper takes a hack at.
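Before any intersection estimate, each texel needs its refracted direction. Here's a minimal Snell's-law helper following the same convention as GLSL's refract() (unit incident direction pointing toward the surface, eta = n1/n2); the function and type names are mine:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  Sub(Vec3 a, Vec3 b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  Scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }

// Refract a unit incident direction i about a unit normal n with index
// ratio eta (e.g. 1.0/1.33 when entering water). Returns {0,0,0} on total
// internal reflection, just like GLSL's refract().
Vec3 Refract(Vec3 i, Vec3 n, float eta)
{
    float nDotI = Dot(n, i);
    float k = 1.0f - eta * eta * (1.0f - nDotI * nDotI);
    if (k < 0.0f) return {0.0f, 0.0f, 0.0f};  // total internal reflection
    // r = eta*i - (eta*dot(n,i) + sqrt(k)) * n
    return Sub(Scale(i, eta), Scale(n, eta * nDotI + std::sqrt(k)));
}
```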

By additionally rendering the 3D positions of the scene (sans the refractive object) from the view of the light, this algorithm bypasses explicit ray-scene intersection entirely. The paper outlines an iterative method that converges, in image space, on the distance along the refracted ray at which it intersects a scene surface. The algorithm is as follows:

Assume an initial distance d along the refracted ray

do for some number of iterations:

  • Compute the 3D position P1 at distance d along the ray and project it into the image space of the light (the view the scene position buffer was rendered from)
  • Use that projected position to fetch the scene's 3D position P2 from the scene position buffer
  • Take the distance from the refractive surface point to P2 as the new estimate for d

The paper offers a proof of convergence based on Newton's method. By repeating this process for each texel in the refractive object buffer and splatting the refracted flux onto the scene at the 3D position calculated by the iteration above, voilà: caustics.
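Here's a minimal sketch of that refinement loop; sampleScene stands in for projecting a world-space point into the light's image space and fetching the stored scene position, and all names are mine rather than the paper's:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Add(Vec3 a, Vec3 b)    { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3  Sub(Vec3 a, Vec3 b)    { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  Scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
static float Length(Vec3 v)         { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Stand-in for the image-space lookup: project worldPos into the light's
// view and fetch the 3D scene position stored at that texel.
using ScenePositionLookup = Vec3 (*)(Vec3 worldPos);

// Iteratively estimate where the refracted ray (origin on the refractive
// surface, unit direction from Snell's law) hits the scene.
Vec3 EstimateIntersection(Vec3 origin, Vec3 refractedDir, float initialDistance,
                          int iterations, ScenePositionLookup sampleScene)
{
    float d = initialDistance;
    Vec3 p2 = origin;
    for (int i = 0; i < iterations; ++i) {
        Vec3 p1 = Add(origin, Scale(refractedDir, d)); // estimated point P1
        p2 = sampleScene(p1);                          // scene position P2 under P1
        d = Length(Sub(p2, origin));                   // refined distance estimate
    }
    return p2;  // splat the refracted flux here
}
```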

One problem with a naive implementation is that the amount of flux splatted onto the scene depends on the number of texels covered by the refractive object in the refractive object buffers. The paper states that the flux contribution of each texel is the flux at the surface (N dot L) multiplied by the reciprocal of the projected area of the object in the refractive object buffer. The paper doesn't state this, but I believe that this should additionally be multiplied by a Fresnel term. The projected area is calculated by performing an occlusion query from the view of the light.
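In code, the per-texel contribution might look like the sketch below. The Fresnel factor (here Schlick's approximation) is my suggested addition, not something the paper prescribes, and the projected area is the pixel count reported by the occlusion query:

```cpp
#include <algorithm>
#include <cmath>

// Schlick's approximation to the Fresnel reflectance; the transmitted
// fraction is (1 - F). f0 is the reflectance at normal incidence.
static float SchlickFresnel(float cosTheta, float f0)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Flux splatted by one texel of the refractive object buffer. Dividing by
// the projected area (in texels, from an occlusion query rendered from the
// light's view) keeps the total flux independent of how many texels the
// refractive object covers.
float TexelFlux(float nDotL, float projectedAreaInTexels, float f0)
{
    float incident = std::max(nDotL, 0.0f);
    float transmitted = incident * (1.0f - SchlickFresnel(incident, f0));
    return transmitted / std::max(projectedAreaInTexels, 1.0f);
}
```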

I think the method in this paper is very practical, and the frame rates are fantastic. They suggest two buffers for the refractive object information, but it could be stored in one RGBA texture by keeping the normal in three channels and depth in the fourth, then unprojecting the depth back to a 3D position. This saves a bit of bandwidth and utilizes the ALU a bit more.
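A sketch of that reconstruction, assuming a linear view-space depth stored in the fourth channel and a simple perspective projection for the light (the packing and parameter names are my reading of the suggestion, not the paper's):

```cpp
struct Vec3 { float x, y, z; };

// Reconstruct a view-space position from a stored linear depth. u and v are
// the texel's coordinates in [0,1]^2, tanHalfFovY and aspect describe the
// light's perspective projection, and the camera looks down -z in view space.
Vec3 UnprojectDepth(float u, float v, float linearDepth,
                    float tanHalfFovY, float aspect)
{
    // Direction through the texel on the z = -1 plane...
    float x = (2.0f * u - 1.0f) * tanHalfFovY * aspect;
    float y = (2.0f * v - 1.0f) * tanHalfFovY;
    // ...scaled so the point sits at the stored depth.
    return { x * linearDepth, y * linearDepth, -linearDepth };
}
```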

Musawir A. Shah, Jaakko Konttinen, Sumanta Pattanaik. “Caustics Mapping: An Image-space Technique for Real-time Caustics.” To appear in IEEE Transactions on Visualization and Computer Graphics (TVCG).

paper / project page


Breaking Blog Silence

January 4, 2008 at 2:03 am (Journal)

It’s been a while since I’ve posted anything here. I’ve had more time than usual to explore my interests at work, so I’ve had less need to do much reading at home. I can’t post about things I’m doing for work, so in the meantime the blog must suffer. Take that, blog. I’ve updated my website, so feel free to look here. I hope to start updating more frequently. If you’ve read anything cool lately, drop me an email.
