Eikonal Rendering: Efficient Light Transport in Refractive Objects

May 25, 2007 at 9:28 pm (Papers)

Eikonal rendering screenshot

From ACM SIGGRAPH 2007, “Eikonal Rendering: Efficient Light Transport in Refractive Objects” by Ivo Ihrke, Gernot Ziegler, Art Tevs, Christian Theobalt (Max-Planck-Institut für Informatik), Marcus Magnor (Technische Universität Braunschweig), and Hans-Peter Seidel (Max-Planck-Institut für Informatik).

The idea: when light propagates through a material, it bends and scatters according to the properties (refractive indices, scattering, extinction, etc.) throughout its volume. To properly render an object as participating media (not just surface shading as with a BRDF), you have to use an expensive simulation such as photon mapping to capture all of the correct behavior. This paper takes the route of precomputing how the light bends and scatters, storing that information in a volume, and evaluating it at runtime. The authors compute the wavefront of the light advancing through the material by solving a simplified form of the eikonal equation. I haven’t exactly followed how these simplifications are made, partly because I haven’t totally absorbed the paper yet and partly because my diff eq knowledge is a bit limited. But it appears they are able to compute this volume much faster than any GPU solver for the full eikonal equation that I’ve seen so far (Won-Ki Jeong has done some interesting work in this area). The only drawback to this approach is that it takes a few seconds to recompute the light propagation whenever you move the light. Oh well, it’s probably going to be a couple of years before this light propagation can be computed in real time.
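
For reference (my own gloss, not the paper’s derivation): the eikonal equation relates the optical path length S of the advancing wavefront to the spatially varying refractive index n, and the standard ray-bending ODE of geometric optics follows from it:

```latex
% Eikonal equation: wavefronts are the level sets of S
|\nabla S(\mathbf{x})| = n(\mathbf{x})

% Ray equation of geometric optics (s = arc length along the ray)
\frac{d}{ds}\!\left( n \frac{d\mathbf{x}}{ds} \right) = \nabla n
```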

Figure: ray bending

Project page w/paper.



Farbrausch demo 41: debris

May 25, 2007 at 8:56 pm (Demos)

Debris

Demoscene group Farbrausch released a new demo at Breakpoint this year, titled Debris. It is my favorite demo of all time: tons of geometry and creativity. I know it came out a few months ago, but I was just showing it to someone else and think everyone who reads this should download it and take a peek.

fr-041:debris

p.s. while you’re watching, don’t forget it’s all procedural


Real-time Volume Graphics SIGGRAPH 2004 Course

May 19, 2007 at 10:38 pm (Uncategorized)

I think SIGGRAPH course notes are invaluable, but they aren’t something I remember to look for when I’m passively reading up on random graphics topics. Anyway, the volume rendering course from SIGGRAPH 2004 is one that I’ve referred back to many times. The visual quality of volume rendering was increasing by leaps and bounds around this time, thanks to increasingly flexible GPUs and lots of great research. The notes contain a wealth of information on GPU ray casting, on building beautiful transfer functions (a mapping from volume properties to color and opacity), and on light transport in participating media in general. Worth a read, for sure.
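
As a toy illustration of what a transfer function does (my own sketch, not code from the course; on the GPU this is usually a 1D texture lookup inside the ray-casting shader):

```cpp
#include <algorithm>
#include <array>

struct RGBA { float r, g, b, a; };

// Map a scalar volume sample (e.g. density in [0,1]) to color and opacity
// via a 256-entry lookup table -- the "transfer function".
RGBA applyTransferFunction(const std::array<RGBA, 256>& table, float density)
{
    int i = static_cast<int>(std::clamp(density, 0.0f, 1.0f) * 255.0f);
    return table[i];
}
```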

Course notes and slides!


Compute the thickness of a non-convex object

May 19, 2007 at 10:24 pm (Tips & Tricks)

Knowing the thickness of an object can be useful for things like computing single scattering of light through a participating medium (think fog or a translucent material). The most obvious approach is to render out the depth of the back faces, render out the depth of the front faces, and use the difference between the two as the thickness of the object. This is fine and dandy for convex objects, but not so practical for more complicated shapes; look at a simple example of such an object in the figure below. A handy way to compute the thickness regardless of the object’s shape is to sum up the depths of all front faces at a pixel, sum up the depths of all back faces, and subtract the former from the latter. You don’t have to resort to something like depth peeling to get these depths; just turn on additive blending when you render out depth for the front and back faces. This was shared with me by my friend Thorsten Scheuermann, but it can originally be found in the NVIDIA fog volume SDK sample.
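
Here’s a minimal sketch of the idea in OpenGL, assuming a floating-point render target (the sums can exceed 1.0) and a hypothetical renderSceneDepth() that draws the object with a shader writing linear eye-space depth to the red channel:

```cpp
#include <GL/gl.h>

// Hypothetical helper: draws the object with a shader that writes
// linear eye-space depth into the red channel.
void renderSceneDepth();

// Accumulates per-pixel thickness = sum(back-face depths) - sum(front-face
// depths) into a float render target cleared to zero.
void renderThickness()
{
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glDisable(GL_DEPTH_TEST);            // keep every fragment, not just the nearest
    glEnable(GL_CULL_FACE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);

    glBlendEquation(GL_FUNC_ADD);        // dst + src
    glCullFace(GL_FRONT);                // draw back faces: add their depths
    renderSceneDepth();

    glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);  // dst - src
    glCullFace(GL_BACK);                 // draw front faces: subtract their depths
    renderSceneDepth();
}
```

Using GL_FUNC_REVERSE_SUBTRACT for the front-face pass folds the subtraction into blending, so the target ends up holding sum(back depths) minus sum(front depths), i.e. the thickness along the view ray, with no extra pass.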


Thickness diagram


Physically-Based Reflectance For Games

May 8, 2007 at 1:51 pm (Uncategorized)

The SIGGRAPH 2006 course notes and slides are available online. It’s a bit basic, but good for anyone starting out in graphics.

Course Description

“This course discusses the practical implementation of physically-principled reflectance models in interactive graphics and video games, in current practice as well as upcoming technologies. The course begins with the visual phenomena important to the perception of reflectance in real-world materials, which it uses as background for the underlying theory and derivation of common reflectance models. After introducing the current game development pipeline, from content creation to rendering, the course then discusses rendering techniques for implementing reflectance models in games — with emphasis on real-world trade offs such as shader performance, content creation efficiency, resource size considerations, and overall rendering quality. The course will help a researcher understand constraints in the game development pipeline and it will help a game developer understand the physical phenomena underlying reflectance models.”


NVIDIA Human Head Demo

May 8, 2007 at 3:53 am (Demos)

A little blurb on the NVIDIA developer blog revealed the “Human Head” demo. It’s basically just a rehash of ye olde image-space blurring plus translucent shadow maps to simulate scattering in skin, an approach that has been around for years (the first three Ruby demos used it, and it’s been presented at multiple conferences by ATI and others, though I believe it was originally used by George Borshukov for The Matrix Reloaded). Despite that, it’s pretty impressive looking, mostly due to the high-detail (4096×4096) captured skin images and some nice HDR lighting.

Video here.

Speaking of translucent shadow maps, I think I have some code lying around where I implemented this. I’ll put it in a RenderMonkey project or a standalone demo and stick it up here sometime this week.


Tom Forsyth simplifies things for you

May 5, 2007 at 4:10 am (Game techniques)

Tom F, author of many great GDC presentations and online articles, posted a great little article and demo a month or so ago. Tom has taken three oft-used “hacky” lighting models (wrap lighting, bi-directional lighting, and hemispherical lighting) and generalized them into one lighting model. This is a great article for anyone unfamiliar with the non-Lambertian lighting models that game developers frequently use, and a cool hack for getting a little variation out of one lighting model.
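
For anyone who hasn’t seen these, here’s a minimal sketch of the simplest of the three, wrap lighting, in its standard form (this is not Tom’s generalized model, just the classic hack):

```cpp
#include <algorithm>

// Classic "wrap" lighting: lets light wrap around the terminator.
// n_dot_l is the usual Lambert term dot(N, L); wrap is in [0, 1].
// wrap = 0 reduces to plain Lambertian lighting.
float wrapLighting(float n_dot_l, float wrap)
{
    return std::max((n_dot_l + wrap) / (1.0f + wrap), 0.0f);
}
```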


Danger Planet by Fantasy Lab

May 2, 2007 at 11:00 pm (Game techniques)

Danger Planet

Fantasy Lab, the company that released the Rhinork global illumination demo last summer, has released a screenshot from the first game using its Fantasy Engine, Danger Planet. The company’s president is Mike Bunnell, the guy who wrote the GPU Gems 2 article on modeling light transfer and occlusion with disc-to-disc form factors. The technique they are using for indirect light is no doubt based on the method in GPU Gems, but likely with many improvements. A single image is sort of a disappointment, though; I’m dying to see a video, preferably with some ray gun blast lighting!

Fantasy Lab


Fast Scene Voxelization and Applications

May 1, 2007 at 4:31 am (Papers)

Right now (well not literally right now, it’s a bit late), I3D 2007 is going on in Seattle. I already talked a bit about a paper from this year’s conference, but the event reminded me of my favorite paper from last year’s conference:

Fast Scene Voxelization and Applications by Elmar Eisemann and Xavier Décoret

This paper deals with converting an arbitrary polygonal scene into a voxelized representation. It’s a simple idea with awesome potential. Coming from a volume rendering background, I’m used to going the other way, converting voxels to polygons à la marching cubes/tetrahedra, and I didn’t realize at first glance the potential of discretizing a complex scene into a uniform grid.

The idea is simple and is implemented in a single fragment shader. The goal is to produce one RGBA8 texture (32 bits total) at the resolution of the viewport, where each pixel represents one column of voxels reaching into the scene and each bit indicates whether a polygon crosses that voxel. To create this texture, the scene is rendered once. In the fragment shader, each fragment’s depth value is used to index a bitmask texture; the result of the lookup is a 32-bit value with exactly one bit turned on, corresponding to that depth. This result is OR-blended into the final texture. After all fragments are rendered, you have your grid.
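
Here’s a minimal sketch of the CPU-side setup in OpenGL, under my own assumptions (the slice-to-bit convention, the function names, and the fragment shader itself are illustrative, not from the paper):

```cpp
#include <GL/gl.h>
#include <cstdint>

// Build the 32-entry bitmask lookup texture: texel i has exactly one of the
// 32 RGBA8 bits set. A fragment at normalized depth d (with NEAREST filtering)
// fetches texel floor(d * 32) and outputs that single-bit mask as its color.
void createBitmaskTexture(GLuint tex)
{
    uint8_t texels[32][4] = {};                 // 32 depth slices, RGBA8
    for (int i = 0; i < 32; ++i)
        texels[i][i / 8] = uint8_t(1u << (i % 8));

    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 32, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, texels);
}

// OpenGL has no OR blend equation, but color logic ops do the job on an
// RGBA8 target: framebuffer = framebuffer | fragment.
void setupOrWrites()
{
    glEnable(GL_COLOR_LOGIC_OP);
    glLogicOp(GL_OR);
}
```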

The applications covered in the paper are transmittance shadow maps, shadow volume culling, and refraction/attenuation of light. However, I think there are a lot of other uses and you could probably think of a few too.

Here is a PowerPoint slide deck that elaborates on a few details from the paper and adds some thoughts on how the algorithm might be aided by DX10.

Voxelized man
