Horizon Split Ambient Occlusion

February 27, 2008 at 3:55 am (Game techniques, Presentations)

I have so much I want to write about what I saw at I3D and GDC that sitting down and doing it seems daunting. I will try and post my thoughts over the next few days.

One interesting poster at I3D (and also a short talk at GDC) was an extension to SSAO. The idea is to calculate a piece-wise linear approximation of the horizon, à la horizon mapping. This is achieved by sampling the depth values along m steps (they used 8) in n equally spaced directions (8 again) in the tangent frame. At each depth sample, you update the horizon value if the current depth sample is higher than the current estimate of the horizon in that direction. Sampling in this fashion reduces over-occlusion. This is actually very similar to the approximation to AO that Dachsbacher and Tatarchuk described in their poster at I3D last year, “Prism Parallax Occlusion Mapping with Accurate Silhouette Generation”. All of the authors of the poster are NVIDIA guys, and they announced that they will release a whitepaper describing their method.
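To make the idea concrete, here is my own sketch of the horizon-marching step (not the authors' code; function and parameter names are made up): march the depth buffer in a few directions, keep only the highest horizon angle seen per direction, and convert that to an occlusion estimate.

```python
import math

def horizon_ssao(depth, px, py, num_dirs=8, num_steps=8, step_px=2):
    """Estimate ambient occlusion at pixel (px, py): march num_steps
    samples along num_dirs equally spaced directions, keeping the
    highest horizon angle seen per direction. depth[y][x] holds an
    eye-space height; larger values mean a higher potential occluder."""
    h, w = len(depth), len(depth[0])
    base = depth[py][px]
    occlusion = 0.0
    for d in range(num_dirs):
        ang = 2.0 * math.pi * d / num_dirs
        dx, dy = math.cos(ang), math.sin(ang)
        horizon = 0.0  # tangent of the horizon angle; 0 means flat
        for s in range(1, num_steps + 1):
            sx = int(round(px + dx * s * step_px))
            sy = int(round(py + dy * s * step_px))
            if not (0 <= sx < w and 0 <= sy < h):
                break
            rise = depth[sy][sx] - base
            # only ever raise the horizon; this is what limits over-occlusion
            horizon = max(horizon, rise / (s * step_px))
        occlusion += math.atan(horizon) / (math.pi / 2.0)
    return 1.0 - occlusion / num_dirs  # 1 = fully open, 0 = fully occluded
```

A flat depth buffer yields no occlusion, while a pixel next to a tall wall darkens, since several directions see a raised horizon.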


2007: The year SSAO broke

February 10, 2008 at 4:39 pm (Demos, Game techniques)

(image from RGBA demo kindernoiser)

One of my very first posts on this blog discussed the image space ambient occlusion paper by Shanmugam et al. That post has more hits than any other entry I’ve made here. So I thought I would take a few minutes and survey the state of the art in screen space ambient occlusion.

CryTek gave a presentation at SIGGRAPH 2007 in the Real-time Rendering course. They didn’t reveal any exact implementation details, but did say that they use only the depth buffer from their z pre-pass (unlike the Shanmugam paper, which uses depth and normals). They additionally noted that they use a per-pixel rotated disc of sample offsets to reduce sampling artifacts, a technique commonly used to reduce sampling artifacts in shadow mapping.
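For reference, the per-pixel rotated disc trick can be sketched like this (a CPU-side illustration with a made-up integer hash; on the GPU the per-pixel angle usually comes from a small tiled noise texture instead):

```python
import math

def rotated_disc_offsets(base_offsets, px, py):
    """Rotate a fixed disc of 2D sample offsets by a per-pixel
    pseudo-random angle, trading regular banding artifacts for
    less objectionable high-frequency noise."""
    # cheap deterministic integer hash of the pixel coords -> [0, 2*pi)
    h = (px * 374761393 + py * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    angle = (h / 0xFFFFFFFF) * 2.0 * math.pi
    c, s = math.cos(angle), math.sin(angle)
    # standard 2D rotation applied to every offset in the disc
    return [(c * ox - s * oy, s * ox + c * oy) for ox, oy in base_offsets]
```

Since the rotation is a pure rotation, sample distances from the center are preserved, and the same pixel always gets the same rotation, which keeps the noise stable frame to frame for a static camera.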

Iñigo Quilez (of RGBA demoscene group, yay) has created a website dedicated to his use of SSAO in the RGBA demo kindernoiser. One unique contribution is that he suggests doing a post-SSAO blur to lessen sampling artifacts. He has given by far the most detail on the implementation of his technique. A thread following some of the early results he was getting is at gamedev. He also provides an analytic solution for the ambient occlusion due to a sphere here.
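To give a flavor of what an analytic sphere occluder looks like, here is the standard small-sphere approximation (cosine term times the sphere's approximate solid angle). This is my own sketch, not necessarily the exact closed form on his page:

```python
import math

def sphere_occlusion(pos, normal, sph_center, sph_radius):
    """Cosine-weighted occlusion of the hemisphere above `pos` by a
    sphere, using the small-sphere approximation: the sphere's solid
    angle is taken as roughly r^2 / d^2 and weighted by the cosine of
    the angle between the surface normal and the direction to it."""
    di = [c - p for c, p in zip(sph_center, pos)]
    d2 = sum(x * x for x in di)
    d = math.sqrt(d2)
    n_dot_dir = sum(n * x for n, x in zip(normal, di)) / d
    return max(0.0, n_dot_dir) * (sph_radius * sph_radius) / d2
```

A unit sphere floating two units directly above a surface point occludes a quarter of the cosine-weighted hemisphere by this estimate, and a sphere below the surface contributes nothing.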

Megan Fox developed a technique which she refers to as “Crease shading”. It differs very little from the original Shanmugam technique. However, she provides an excellent breakdown of the technique and some comparison screenshots.

A tweak that I implemented for SSAO is to perform the ambient occlusion computation on a lower resolution texture and then upsample using a boundary-respecting filter. I suggested on this blog (here) that the bilateral upsampling technique works well. I have received email from a few people who have tried it and are getting pretty good results.
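A sketch of that upsampling step (my own minimal version, assuming a half-resolution AO buffer and using depth similarity as the edge-stopping weight):

```python
import math

def bilateral_upsample(ao_lo, depth_lo, depth_hi, x, y, sigma_z=0.1):
    """Upsample one high-res pixel (x, y) from a half-res AO buffer.
    The four nearest low-res taps are blended with bilinear weights
    multiplied by a depth-similarity weight, so AO does not bleed
    across depth discontinuities. Buffers are 2D lists indexed [y][x];
    depth_hi is exactly twice the resolution of ao_lo / depth_lo."""
    lx, ly = (x - 0.5) / 2.0, (y - 0.5) / 2.0  # position in low-res space
    x0, y0 = int(math.floor(lx)), int(math.floor(ly))
    fx, fy = lx - x0, ly - y0
    d_ref = depth_hi[y][x]
    num = den = 0.0
    for j in (0, 1):
        for i in (0, 1):
            sx = min(max(x0 + i, 0), len(ao_lo[0]) - 1)
            sy = min(max(y0 + j, 0), len(ao_lo) - 1)
            w_bilinear = (fx if i else 1.0 - fx) * (fy if j else 1.0 - fy)
            # taps whose depth disagrees with the high-res pixel get
            # exponentially down-weighted, preserving silhouettes
            w_depth = math.exp(-abs(depth_lo[sy][sx] - d_ref) / sigma_z)
            w = w_bilinear * w_depth
            num += w * ao_lo[sy][sx]
            den += w
    if den <= 0.0:  # every tap across a depth edge: fall back to nearest
        return ao_lo[min(max(y0, 0), len(ao_lo) - 1)][min(max(x0, 0), len(ao_lo[0]) - 1)]
    return num / den
```

In smooth regions this degenerates to plain bilinear upsampling; at a depth edge, the taps on the far side of the edge effectively drop out and the AO value from the correct surface wins.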

So… a whole lot of people were very interested in SSAO. Rightly so. Approximating an expensive technique like dynamic ambient occlusion (which is itself an approximation!) in a geometry-independent manner is a valuable tool. Of course, it has some serious drawbacks, such as not being able to account for occlusion from back-facing polygons and causing over-occlusion, to name a few. If anybody knows of any other implementations/discussions of SSAO anywhere on the web, please post them here.


Presentations from Gamefest 2007

October 14, 2007 at 5:01 pm (Game techniques, Presentations)

Slides and audio from Microsoft’s Gamefest 2007 are available. The presentations cover a gamut of topics: art, audio, graphics, and researchy-type stuff from MSR. Check it out here.


NPR style of Team Fortress 2

September 19, 2007 at 2:27 am (Game techniques, Papers)

(image: the Heavy weapons guy)

While the gaming world waits patiently for the release of Valve’s Team Fortress 2, Jason Mitchell et al. have been releasing a deluge of material on the rendering style of the game. While there is nothing especially novel about the techniques, I think the paper is an interesting exercise in authoring a lighting model to achieve a desired look. It helps that the trailer videos are fun to watch.

Illustrative Rendering in Team Fortress 2, Mitchell et al. NPAR 2007.

Rendering techniques featurette with commentary


Just Blur and Add Noise

August 29, 2007 at 2:07 pm (Game techniques, Presentations)

All of the hoopla regarding the game LittleBigPlanet has reminded me of the excellent talk that Alex Evans, co-founder of game creator Media Molecule, gave last year at the Advanced Real-time Rendering for 3D Games course at SIGGRAPH 2006. There are good presentation slides in PDF form at the ATI Developer site: here. If you look at the slide pictures you can see early renders of LittleBigPlanet characters, with unfortunate spheres placed on their heads so as not to spoil the game (this was about nine months before the game was announced).

Anyway, he discusses a few techniques that he tried for achieving global illumination-like effects in a “small world” type of environment, all of which were implemented on a Mobility Radeon 9800; nothing is too high-tech.

(screenshots from Alex’s presentation)


Tom Forsyth simplifies things for you

May 5, 2007 at 4:10 am (Game techniques)

Tom F, author of many great GDC presentations and online articles, posted a great little article and demo a month or so ago. Tom has taken three oft-used “hacky” lighting models (wrap lighting, bidirectional lighting, and hemispherical lighting) and generalized them into one lighting model. This is a great article for anyone unfamiliar with some of the non-Lambertian lighting models that game developers frequently use, and a cool hack to get a little variation out of one lighting model.
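To make the unification idea concrete, here is a sketch of how those models reduce to scale-and-bias terms on N·L. The parameterization below is my own illustration, not necessarily Tom's exact formulation:

```python
def scale_bias_light(n_dot_l, scale, bias):
    """One generalized term: saturate(N.L * scale + bias).
    Wrap, bidirectional, and hemisphere lighting all fall out of
    one or two terms of this shape with different constants."""
    return min(1.0, max(0.0, n_dot_l * scale + bias))

def wrap_light(n_dot_l, w):
    """Wrap lighting, saturate((N.L + w) / (1 + w)), rewritten as a
    scale-and-bias: scale = 1/(1+w), bias = w/(1+w)."""
    return scale_bias_light(n_dot_l, 1.0 / (1.0 + w), w / (1.0 + w))

def hemisphere_light(n_dot_l, sky, ground):
    """Hemisphere lighting: blend ground and sky colors using the
    scale-and-bias term with scale = 0.5, bias = 0.5."""
    t = scale_bias_light(n_dot_l, 0.5, 0.5)
    return ground + (sky - ground) * t
```

With w = 0, wrap lighting degenerates to plain clamped Lambertian, which is the sanity check that the generalization contains the standard model.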


Danger Planet by Fantasy Lab

May 2, 2007 at 11:00 pm (Game techniques)

(screenshot from Danger Planet)

Fantasy Lab, the company that released that Rhinork global illumination demo last summer, has released a screenshot from the first game utilizing their Fantasy Engine, Danger Planet. The company president is Mike Bunnell, the guy who wrote the GPU Gems 2 article on modeling light transfer and occlusion based on disc-to-disc form factors. The technique they are using for indirect light is no doubt based on the method in GPU Gems, but likely with many improvements. A single image is sort of a disappointment, though. I’m dying to see a video, preferably with some ray gun blast lighting!

Fantasy Lab


Real-time Ambient Maps

March 19, 2007 at 1:09 am (Game techniques)

I visited the CryTek booth at GDC 2007 and watched their CryTek Engine tech demo trailer for about the 10th time. They have a lot of impressive looking things going on there, but one thing that I hadn’t really thought about until recently was their “Real-time Ambient Maps” which are demonstrated in this video: http://www.youtube.com/watch?v=VFtATu5gt3k

I don’t think a lot of people realize at first what exactly this feature is, beyond the fact that the video looks good. While the point-light soft shadows are nice, you need to look past them. The “ambient” part should be a tip-off. When doing surface lighting, the ambient term is an approximation to global illumination: the amount of non-direct lighting incident on the surface. Ambient occlusion refers to determining the fraction of the hemisphere above a surface position that is occluded. Simply modulating an ambient lighting amount by the unoccluded fraction adds a very effective approximation to global illumination. Therefore, to get a decent approximation that accounts for multiple-bounce lighting, you have to estimate the amount of light bouncing around the room.

Of course, I have no idea exactly what they are doing and the rest of this is speculation. But I would guess that in order to get an approximation of the ambient lighting term, they are rendering cube maps, in artist-defined positions, of the nearby surfaces shaded with direct lighting. Each texel in that cube map can be treated like a little light source. To get the amount of lighting incident on that cube map position, you can average all of the texel values in the cube map. Of course, each texel should be weighted by its distance from the cube map center to account for attenuation. The resulting value is a quick and dirty approximation of the ambient lighting term in the room! Assuming that these scenes are spatially subdivided by some structure such as a BSP tree, portals, etc., the cube map rendering could be accelerated by rendering only the lit surfaces in nearby nodes.
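A toy version of that weighted average (entirely speculative, matching the guess above; each texel carries its shaded radiance, the distance to the cube map center, and its solid angle):

```python
def ambient_from_cubemap(texels):
    """Collapse a direct-lit cube map into one scalar ambient value.
    texels is a list of (radiance, distance, solid_angle) tuples; each
    texel is weighted by its solid angle and an inverse-square falloff
    on distance, so nearby bright surfaces dominate the estimate."""
    num = den = 0.0
    for radiance, dist, solid_angle in texels:
        w = solid_angle / max(dist * dist, 1e-4)  # distance attenuation
        num += w * radiance
        den += w
    return num / den if den > 0.0 else 0.0
```

With two equally bright texels at different distances the result is unchanged, but when only the near texel is lit, it dominates the average, which is the attenuation behavior the weighting is after.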

(image: direct vs. ambient lighting comparison)

Note that this is only an ambient term and has no directional properties. It’s also possible to convert that cube map into a spherical harmonic representation of irradiance, which could in turn be used to light nearby surfaces. I found this poster, which describes doing something similar.
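For the SH route, projecting the cube map's directional samples onto the first two SH bands is straightforward. A minimal sketch (standard real SH basis constants; how the sample directions and solid angles are generated from cube map texels is left to the caller):

```python
def project_sh_l1(samples):
    """Project radiance samples onto the first two SH bands (4 coeffs).
    samples is a list of ((x, y, z) unit direction, value, solid_angle).
    Uses the standard real SH basis: Y00 = 0.282095 and
    Y1m = 0.488603 * (y, z, x) respectively."""
    sh = [0.0, 0.0, 0.0, 0.0]
    for (x, y, z), v, solid_angle in samples:
        sh[0] += v * 0.282095 * solid_angle
        sh[1] += v * 0.488603 * y * solid_angle
        sh[2] += v * 0.488603 * z * solid_angle
        sh[3] += v * 0.488603 * x * solid_angle
    return sh
```

A constant environment projects entirely into the DC coefficient, with the three linear coefficients cancelling to zero for a symmetric sample set, which is a quick way to validate the projection.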

Anyway, just thought I’d post something about this because I’ve seen several posts on forums wondering what’s going on here. I’d love to hear anyone else’s thoughts.
