Iñigo Quílez explains Slisesix demo

September 25, 2008 at 10:35 am (Demos, Presentations)

Iñigo Quílez recently posted a presentation on the techniques used in the “Slisesix” rgba demo. By procedurally generating a distance field representation of the scene, a fast, space-skipping raycast can be performed. A distance field representation also enables other tricks, such as fast AO and soft shadows, because information such as the distance and direction of the closest occluders is implicit in the representation. Alex Evans talked about using this type of technique (see Just Blur and Add Noise), but in his proposed implementation the distance field came from rasterizing geometry into slices and blurring outward.
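
To make the idea concrete, here is a minimal CPU-side sketch of sphere tracing against a distance field (my own illustration, not iq's code; the scene function here is just a unit sphere):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Distance field: returns the distance from p to the nearest surface.
// A single unit sphere at the origin stands in for the whole scene.
static float sceneDistance(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: step along the ray by the distance to the nearest
// surface. Nothing can be closer than that distance, so empty space is
// skipped in large strides; near surfaces the steps shrink.
static bool raycast(Vec3 origin, Vec3 dir, float maxDist, float& tHit) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        float d = sceneDistance(add(origin, scale(dir, t)));
        if (d < 0.001f) { tHit = t; return true; } // close enough: hit
        t += d;                                    // safe space-skipping step
    }
    return false;
}
```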

Presentation: Rendering Worlds with Two Triangles

Permalink 2 Comments

Building Details in a Pixel Shader

August 5, 2008 at 9:13 pm (Demos)

Humus recently posted a new demo. He renders the interior of rooms visible from the exterior using cube maps. This eliminates the need for additional geometry to represent the interior. With D3D10.1, cube map arrays are used. This allows different cube maps to be selected dynamically in the shader to add variety to the rooms rendered.

It seems to me that you could additionally transform the vector used to fetch from the cube maps, applying a random rotation around the vertical axis, to get further variation between rooms.
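
Something like this sketch (where the per-room angle would come from a hypothetical hash of the room's position):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate the cube map lookup vector around the vertical (Y) axis.
// 'angle' would be derived per room, e.g. from a hash of the room's
// grid coordinates, so each room keeps a fixed but distinct orientation.
static Vec3 rotateY(Vec3 v, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}
```

Restricting the angle to multiples of 90 degrees would keep the interior walls square to the facade while still mixing up which wall you see.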

Permalink Leave a Comment

2007: The year SSAO broke

February 10, 2008 at 4:39 pm (Demos, Game techniques)

(image from RGBA demo kindernoiser)

One of my very first posts on this blog discussed the image space ambient occlusion paper by Shanmugam et al. That post has more hits than any other entry I’ve made here. So I thought I would take a few minutes and survey the state of the art in screen space ambient occlusion.

CryTek gave a presentation at SIGGRAPH 2007 in the Real-time Rendering course. They didn't give exact implementation details, but did reveal that they use only the depth buffer from their z pre-pass (unlike the Shanmugam paper, which uses depth and normals). They also noted that they use a per-pixel rotated disc of sample offsets to reduce sampling artifacts, a trick commonly used for the same purpose in shadow mapping.
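
Since they didn't publish code, here is just the rotated-disc piece in isolation, as I'd sketch it (the offsets and hash are illustrative, not theirs):

```cpp
#include <cmath>
#include <cstdint>

struct Vec2 { float x, y; };

// A fixed disc of sample offsets (values illustrative).
static const Vec2 kDisc[4] = {
    { 0.7f,  0.2f}, {-0.4f,  0.6f}, {-0.6f, -0.5f}, { 0.3f, -0.8f}
};

// Cheap per-pixel hash -> rotation angle. Every pixel rotates the same
// disc differently, so structured banding turns into high-frequency
// noise that a small blur can remove.
static float hashAngle(uint32_t x, uint32_t y) {
    uint32_t h = x * 73856093u ^ y * 19349663u;
    return (h % 1024u) * (6.2831853f / 1024.0f);
}

static Vec2 rotatedSample(uint32_t px, uint32_t py, int i) {
    float a = hashAngle(px, py), c = std::cos(a), s = std::sin(a);
    Vec2 o = kDisc[i];
    return { c * o.x - s * o.y, s * o.x + c * o.y };
}
```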

Iñigo Quílez (of RGBA demoscene group, yay) has created a website dedicated to his use of SSAO in the RGBA demo kindernoiser. One unique contribution is that he suggests doing a post-SSAO blur to lessen sampling artifacts. He has given by far the most detail on the implementation of his technique. A thread that follows some of the early results he was getting is at gamedev. He also provides an analytic solution for the ambient occlusion due to a sphere here.
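
In the far-field case, that sphere result reduces to a formula small enough to quote; here is a sketch of the simplified approximation (not necessarily the exact expression he derives, so see his page for the real thing):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Approximate ambient occlusion at a point (pos, normal n) due to a
// sphere of radius r centered at c. This is the standard small-solid-
// angle approximation: the sphere subtends roughly r^2/len^2 of the
// hemisphere, weighted by the cosine with the surface normal. Only
// valid when the sphere is distant relative to its radius.
static float sphereOcclusion(Vec3 pos, Vec3 n, Vec3 c, float r) {
    Vec3 d = { c.x - pos.x, c.y - pos.y, c.z - pos.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float cosTheta = (n.x * d.x + n.y * d.y + n.z * d.z) / len;
    return std::max(cosTheta, 0.0f) * (r * r) / (len * len);
}
```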

Megan Fox developed a technique which she refers to as “Crease shading”. It differs very little from the original Shanmugam technique. However, she provides an excellent breakdown of the technique and some comparison screenshots.

A tweak that I implemented for SSAO is to perform the ambient occlusion computation on a lower-resolution texture and then upsample using a boundary-respecting filter. I suggested on this blog (here) that the bilateral upsampling technique works well. I have received email from a few people who have tried it and are getting pretty good results.
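
A sketch of the weighting I mean, assuming the four low-res AO taps, their depths, and the bilinear weights are already fetched (names and the falloff constant are illustrative):

```cpp
#include <cmath>

// Bilateral upsampling: blend the four low-res AO samples with
// bilinear weights, but scale each weight by how closely the low-res
// depth matches the full-res depth at this pixel. Samples lying across
// a depth boundary get tiny weights, so AO doesn't bleed across
// silhouettes.
static float bilateralUpsample(const float ao[4],      // 4 low-res AO taps
                               const float loDepth[4], // their depths
                               const float bilin[4],   // bilinear weights
                               float hiDepth) {        // full-res depth
    float sum = 0.0f, wsum = 0.0f;
    for (int i = 0; i < 4; ++i) {
        float dz = hiDepth - loDepth[i];
        float w  = bilin[i] / (0.0001f + std::fabs(dz)); // depth similarity
        sum  += w * ao[i];
        wsum += w;
    }
    return sum / wsum;
}
```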

So… a whole lot of people were very interested in SSAO. Rightly so. Approximating an expensive technique like dynamic ambient occlusion (which is itself an approximation!) in a geometry-independent manner is a valuable tool. Of course, it has some serious drawbacks, such as not being able to account for occlusion from back-facing polygons and causing over-occlusion, to name a few. If anybody knows of any other implementations/discussions of SSAO anywhere on the web, please post them here.

Permalink 14 Comments

Light Indexed Deferred Rendering

January 24, 2008 at 3:19 pm (Demos, Papers)


Damian Trebilco posted a paper and demo of an interesting approach to deferred rendering on the Beyond3D forums. Instead of rendering out several buffers of geometry information (gbuffers), the author renders the light volumes with unique IDs into a buffer. Standard forward rendering is then used, and the per-pixel light indices are used to look up light information. The great benefit of doing it this way is that you avoid all the bandwidth of outputting those gbuffers. Once you turn on MSAA, those already large buffers become increasingly costly. Another benefit is that handling transparent surfaces becomes much easier.

With this approach, the worry of packing geometry and material information into as few gbuffers as possible is replaced with the worry of storing your light IDs and handling the max number of lights that might overlap one pixel. There are a few other gotchas, but you should read the paper for a comparison with standard forward rendering and traditional deferred rendering. Worth a read!
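
The storage idea at the heart of it is tiny. Here is one obvious way to pack four 8-bit light IDs into a single RGBA8-style value (a sketch, not necessarily the paper's exact scheme; it discusses its own packing trade-offs):

```cpp
#include <cstdint>

// Light-indexed idea in miniature: instead of fat gbuffers, each pixel
// stores up to four 8-bit light IDs (one per RGBA8 channel). The
// forward pass unpacks them and looks the lights up in a table.
static uint32_t packLightIDs(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
    return uint32_t(a) | (uint32_t(b) << 8) |
           (uint32_t(c) << 16) | (uint32_t(d) << 24);
}

static void unpackLightIDs(uint32_t packed, uint8_t out[4]) {
    for (int i = 0; i < 4; ++i)
        out[i] = uint8_t((packed >> (8 * i)) & 0xFFu);
}
```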

Permalink 3 Comments

New farbrausch demo: Momentum (fr-059)

October 9, 2007 at 2:32 am (Demos)


I really like the use of image-based techniques here. Creating refractions and shadows in the scene by approximating surfaces with planes is cool. Add in tons of fluid-looking geometry, a good soundtrack, and some slick production and you've got yourself a hot little demo.


Demo and video available at pouet.


Permalink Leave a Comment

Volumetric particle lighting

October 4, 2007 at 3:33 am (Demos, Presentations, Tips & Tricks)


At SIGGRAPH this year, there was a talk by the AMD/ATI Demo Team about the Ruby: Whiteout demo. Attendance was disappointingly light, but the talk was filled to the brim with GPU tips and tricks, especially in the lighting department. This stuff hasn't been presented anywhere else and I haven't seen much discussion on the web, so I decided to highlight a few of the key topics.

One of the really impressive subjects covered was volumetric lighting (with respect to particles and hair). Modeling light interaction with participating media is a notoriously difficult problem (see subsurface scattering, volumetric shadows/light shafts), and many surface approximations have been found. However, dealing with a heterogeneous volume of varying density, as is the case with a cloud of particles or hair, is still daunting. The method involves finding the distance between the first surface seen from the viewpoint of the light and the exit surface (the thickness), and also accumulating the particle density between those surfaces. Depending on how you decide to calculate this thickness and particle density, it could take two passes. They present a method for calculating it in one pass.

By outputting z in the red channel, 1-z in the green channel, and particle density in alpha, setting the RGB blend mode to min and the alpha blend mode to additive, and rendering all particles from the viewpoint of the light, you get the thickness and density in one pass. The same method can be applied to meshes such as hair. It should be noted that this information can also be used to cast shadows onto external objects.
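
Here is a CPU-side sketch of what that blend state computes per texel (my own illustration of the description above):

```cpp
#include <algorithm>

// One light-space render target texel. RGB uses MIN blending, alpha
// uses ADDITIVE blending, exactly as described above. Assumes at least
// one particle covers the texel.
struct Texel {
    float r = 1.0f;  // min of particle depths z -> entry depth
    float g = 1.0f;  // min of (1 - z)           -> 1 - exit depth
    float a = 0.0f;  // summed particle density
};

// "Blend" one particle fragment into the texel (z in [0,1]).
static void blendParticle(Texel& t, float z, float density) {
    t.r = std::min(t.r, z);        // nearest depth survives the MIN
    t.g = std::min(t.g, 1.0f - z); // farthest depth survives as 1 - z
    t.a += density;                // density accumulates additively
}

// After all particles: thickness along the light ray.
static float thickness(const Texel& t) {
    return (1.0f - t.g) - t.r;     // exit depth minus entry depth
}
```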

The presenters also discuss a few other tricks. These include rendering depth offsets on the particles and blurring the composited depth before performing the thickness calculation discussed above to remove discontinuities. Also, for handling shadows from non-particle objects, they suggest using standard shadow mapping per-vertex on the particles. I think I originally saw this idea mentioned by Lutz Latta in one of his particle system articles or presentations.

I might dredge some other topics from the presentation later on, but everyone should check out the slides here.

Permalink 7 Comments

Water Rendering with Projected Grid

August 25, 2007 at 10:25 pm (Demos, Papers)


Abstract

This thesis will examine how a large water surface can be rendered in an efficient manner using modern graphics hardware. If a non-planar approximation of the water surface is required, a high-resolution polygonal representation must be created dynamically. This is usually done by treating the surface as a height field. To allow spatial scalability, different methods of “Level-Of-Detail” (LOD) are often used when rendering said height field. This thesis presents an alternative technique called “projected grid”. The intent of the projected grid is to create a grid mesh whose vertices are evenly spaced, not in world space (the traditional way) but in post-perspective camera space. This will deliver a polygonal representation that provides spatial scalability along with high relative resolution without resorting to multiple levels of detail.

This thesis was written in 2004 but never ended up getting published anywhere. I've seen it referenced a few times on message boards, etc., and in a few papers, most recently the Wave Particles paper at SIGGRAPH 2007. It's a great idea that I've come back to a few times, but I always have trouble dredging up the paper because of the vague title “Real-time Water Rendering”. For my own reference, and maybe yours too, I'm posting the link here.
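
The core construction is simple enough to sketch from the abstract (my own illustration; the thesis actually uses a separate “projector” camera and careful handling of rays that miss the water plane):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; }; // column-major, OpenGL style

// Transform a point by a 4x4 matrix and divide by w.
static Vec3 project(const Mat4& M, Vec3 p) {
    const float* a = M.m;
    float x = a[0]*p.x + a[4]*p.y + a[8]*p.z  + a[12];
    float y = a[1]*p.x + a[5]*p.y + a[9]*p.z  + a[13];
    float z = a[2]*p.x + a[6]*p.y + a[10]*p.z + a[14];
    float w = a[3]*p.x + a[7]*p.y + a[11]*p.z + a[15];
    return { x / w, y / w, z / w };
}

// Projected grid: grid corner (u,v) in [0,1]^2 is placed in NDC, its
// near and far points are pulled back through invViewProj, and the
// resulting ray is intersected with the water plane y = 0. The mesh is
// uniform on screen, so vertex density lands where the eye sees it.
static Vec3 gridVertex(const Mat4& invViewProj, float u, float v) {
    Vec3 pNear = project(invViewProj, { 2*u - 1, 2*v - 1, -1.0f });
    Vec3 pFar  = project(invViewProj, { 2*u - 1, 2*v - 1,  1.0f });
    float t = pNear.y / (pNear.y - pFar.y); // where the ray meets y = 0
    return { pNear.x + t * (pFar.x - pNear.x), 0.0f,
             pNear.z + t * (pFar.z - pNear.z) };
}
```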

Johanson, C. “Real-time Water Rendering” Paper + images + demo + source

Permalink 2 Comments

Farbrausch demo 41: debris

May 25, 2007 at 8:56 pm (Demos)


Demoscene group Farbrausch released a new demo at Breakpoint this year, titled Debris. It is my favorite demo of all time. Tons of geometry and creativity. I know this came out a few months ago but I was just showing it to someone else and think everyone who reads this should download it and take a peek.

fr-041:debris

p.s. while you watch, don’t forget it’s all procedural

Permalink Leave a Comment

NVIDIA Human Head Demo

May 8, 2007 at 3:53 am (Demos)

A little blurb in the NVIDIA developer blog revealed the “Human Head” demo. It’s basically a rehash of ye olde image-space blurring + translucent shadow maps to simulate scattering in skin, which has been around for years (the first three Ruby demos used it, and it’s been presented at multiple conferences by ATI and others, though I believe it was originally used by George Borshukov for The Matrix Reloaded). Still, it’s pretty impressive looking. That’s mostly due to high-detail (4096×4096) captured skin images and some nice HDR lighting.
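
For the curious, the blurring half of the idea is roughly this (a bare single box-blur pass, purely illustrative; the shipping demos use more considered blur kernels):

```cpp
// Flavor of texture-space diffusion: light the mesh into an irradiance
// texture (in UV space), blur it, then sample the blurred result when
// shading. The blur stands in for light diffusing under the skin.
// Horizontal pass only; run it again swapping axes for the vertical.
static void blurIrradiance(const float* src, float* dst,
                           int w, int h, int radius) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dx = -radius; dx <= radius; ++dx) {
                int sx = x + dx;
                if (sx < 0 || sx >= w) continue; // clamp at texture edges
                sum += src[y * w + sx];
                ++n;
            }
            dst[y * w + x] = sum / n;
        }
}
```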

Video here.

Speaking of translucent shadow maps, I think I have some code lying around where I implemented this. I think I’ll put it in a RenderMonkey project or a standalone demo and stick it up here sometime this week.

Permalink Leave a Comment