Screen Space Global Illumination: Screen Space Gone Mad

It seems that once SSAO (Screen Space Ambient Occlusion) was discovered, the SS goodness just kept coming. Back in the day, to get AO or GI (Global Illumination) one would either pre-process it and store it in vertex colors or lightmaps (or go RNM - Radiosity Normal Maps), or use ray-tracing techniques, which are still too framerate-heavy to be used in a real-time application.

Global Illumination, in simple terms, is an approximation of the light that reaches surfaces indirectly, through bounces. As light rays hit an object, they bounce from surface to surface, and with each bounce a ray changes color and intensity based on the material or inherent color of the surface it hits. It's the same reason we can still see inside a house (with windows) even when the sun is directly above the rooftop, and the same reason rooms with light-colored paint on the walls tend to be brighter. (GI is actually the proper value of the ambient color we usually add to our lighting.)
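To make the "each bounce tints the light" idea concrete, here's a tiny sketch (the values and names are purely illustrative, not from any real shader):

// One bounce in a nutshell: the light a surface re-radiates is the light
// arriving at it, tinted by its own albedo.
float3 incomingLight = float3(1.0, 1.0, 1.0);      // white sunlight
float3 wallAlbedo    = float3(0.9, 0.3, 0.3);      // reddish wall
float3 bouncedLight  = incomingLight * wallAlbedo; // nearby surfaces get a red tint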

There are several algorithms for computing GI in 3D graphics, but they all have the same concept in mind... it's either an approximation of an approximation or just a plain approximation (hehehehhe).

Enter Screen Space Global Illumination. I've read through one implementation aptly named SSDO, or Screen Space Directional Occlusion, which is an image-based approximation of GI. I must say it's really impressive, minus some maths that goes over my head, but that's my fault. Mr. always-have-something-brilliant-in-mind Wolfgang Engel wrote a short but interesting post on his blog about a simpler extension of SSAO into an SSGI implementation. And since it fits sooooo well into a Light Pre-Pass rendering design, trying it out... was inevitable (with Agent Smith's echoes).

Once you understand how SSAO works, it's easy to extend it to get at least a single bounce of indirect illumination. By doing the same test SSAO does, it is safe to say that a particular pixel on a surface is close enough to receive radiated color from another surface if the occlusion test succeeds. Hence, by sampling the albedo color of that surface and averaging the samples, you get the average radiosity that pixel receives. The question now is: where do we get the pre-bounce intensity?

Working at Zealot Digital here in Singapore, my Lead Programmer is Alberto Demichelis, the author of the Squirrel scripting language (AAA games are using it now, btw, including one big dead-walking title). He gave me a great idea that I had overlooked in Mr. Engel's post: use the Light Accumulation buffer as the intensity of the bounce. By combining (this is the tricky part) the projected shadow term with the light accumulation and converting the result to black and white, we can use it as the radiosity intensity. Right now, what I use this value for is lerping between the sampled surface's albedo and the original albedo of the pixel that receives the bounced color.
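The combine step itself isn't in the snippet below, but a rough sketch of how that intensity could be built (lightAccumTex and shadowTex are assumed names here, not necessarily what I used) would be:

// Hedged sketch: grayscale the light accumulation sample and attenuate it
// by the projected shadow term to get the pre-bounce intensity.
float4 lightSample = tex2D(lightAccumTex, uv + offsetnoise);
float  shadowTerm  = tex2D(shadowTex, uv + offsetnoise).r;  // 0 = fully shadowed
float  intensity   = saturate(dot(lightSample.rgb, float3(0.333, 0.333, 0.333)) * shadowTerm);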


// SS Magic:
for(int i = 0; i < NUM_SAMP; i++)
{
    // here you do the usual AO generation stuff (the occlusion test)
    if(occNorm > occ_thres)
    {
        // albedo of the nearby surface that bounces light onto this pixel
        float3 sampleAlbedo = tex2D(albSamp, uv + offsetnoise).rgb;
        // sum of the light accumulation sample's channels = pre-bounce intensity
        float intensity = dot(tex2D(lightAccum, uv + offsetnoise), 1);
        // blend between the sampled surface's color and this pixel's own albedo
        resultRad += lerp(sampleAlbedo, curAlbedo, intensity);
    }
}
resultRad /= NUM_SAMP;
// please note that I'm just re-coding this from recollection, but the idea is here.
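For reference, the "AO generation stuff" comment above hides the usual SSAO occlusion test. Something along these lines is what produces occNorm (posTex, normTex and the distance falloff are assumptions on my part, not my exact code):

// Hedged sketch of a typical SSAO-style occlusion term.
// posTex / normTex are assumed view-space position and normal G-buffers.
float3 curPos    = tex2D(posTex, uv).xyz;
float3 curNormal = normalize(tex2D(normTex, uv).xyz);
float3 samplePos = tex2D(posTex, uv + offsetnoise).xyz;

float3 toSample  = samplePos - curPos;
float  dist      = length(toSample);
float3 dir       = toSample / max(dist, 0.0001);

// High when the sample sits in front of the surface (along its normal)
// and close by - i.e. it both occludes this pixel and can bounce light to it.
float occNorm = max(dot(curNormal, dir), 0.0) * (1.0 / (1.0 + dist));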

We can further extend this by going through the pass again, but this time using the current GI result as the intensity, to simulate multiple light bounces. Buuuuuuut I didn't bother to try; I think a single bounce will suffice for a game application.
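For anyone who does want to try it: the second pass would be the same loop, only the intensity would come from the first pass's GI output instead of the light accumulation buffer. Roughly, inside the loop (giTex being a hypothetical name for that output):

// Second bounce sketch: sample the GI result of the first pass and use its
// grayscale value as the new pre-bounce intensity.
float3 firstBounce = tex2D(giTex, uv + offsetnoise).rgb;
float  intensity   = dot(firstBounce, float3(0.333, 0.333, 0.333));
resultRad += lerp(sampleAlbedo, curAlbedo, intensity);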


After implementing this, I realized that it's possible to 'fit' this into the SSAO pass. That would save the extra blur passes needed to remove the graininess (the screenshot isn't of the combined SSAO and SSGI implementation yet; a screenshot of that implementation will follow... hopefully).
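The packing itself would be trivial: output the bounced color and the occlusion from the same loop into one render target, so a single blur cleans up both. Something like this at the end of the pass (resultOcc being a hypothetical accumulated AO value, not from my code):

// Hedged sketch: GI in RGB, AO in alpha, so both share the same blur pass.
float4 output;
output.rgb = resultRad;        // averaged bounced color from the loop above
output.a   = 1.0 - resultOcc;  // occlusion term accumulated in the same loop
return output;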


Comments

Anonymous said…
Nice, but the bottom of the sphere needs to receive some color too - in this case being brighter, since the floor is white.
