Accurate 2D SSAO: A new implementation?


Well, I hope it is... lol. SSAO, or Screen Space Ambient Occlusion, is a way to approximate the light and shadow of global illumination. Basically, it's like the shadowing created by indirect or bounced light. SSAO was first presented by Crytek a few years back with CryEngine 2 for the game Crysis (too much Cry, lol). I first implemented the pure 2D SSAO (depth comparison) by Arkano22, which is quite straightforward: it basically compares the depth of randomly offset samples. I also did the one from the GameRendering website, which uses randomized, offset normals and projects them back to image space. I find these two implementations very interesting.

The projected technique is, I would say, the correct computation of SSAO, since it compares occlusion using the projected normals. But the price of projecting and transforming back to texture space is just too heavy, especially since it is paid for every sample (8 or 16).

In terms of speed nothing beats pure 2D SSAO, of course, but it is only an estimate because the angle of the normals is not taken into account; in short, it does not hold up in extreme cases. This becomes obvious when the scene is rotated about an axis: the AO shrinks and expands.
Hence, I came up with a different approach to computing SSAO. The sucker-punch question is: why do I need to project the normals back to image space for every sample if I'm already plotting data in 2D space? Pure 2D gets that assumption right, and projecting normals is right because it is the proper estimation of occlusion.

My implementation in the screenshot above works like pure 2D SSAO but with the normals taken into account WITHOUT projecting to texture space. We know normals are directions. By simply extending a normal a few units, what we get is an expanded version of it, as if we were scaling the vertices outward. With that in mind, even though this is 3D space, the offset is really just two-dimensional information (as in deferred UV data). Then we divide it by the current depth so the offset scales properly, and PRESTO, we have the UV offset of the sample based on the extended normal. Now compare the original normal against the sampled normals and you get the same occlusion as the projected-normal technique. The unique part here is that I compute the offset from the normal prior to sampling, which means I am doing a projected-normal comparison in pure 2D space. (My hunch is that this is also possible with depth comparison instead of normals; I'll probably test this tomorrow.)
I don't know if this is a new implementation, but the cost is as if I were just doing a 16-sample filter or a simple depth of field, or less. It is more optimized than projected normals as in Mr. Iñigo's implementation: no matrix multiplication, no reflect function, no sign function, etc., and the result is theoretically the same. With a simple multiplication of the normal, used as the offset position prior to sampling, it achieves a similar result.
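
To make this concrete, here is a minimal GLSL sketch of just that offset step. Take it as an illustration rather than my exact shader: the buffer names (normalTex, depthTex), the 0-1 packed normals, and the linear depth are assumptions made for the sketch.

uniform sampler2D normalTex;   // view-space normals packed into 0..1
uniform sampler2D depthTex;    // linear depth
uniform vec2  screenSize;      // viewport size in pixels
uniform float occluSize;       // how far the normal is 'extended'

// Returns the center of the 2D sampling area for the pixel at uv.
vec2 sampleCenterUV(vec2 uv)
{
    vec3  n     = texture(normalTex, uv).xyz * 2.0 - 1.0;  // unpack the normal
    float depth = texture(depthTex,  uv).r;

    vec2 EN        = n.xy * occluSize;            // the extended normal
    vec2 offset_EN = EN / (depth * screenSize);   // scale by depth, convert to UV units
    return uv + offset_EN;                        // samp_UV, computed once before sampling
}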

ADD:

I looked into NVIDIA's sample of their own SSAO implementation, but the mathematics is way beyond me, so I didn't bother trying it out. Plus, considering how many mathematical instructions it has to go through, I bet it's heavier.

ADD: 20/07/09 Accurate 2D SSAO with 'not so good' noise texture

Comments

Anonymous said…
Hi, very nice idea, thx!
Can you please explain it a little bit more or post some code snippets?
Hi,

I hope you visit again and comment with a registered account so I can follow up on your question.

I plan to write my first paper (if time permits) about this technique.

Anyway, here's a little more explanation...

Firstly, THINK 2D.

Every piece of data must be in that space, even the 'expanded' NORMAL (we will call this EN).

Since the screen UV is the 2D space we plot into, offset_EN must be in the 0-1 range, and the distance to the eye must be accounted for by using depth.

EN = (normal.xy * occlu_size);

offset_EN = EN / (depth * screen_size.xy);

samp_UV = screenUV + offset_EN;

NOTE: samp_UV is computed prior to accumulating the occlusion in a loop.


Another way of getting offset_EN is to use the quoted 'reflect' technique, but extruding the position forward along its normal and then transforming it into screen space. That offset is your samp_UV. The difference is that we do this process only once, not for every sample in the loop.
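
As a rough sketch of that alternative (assuming a view-space position buffer and the camera projection matrix, which are not spelled out above):

uniform sampler2D positionTex; // view-space positions
uniform sampler2D normalTex;   // view-space normals packed into 0..1
uniform mat4  projMatrix;      // camera projection matrix
uniform float occluSize;

// Extrude the surface point along its normal and project it ONCE into screen space.
vec2 sampleCenterUV_projected(vec2 uv)
{
    vec3 P = texture(positionTex, uv).xyz;
    vec3 N = texture(normalTex,   uv).xyz * 2.0 - 1.0;

    vec4 clip = projMatrix * vec4(P + N * occluSize, 1.0);
    vec2 ndc  = clip.xy / clip.w;    // normalized device coordinates, -1..1
    return ndc * 0.5 + 0.5;          // back to 0..1 screen UV, i.e. samp_UV
}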

Once we have our samp_UV, following our first rule (think 2D), we do a 2D radial depth-aware filter in a loop. (It's better to use a 2D noise offset texture at this point to avoid an obvious sampling pattern.)

To get the amount of occlusion, we then compare the current pixel's normal against each sampled normal and average or accumulate the results (same as other standard SSAO techniques).
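
Putting the steps together, one possible shape of the whole pass looks like the sketch below. The 4x4 tiling noise texture, the 16-sample jittered spiral, the dot-product normal comparison, and the depth tolerance are all assumptions picked for the example, not the one true way to do it.

#version 330 core

uniform sampler2D normalTex;    // view-space normals packed into 0..1
uniform sampler2D depthTex;     // linear depth
uniform sampler2D noiseTex;     // small repeating noise texture, e.g. 4x4
uniform vec2  screenSize;       // viewport size in pixels
uniform float occluSize;        // how far the normal is 'extended'
uniform float sampleRadius;     // radius of the 2D sampling disc, in pixels
uniform float depthTolerance;   // max depth difference for the depth-aware test

in  vec2 screenUV;
out vec4 fragColor;

void main()
{
    vec3  n     = texture(normalTex, screenUV).xyz * 2.0 - 1.0;
    float depth = texture(depthTex,  screenUV).r;

    // 1. Extended normal -> UV offset -> center of the 2D sampling disc (samp_UV).
    vec2 EN        = n.xy * occluSize;
    vec2 offset_EN = EN / (depth * screenSize);
    vec2 samp_UV   = screenUV + offset_EN;

    // 2. Per-pixel random rotation from the noise texture to hide the sampling pattern.
    float jitter = texture(noiseTex, screenUV * screenSize / 4.0).x * 6.2831853;

    // 3. 2D radial, depth-aware gather around samp_UV.
    const int NUM_SAMPLES = 16;
    float occlusion = 0.0;
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        float angle  = 6.2831853 * float(i) / float(NUM_SAMPLES) + jitter;
        float radius = sampleRadius * float(i + 1) / float(NUM_SAMPLES);  // spiral outward
        vec2  uv     = samp_UV + vec2(cos(angle), sin(angle)) * (radius / screenSize);

        vec3  sn     = texture(normalTex, uv).xyz * 2.0 - 1.0;
        float sdepth = texture(depthTex,  uv).r;

        // depth-aware: only count samples whose depth is close to the current pixel's
        float valid = (abs(depth - sdepth) < depthTolerance) ? 1.0 : 0.0;

        // compare the current normal against the sampled normal; diverging normals occlude more
        occlusion += (1.0 - max(dot(n, sn), 0.0)) * valid;
    }

    float ao = 1.0 - occlusion / float(NUM_SAMPLES);
    fragColor = vec4(vec3(ao), 1.0);
}

A depth-aware blur would normally follow this pass to clean up the noise, which is where the smart blurring mentioned further down comes in.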
Jose said…
Hi! Good work, it looks better than the usual SSAO :)

I've developed another way of computing SSAO which still samples in 2D but uses normal and position info to compute correct occlusion. It is both fast and good looking (I think :P). You can check it out here:

http://www.gamedev.net/community/forums/topic.asp?whichpage=4&pagesize=25&topic_id=556187
Shalka said…
@ArKano
Think about it: it is a powerful technique, because you do not need a position buffer or a function to reconstruct position from depth. You could merge your first 2D-only technique with your newly created one.

@ArKano Thanks. I saw your screenshot and video. They look awesome. Though, 30 samples is a bit too spicy for my taste. :)


If you guys can give me any screenshots of this technique, that would be great.

Currently, I'm doing some miscellaneous stuff for my company's game engine. When I'm done, I'll be working on Inferred Rendering, which, based on its design, can be used for SSAO smart blurring and proper upscaling.
Urukai said…
Sorry to ask, but what does this mean?

.... we do a 2D radial depth-aware filter in a loop....
.....then compare the current pixel normal and the sampled normal and average them ....

Do you read the normal buffer, the depth buffer, or both?
How do you compare them?

Thank you!
@Urukai

Yes. That's how the process works. There are no 16 passes of projecting an extended normal in 3D space as in the common SSAO implementation.

We only project the normal once, to determine the radial sample center in 2D screen space.
Anonymous said…
I inclination not agree on it. I over polite post. Specially the title-deed attracted me to be familiar with the whole story.
I'm sorry, but I don't get what you're saying. Can you explain further which story you mean and what it is you don't agree with?

Also, can you register a name or even an alias so I can address you properly? Thanks.
