Well, I hope it is... lol. SSAO, or Screen Space Ambient Occlusion, is a way to approximate global illumination light and shadow. Basically, it's like the shadowing created by indirect lighting, or bounced light. SSAO was first presented by Crytek a few years back with their CryEngine 2 for the game Crysis (too much Cry lol). I first implemented pure 2D SSAO (depth compare) by Arkano22, which is quite straightforward: it basically compares the depth of the current pixel against randomly offset samples. I also did the one from the GameRendering website, which uses randomized, offset normals and projects them back to image space. I find these two implementations very interesting.
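To give a feel for it, here's a stripped-down sketch of the depth-compare idea in GLSL. The uniform names, the kernel, and the falloff constants are placeholders of mine, not Arkano22's actual code:

```glsl
#version 120
// Pure 2D SSAO sketch: compare the pixel's depth against nearby taps.
uniform sampler2D uDepthTex;  // linearized depth in the red channel (assumed)
uniform float uRadius;        // base sampling radius in UV units (assumed)
varying vec2 vUV;

void main() {
    // Fixed 8-tap kernel; a real version randomizes/rotates this per pixel.
    vec2 kernel[8];
    kernel[0] = vec2( 1.0,  0.0); kernel[1] = vec2(-1.0,  0.0);
    kernel[2] = vec2( 0.0,  1.0); kernel[3] = vec2( 0.0, -1.0);
    kernel[4] = vec2( 0.7,  0.7); kernel[5] = vec2(-0.7,  0.7);
    kernel[6] = vec2( 0.7, -0.7); kernel[7] = vec2(-0.7, -0.7);

    float depth = texture2D(uDepthTex, vUV).r;
    float ao = 0.0;
    for (int i = 0; i < 8; i++) {
        // Shrink the screen-space radius with distance so the AO
        // footprint stays roughly constant in world space.
        vec2 uv = vUV + kernel[i] * uRadius / depth;
        float diff = depth - texture2D(uDepthTex, uv).r;
        // A closer neighbor occludes; the second clamp fades out far
        // geometry so it doesn't produce halos.
        ao += clamp(diff * 20.0, 0.0, 1.0) * clamp(1.0 - diff * 2.0, 0.0, 1.0);
    }
    gl_FragColor = vec4(vec3(1.0 - ao / 8.0), 1.0);
}
```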
The projected technique is, I would say, the correct computation of SSAO, as it compares occlusion from the projected normals. But the price of projecting and transforming back to texture space is just too heavy, especially since this is done for every sample (8 or 16 of them).
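For comparison, here's roughly what the per-sample work of the projected technique looks like, as I understand it. This is my own sketch, not the actual GameRendering code, and all the names here are assumptions:

```glsl
#version 120
// Per-sample cost sketch of projected-normal SSAO.
uniform sampler2D uPositionTex;  // view-space positions (assumed G-buffer)
uniform sampler2D uNormalTex;    // view-space normals (assumed)
uniform mat4 uProj;              // projection matrix (assumed)
uniform float uRadius;           // world-space sampling radius (assumed)

float sampleOcclusion(vec2 uv, vec3 rand) {
    vec3 pos    = texture2D(uPositionTex, uv).xyz;
    vec3 normal = normalize(texture2D(uNormalTex, uv).xyz);

    // Orient a random direction into the normal's hemisphere.
    vec3 dir = reflect(rand, normal);
    dir *= sign(dot(dir, normal));
    vec3 samplePos = pos + dir * uRadius;

    // The heavy part, repeated 8 or 16 times per pixel: a full matrix
    // multiply plus a perspective divide to get back into texture space.
    vec4 clip = uProj * vec4(samplePos, 1.0);
    vec2 sampleUV = (clip.xy / clip.w) * 0.5 + 0.5;

    // Occluded if the stored surface lies in front of the sample point
    // (the sign convention depends on your view-space setup).
    float storedZ = texture2D(uPositionTex, sampleUV).z;
    return (storedZ >= samplePos.z) ? 1.0 : 0.0;
}
```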
In terms of speed nothing beats pure 2D SSAO, of course, but it is only an estimate because the angle of the normals is not taken into account... in short, it breaks down in extreme cases. This becomes obvious when the scene is rotated about an axis: the AO shrinks and expands.
Hence, I came up with a different approach to computing SSAO. This is the sucker-punch question: why do I need to project the normals back to image space for each sample if I'm already plotting data in 2D space? Pure 2D was on the right track, and I agree with that technique's assumption. Projecting normals is correct, as it is the proper estimation of occlusion.
My implementation in the screenshot above works as pure 2D SSAO but with the normals taken into account, WITHOUT projecting to texture space. We know normals are directions. By simply extending a normal a few units, what we get is an expanded version of it, as if we were scaling the vertices up. With that in mind, even though this is 3D space, that offset is really just two-dimensional information (like a deferred UV). Then divide it by the current depth so the offset scales properly with distance and PRESTO, we have the UV offset of the sample, derived from the extended normal. Now compare the original normal against the normal sampled at that offset and you get the same occlusion as the projected normals. The unique part here is that I compute the offsets from the normal prior to sampling, which means I am doing a projected-normal comparison in pure 2D space. (My hunch is that this is also possible with a depth comparison instead of normals; I'll probably test this tomorrow.)
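Here's a sketch of that idea in GLSL. The jitter and the exact occlusion term are simplified placeholders, so treat this as an illustration rather than the exact shader behind the screenshot:

```glsl
#version 120
// Normal-offset SSAO sketch: the extended normal's xy IS the UV offset.
uniform sampler2D uDepthTex;   // linearized depth (assumed)
uniform sampler2D uNormalTex;  // view-space normals (assumed)
uniform float uScale;          // how far to extend the normal (assumed)
varying vec2 vUV;

// Cheap hash-based jitter (placeholder; a noise texture works better).
vec3 jitter(vec2 uv, float i) {
    return vec3(
        fract(sin(dot(uv, vec2(12.9898, 78.233)) + i) * 43758.5453) - 0.5,
        fract(sin(dot(uv, vec2(39.3467, 11.135)) + i) * 24634.6345) - 0.5,
        fract(sin(dot(uv, vec2(73.1560, 52.235)) + i) * 56445.2345) - 0.5);
}

void main() {
    float depth = texture2D(uDepthTex, vUV).r;
    vec3 normal = normalize(texture2D(uNormalTex, vUV).xyz);
    float ao = 0.0;

    for (int i = 0; i < 16; i++) {
        // Perturb the normal a little so the 16 taps spread out.
        vec3 n = normalize(normal + jitter(vUV, float(i)) * 0.5);

        // The key step: extend the normal and divide by depth to get the
        // UV offset directly. No matrix multiply, no re-projection.
        vec2 uv = vUV + n.xy * uScale / depth;

        // Compare the original normal against the sampled one; the more
        // they disagree, the more occluded we call this pixel.
        vec3 sampleNormal = normalize(texture2D(uNormalTex, uv).xyz);
        ao += clamp(1.0 - dot(normal, sampleNormal), 0.0, 1.0);
    }
    gl_FragColor = vec4(vec3(1.0 - ao / 16.0), 1.0);
}
```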
I don't know if this is a new implementation, but the cost is as if I were just doing a 16-sample filter or a simple depth-of-field pass, or simpler. It is more optimized than projected normals like Mr. Iñigo's implementation: no matrix multiplication, no reflect function, no sign function, etc., and the result is theoretically the same. With a simple multiplication of the normal, used as the offset position prior to sampling, it achieves a similar result.
I looked into NVIDIA's sample of their own SSAO implementation, but the mathematics is way beyond me, so I didn't bother trying it out. Plus, considering how many mathematical instructions it has to go through, I bet it's heavier.