LO-SL BRDF Explained...

...sort of. As the screenshot shows, I'm still tinkering with BRDF. I turned off other effects, including shadows, so I have a better feel for what I'm doing. It's better to reduce our parameter control points; otherwise we'll end up alchemistically mixing one variable or attribute into another (which I often do), and the outcome is that we understand the result less and ultimately waste time. In my approach to BRDF, I use the same concept of breaking this function down for a true realtime game application. Before I expound on this, I must first explain what a BRDF is (as I understand it).

Bidirectional Reflectance Distribution Function, or BRDF, literally means two directions composing how reflected light is distributed over the surface of a material. These two directions are the light direction towards the surface, which is the 'IN' (hence they call this the 'incident' light), and the eye-to-surface or view direction, which is the 'OUT' (this they call the 'reflectance'). Each type of material surface reacts differently to light. This reaction is more of a distortion of the light, and because of this distortion we perceive textures and colors reflecting directly or indirectly from objects. Since this is a reflection of light, it means that unless the surface is perfectly flat, the reflection can be off by the time it hits our eyes. Of course, this perception depends on the characteristics of the light we use: lit by a red light, we see red blending over the surfaces. A very good example is shooting a billiard ball against the table's side: the direction of the force determines how the ball reacts to the side wall's texture and how it ultimately bounces after some of its energy is distorted or absorbed. The result also shows how far the ball will end up from the hole.
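For reference, the formal radiometric definition (which the rest of this post loosens considerably) is the ratio of outgoing radiance to incoming irradiance, written in terms of those same two directions:

f_r(Theta_i, Phi_i; Theta_r, Phi_r) = dL_r(Theta_r, Phi_r) / (L_i(Theta_i, Phi_i) * cos(Theta_i) * dOmega_i)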


This diagram is based on the Oren-Nayar reflection model. Here is a very informative link to a series of lectures regarding reflectance and other photometric phenomena.


Now here comes the technical part. In order for a BRDF to be used in graphics rendering, one would typically need the following: light direction, view direction, normals, and tangents. In addition to that, we need the Theta and Phi of both the Incident and Reflectance light, as we are dealing with 3D angles called solid angles.
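To make those four angles concrete, here is a minimal sketch (not from the post itself; the function name and frame construction are my own) of extracting the Theta/Phi pairs for the incident and reflectance directions in a local frame built from the normal and tangent:

```python
import numpy as np

def spherical_angles(v, n, t):
    """Return (theta, phi) of unit direction v in the local frame (t, b, n).
    Theta is the polar angle measured from the normal n; phi is the
    azimuth around n, measured from the tangent t."""
    b = np.cross(n, t)                                   # bitangent completes the frame
    x, y, z = np.dot(v, t), np.dot(v, b), np.dot(v, n)
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    return theta, phi

# Example: a surface frame plus a light and a view direction.
n = np.array([0.0, 0.0, 1.0])
t = np.array([1.0, 0.0, 0.0])
l = np.array([0.3, 0.2, 0.9]);  l /= np.linalg.norm(l)  # surface-to-light
e = np.array([-0.5, 0.1, 0.8]); e /= np.linalg.norm(e)  # surface-to-eye
theta_i, phi_i = spherical_angles(l, n, t)               # incident angles
theta_r, phi_r = spherical_angles(e, n, t)               # reflectance angles
```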

My implementation is composed of two ideas/theories for reducing the function's complexity.

The first part is 'LO', or Light Oriented: to reduce the required data, everything is oriented to the Phi Incident direction. With this, we do not need the tangents, and (considering we do not care about subsurface scattering) we just add the Phi Incident to the Phi Reflectance. By doing so, we remove one dimension from the BRDF's requirements. We then assume that the incident light at the Phi angle is perfectly aligned to the Oren-Nayar model. With the sum of the Phi Incident and Phi Reflectance angles, we still get near-perfect similarity in light distortion/absorption compared to the complete function.
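A sketch of how I read that reduction (my own paraphrase in code; whether the combined azimuth is a sum or a signed difference depends on how the frame is oriented, which the post doesn't pin down):

```python
import numpy as np

def lo_reduce(theta_i, phi_i, theta_r, phi_r):
    """Orient the frame to the incident azimuth, so phi_i becomes 0 and the
    two azimuths collapse into one combined angle. The 4D domain
    (theta_i, phi_i, theta_r, phi_r) drops to 3D."""
    phi = (phi_r - phi_i) % (2.0 * np.pi)   # single azimuth relative to the light
    return theta_i, theta_r, phi
```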

The second part is 'SL', or Spike and Lobe theory. The standard illustration of this function's 'light lobes' is always against the light direction. In my theory, I use the function's lobe and spike not only against the light but also towards it. This lobe represents how much energy/light was absorbed and bounced around by the surface before it reflects.
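To illustrate the shape only (a toy stand-in, not the actual curve; the cosine-power form and every parameter name here are placeholders of mine):

```python
def spike_and_lobe(cos_mirror, cos_back,
                   spike_power=64.0, spike_gain=1.0,
                   lobe_power=4.0, lobe_gain=0.3):
    """Toy response: a sharp spike against the light (peaking around the
    mirror direction, cos_mirror = dot(V, reflect(-L, N))) plus a broader
    lobe back towards the light (cos_back = dot(V, L)), standing in for
    energy the surface bounces back the way it came."""
    spike = spike_gain * max(cos_mirror, 0.0) ** spike_power
    lobe = lobe_gain * max(cos_back, 0.0) ** lobe_power
    return spike + lobe
```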

This data is then stored somewhere; it can be an N.L/N.H lookup table like STALKER's in GPU Gems, or a Lafortune lighting model (which uses a matrix to mimic the distortion of light). In my implementation I used the flattening of Phi Incident/Phi Reflectance, which I hope to explain in later posts.
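For a flavour of the lookup-table route (this bakes a placeholder Blinn-style term, not my material response; the table size and the baked function are arbitrary):

```python
import numpy as np

SIZE = 64
# Bake a 2D table indexed by (N.L, N.H), STALKER-style: each texel stores
# the material's response for that pair of dot products.
nl = np.linspace(0.0, 1.0, SIZE)
nh = np.linspace(0.0, 1.0, SIZE)
table = nl[:, None] + nh[None, :] ** 32          # diffuse + specular placeholder

def sample(table, u, v):
    """Nearest-texel fetch; on the GPU the sampler's bilinear filter does this."""
    i = int(round(u * (table.shape[0] - 1)))
    j = int(round(v * (table.shape[1] - 1)))
    return table[i, j]

response = sample(table, 0.7, 0.9)               # N.L = 0.7, N.H = 0.9
```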

So there you have it, the Light Oriented - Spike and Lobe BRDF implementation... (batteries sold separately). Kinda neat-o-burrito, aye?

Btw, regarding the screenshot: I just guessed the BRDF parameters, hence the '?' on the labels. hehehe

Comments

Awesome!
Judging from the screenshots with the three materials, it looks slick.

Looking forward to that "flattening" of Phi Incident/Phi Reflectance explanation!

I have two questions:
1.- Would the indexing be the same as STALKER's? The lookup texture is indexed by U = Phi_IN, V = Phi_R?
How do you make those textures? Something like RGB = color response, A = specular response?

2.- Are you using this material technique in a light pre-pass renderer? The question arises because of how you accumulate all the lights, since we are talking about angles now and not color. I can almost understand the STALKER technique (I have read that article several times now, and some bits are still not clear to me), but they designed a fully deferred renderer, so during each light pass they get the color channel and add the light right away.
Hi Alejandro,

I'll probably make a new post regarding the lookup, as it is quite related to the 'flattening' of the Phis. But in a nutshell, it will require us to have 3 lookup tables (+1 for specularity). Two of the lookup tables can be combined, as their samples will be multiplied eventually. So with that, we have at least 2 lookup tables (+1 for specularity). One material will result in one 2D texture and one 1D texture. Anyway, I hope to explain this more in my next post.

You can store 3 channels for RGB if you wish to have an individual light response for each color. Though I find this overkill already, it is possible.

Yes, I am using Light Prepass rendering. Of course, the limitation here (if you consider it to be one) is that we are now required to MRT the ZNormal pass with a material index pass stored in a small-precision buffer (this one I haven't implemented). With this, you can now process the light response in a per-material-index manner, albeit indexed into a 3D texture of your material library's lookup tables.

So the answer to 'when' is: we do this in the Light Accumulation passes, distorting each light based on the material index buffer.

Of course, this would mean that once we choose a BRDF to be implemented, ALL of the objects will use the BRDF, as this is the nature of the Light Prepass technique. But fret not, my friend, as there is a way to LOD these light responses (hopefully in the next post) *wink* *wink*.

In STALKER, it's really simple: you just have a 3D texture (u = N.L, v = N.H, w = material id) and then you sample it with the same data, N.L and N.H. I haven't tried this, but it would be interesting to see what happens when we do linear filtering on a 3D texture.
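Here's a quick CPU-side sketch of what that 3D lookup (and the linear-filtering question) would amount to; the material library contents are made-up placeholders:

```python
import numpy as np

SIZE, N_MATERIALS = 32, 4
powers = [1.0, 8.0, 32.0, 128.0]                 # one placeholder gloss per material id
nl = np.linspace(0.0, 1.0, SIZE)
nh = np.linspace(0.0, 1.0, SIZE)
# Stack one (N.L, N.H) table per material into a volume: u = N.L, v = N.H, w = id.
volume = np.stack([nl[:, None] + nh[None, :] ** p for p in powers], axis=-1)

def sample_material(volume, u, v, mat_id):
    """Nearest fetch in U/V, linear blend across W. With hardware linear
    filtering, a fractional id like 2.7 would mix 30% of slice 2 with
    70% of slice 3 -- i.e. blend two materials' responses."""
    i = int(round(u * (SIZE - 1)))
    j = int(round(v * (SIZE - 1)))
    w0 = int(np.floor(mat_id))
    w1 = min(w0 + 1, N_MATERIALS - 1)
    f = mat_id - w0
    return (1.0 - f) * volume[i, j, w0] + f * volume[i, j, w1]

lit = sample_material(volume, 0.7, 0.9, 2.7)     # N.L, N.H, fractional material id
```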
This is neat stuff, let me tell you. Cool, crazy, flexible, otherwise-impossible materials could be done with this. Hopefully that great next post (:p) will clear everything up!

One brief question: the BRDF and material indexing is done only during the light accumulation pass, and the final geometry pass just samples the light buffer like in a standard light pre-pass renderer?

Linear filtering on a 3D texture perhaps won't do anything; I mean, the U and V may get interpolated, but your object would fall in a specific W slice. Or do you mean something like "this material ID is 3.7f" (70% ID 4, 30% ID 3)? That could be crazy!

I'm still trying to picture it in my mind... yes I know, that's my fault :p. So I'll just go into wait-for-next-post mode!
Aaron Lin said…
The last one looks like marble!
@Aaron - Alas, you've finally found my blog. (Now where's the Hide Blog button?) hehehe

@Alejandro - Yes, this is really exciting stuff! After seeing the results, I view light rendering in a very different way now.

The material indexing is done in a different pass, prior to the Light Accumulation pass. Remember, during the Light Accumulation pass there is no way of knowing which material the light is responding to, so we need to ID the material before this pass. The best way I could think of is by MRT, with the ZNormal pass outputting into a separate buffer.
