tag:blogger.com,1999:blog-68299044896147107402024-03-06T11:20:45.606+08:00My Pseudocode LifeJohn David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.comBlogger30125tag:blogger.com,1999:blog-6829904489614710740.post-13727508444132420662017-09-28T03:45:00.000+08:002017-09-28T03:45:13.832+08:00Project H.O.R.D.E.<div dir="ltr" style="text-align: left;" trbidi="on">
Just announcing my students' game, which I spearheaded: <b>Project H.O.R.D.E.</b><br />
<br />
It's a fast-paced online multiplayer isometric shooter. Best of all, it's free with no ads!<br />
<br />
Please check it out.<br />
<br />
It's out on virtually all platforms...<br />
iOS - <a href="https://itunes.apple.com/sg/app/project-h-o-r-d-e/id1279620506?mt=8">https://itunes.apple.com/sg/app/project-h-o-r-d-e/id1279620506?mt=8</a><br />
Android - <a href="https://play.google.com/store/apps/details?id=com.MagesInstituteOfExcellence.ProjectHorde">https://play.google.com/store/apps/details?id=com.MagesInstituteOfExcellence.ProjectHorde</a><br />
Win/MacOS - <a href="http://tinyurl.com/projecthorde">http://tinyurl.com/projecthorde</a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe width="320" height="266" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/OM3eY5hfEz4/0.jpg" src="https://www.youtube.com/embed/OM3eY5hfEz4?feature=player_embedded" frameborder="0" allowfullscreen></iframe></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgek84mKYex5G6WW-LzINzwexHUEWR79N1TqEhapSVw8iTaEw4nbrUf4jadJTcF21m_6TC_tIgaJyi9frIPB31THBBPcX5xh31nyofACiKt_JwkgQsXrqAhMbKp9KKPgA7z7mYu0Rcyotc/s1600/Project+HORDE+7_8_2017+1_07_20+PM+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgek84mKYex5G6WW-LzINzwexHUEWR79N1TqEhapSVw8iTaEw4nbrUf4jadJTcF21m_6TC_tIgaJyi9frIPB31THBBPcX5xh31nyofACiKt_JwkgQsXrqAhMbKp9KKPgA7z7mYu0Rcyotc/s400/Project+HORDE+7_8_2017+1_07_20+PM+%25281%2529.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1cRs5ZLJmhntTxdFdadbCt4iHLBAQ1vrvtnAMT4h-k7u5CLTiqMfLLt-3XKSaut0HMgeLk0WF75U6WTiJoXjWxeZStMPmCU0oMmpMkhHaNCD_iYiQtF8mGalPXEe0_96walB70LhY5HY/s1600/Project+HORDE+7_8_2017+12_36_48+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1cRs5ZLJmhntTxdFdadbCt4iHLBAQ1vrvtnAMT4h-k7u5CLTiqMfLLt-3XKSaut0HMgeLk0WF75U6WTiJoXjWxeZStMPmCU0oMmpMkhHaNCD_iYiQtF8mGalPXEe0_96walB70LhY5HY/s400/Project+HORDE+7_8_2017+12_36_48+PM.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4La3brCFYabjP8Ex61zXnxBFh6rHZ1-AV16LPPdP_26DYOhGducbZYMsqRI5NTIeLiPYc9uIhCCwSiPgGZ_Vx1dFuWQdc1WRaOpr7yEeR8mb8_i2H1ael1zssa7JHwtabdERrXHdOWM4/s1600/Project+HORDE+7_8_2017+12_37_10+PM+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4La3brCFYabjP8Ex61zXnxBFh6rHZ1-AV16LPPdP_26DYOhGducbZYMsqRI5NTIeLiPYc9uIhCCwSiPgGZ_Vx1dFuWQdc1WRaOpr7yEeR8mb8_i2H1ael1zssa7JHwtabdERrXHdOWM4/s400/Project+HORDE+7_8_2017+12_37_10+PM+%25281%2529.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIkh1nntTmE44XOkH-KQ6Fh0k5LxcBraD8-Undtv4eKKhJV6rHw5Z6MutGko8L-x-7ex8ucofSL9RESntOxPROTuYGeoZmGLNEGiUxg0KopT-97BCyvvJV6Yx-FjlmTHFf_0j4Z0kqtpI/s1600/Project+HORDE+7_8_2017+12_37_16+PM+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIkh1nntTmE44XOkH-KQ6Fh0k5LxcBraD8-Undtv4eKKhJV6rHw5Z6MutGko8L-x-7ex8ucofSL9RESntOxPROTuYGeoZmGLNEGiUxg0KopT-97BCyvvJV6Yx-FjlmTHFf_0j4Z0kqtpI/s400/Project+HORDE+7_8_2017+12_37_16+PM+%25281%2529.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJx0qFbw2yP5WF-t7LjyeAFEalje2UTOvgqTwTDg_RtEcIVGZAQlVlIPI2zHhwY4_SxY5fsmjshWKXTe6uPBw4073c_rvkzr06pEkEbIXfk3Wb8x_WOSjw5K3Dgu-5zyzgmUI6Gup6Zq0/s1600/Project+HORDE+7_8_2017+12_39_23+PM+%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJx0qFbw2yP5WF-t7LjyeAFEalje2UTOvgqTwTDg_RtEcIVGZAQlVlIPI2zHhwY4_SxY5fsmjshWKXTe6uPBw4073c_rvkzr06pEkEbIXfk3Wb8x_WOSjw5K3Dgu-5zyzgmUI6Gup6Zq0/s400/Project+HORDE+7_8_2017+12_39_23+PM+%25281%2529.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3K-xLmYp2p29UEfMUouo37QuNUY2no_6CXuxQHd9ZPEl7zMz48npCw-ZM2cM4kFs-2Y5t92xVIg3UPBogins5ZpW8p_wtu9w5QhI0yJzMfQylSCGteqhgNxVqR_nn6OCoC9s9mIozlsk/s1600/Project+HORDE+7_8_2017+12_47_03+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="223" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3K-xLmYp2p29UEfMUouo37QuNUY2no_6CXuxQHd9ZPEl7zMz48npCw-ZM2cM4kFs-2Y5t92xVIg3UPBogins5ZpW8p_wtu9w5QhI0yJzMfQylSCGteqhgNxVqR_nn6OCoC9s9mIozlsk/s400/Project+HORDE+7_8_2017+12_47_03+PM.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcs_OnWcBwyt7jRKDwGQtycEPPicMvfjk3G0XlOh9y83NfabDuuZw4Wxb_EaWrfB2zkwscV_nUNR4qU95MF8ZiNDmvisxqX784SgXXzTqkP38VMh634I62t7fprmR3orKycWPfqr3DN1s/s1600/Project+HORDE+7_8_2017+12_47_59+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcs_OnWcBwyt7jRKDwGQtycEPPicMvfjk3G0XlOh9y83NfabDuuZw4Wxb_EaWrfB2zkwscV_nUNR4qU95MF8ZiNDmvisxqX784SgXXzTqkP38VMh634I62t7fprmR3orKycWPfqr3DN1s/s400/Project+HORDE+7_8_2017+12_47_59+PM.png" width="400" /></a></div>
<br />
<br />
<br /></div>
John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com1Singapore1.352083 103.819836000000010.8441055 103.174389 1.8600605 104.46528300000001tag:blogger.com,1999:blog-6829904489614710740.post-51606661670999762014-12-12T23:58:00.000+08:002014-12-13T00:32:15.478+08:00Tech Demo @ Google Play<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfAreU5GSsLXv2l0v8Qbt7o8BzxTevKsY1mGDpgCCf1wY46UXJ-IkmSzY2KRrq9kQGkEFth885ZmY5g89Pi-qdfRIUqkC6M6fragVNSKg8HoJnhzevNPQNwT2PQjLrXITxXWAFL9bayZ4/s1600/techdemo_googleplay.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfAreU5GSsLXv2l0v8Qbt7o8BzxTevKsY1mGDpgCCf1wY46UXJ-IkmSzY2KRrq9kQGkEFth885ZmY5g89Pi-qdfRIUqkC6M6fragVNSKg8HoJnhzevNPQNwT2PQjLrXITxXWAFL9bayZ4/s1600/techdemo_googleplay.jpg" height="320" width="500" /></a></div>
<br />
Our Technical Demo is now available for download via Google Play: <a href="https://play.google.com/store/apps/details?id=com.qnsoftware.qndemo&hl=en">QN Tech Demo</a></div>
John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0Singapore1.352083 103.819836000000010.8441055 103.174389 1.8600605 104.46528300000001tag:blogger.com,1999:blog-6829904489614710740.post-92118839019380912642014-11-25T00:05:00.001+08:002014-11-25T00:05:36.911+08:00Second Version of Our NVIDIA Tegra K1 Techdemo<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='500' height='400' src='https://www.youtube.com/embed/zfyTQxU2oE4?feature=player_embedded' frameborder='0'></iframe></div>
<br />
This is our second showcase of our engine running on the NVIDIA Tegra K1 (Xiaomi MiPad).<br />
Here are the new features we have added:<br />
<br />
<ul style="text-align: left;">
<li>Day-night transitions</li>
<li>Actor (partial) pathfinding/avoidance</li>
<li>Particles</li>
<li>UI controls</li>
<li>Event actors (triggers)</li>
</ul>
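The day-night transition in that list can be sketched as a blend between night and day sky colors driven by a 24-hour clock. This is a hypothetical illustration, not the engine's actual code; the colors and function names are made up:

```python
import math

def lerp_color(a, b, t):
    """Component-wise linear interpolation between two RGB tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def sky_color(hour, day=(0.45, 0.65, 1.0), night=(0.02, 0.03, 0.10)):
    """Blend night and day sky colors for an hour in [0, 24).
    The daylight factor follows the sun: 1 at noon, 0 at midnight."""
    sun = math.cos((hour - 12.0) / 24.0 * 2.0 * math.pi)  # 1 at noon, -1 at midnight
    t = max(0.0, min(1.0, sun * 0.5 + 0.5))
    return lerp_color(night, day, t)
```

A real transition would drive the directional light's angle, color and fog from the same clock.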
</div>
John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com2tag:blogger.com,1999:blog-6829904489614710740.post-38680816064439169152014-10-20T17:23:00.000+08:002014-10-22T19:39:32.712+08:00Our Engine Running on NVIDIA Tegra K1<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='500' height='400' src='https://www.youtube.com/embed/cROkyB9Bi9A?feature=player_embedded' frameborder='0'></iframe></div>
<br />
We've hit a milestone! After a week of porting, we managed to run our proprietary engine on Android, more specifically on the NVIDIA Tegra K1. This video is our first attempt at exploring the capabilities of NVIDIA's upcoming flagship chip.<br />
<br />
Currently, we are only utilizing a single core, but we are already getting 30+ to 40+ frames per second with desktop graphics quality (fully deferred, meaning all dynamic lights/shadows, hardware skinning, etc.).<br />
<br />
This tech demo features a fully deferred rendering pipeline running on OpenGL 4.4 with the following graphical features:<br />
<br />
<ul style="text-align: left;">
<li>Dynamic lighting (point, spot, directional)</li>
<li>Dynamic shadows (dual paraboloid shadow mapping, cascaded shadow mapping)</li>
<li>Linear-space HDR lighting with bloom and eye adaptation</li>
<li>Hemispherical ambient lighting with Scalable Ambient Occlusion (SAO; not shown in the video)</li>
<li>Distance and height fog</li>
<li>Hardware-skinned characters using transform feedback</li>
<li>Geometry instancing, octagon-based clip-mapped terrain, etc.</li>
</ul>
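Of the features above, hemispherical ambient lighting has a particularly compact core: blend a ground color and a sky color by how much the surface normal points up. A minimal sketch under assumed colors and a +Y-up convention (the engine's actual implementation isn't shown in the post):

```python
def hemisphere_ambient(normal, sky=(0.5, 0.6, 0.8), ground=(0.2, 0.15, 0.1)):
    """Hemispherical ambient term for a unit normal (x, y, z), +y up.
    A normal facing the sky gets the sky color, one facing the
    ground gets the ground color, with a smooth blend in between."""
    t = normal[1] * 0.5 + 0.5  # remap N.y from [-1, 1] to [0, 1]
    return tuple(g + (s - g) * t for s, g in zip(sky, ground))
```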
<br />
<br />
What do you guys think?</div>
John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-22798359201655358032013-11-04T02:53:00.002+08:002013-11-04T02:54:50.982+08:00Back into R&D Mode YEY!<div dir="ltr" style="text-align: left;" trbidi="on">
So, after slightly over two years of development on our game Fatehunters, our team can finally go back to R&D mode.<br />
<br />
I hope to be active in updating my blog again. I got the GO signal from my Technical Director to resume my research on LOSL-BRDF. I am now working on LOSL-EX-BRDF, short for the extension of my 'semi-delusional' material lighting model. I'm quite happy that our game was built heavily on the LOSL lighting model, which I researched two years ago. Though I know this isometric-perspective game is not the best showcase for the lighting model, having my research implemented in an actual commercial game is both thrilling and rewarding.<br />
<br />
I hope to grab some screenshots or get a video of the current game build and post them here.<br />
<br />
Until then, expect some updates from me soon (I promise this time :D ).</div>
John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-62084545363383302382012-03-08T23:13:00.001+08:002012-03-09T00:07:21.766+08:00LO-SLD BRDF (LO-SL BRDF on blazed!)<div dir="ltr" style="text-align: left;" trbidi="on">So, I got this crazy idea! Yes, again. They've always been crazy. Ha!<br />
<br />
First some updates. <br />
<br />
Currently, my company <a href="http://www.zealotdigital.com/">Zealot Digital</a> here in Singapore is producing a game that uses my <a href="http://my-pseudocode-life.blogspot.com/2010/07/lo-sp-brdf-explained.html">LO-SL BRDF</a> technique, or Light Oriented-Spike and Lobe BRDF. Built on our proprietary MMO game engine, ZD Engine 2.0, the game's development seems to be progressing well. For its isometric-style design, LO-SL BRDF is good enough for its light-response rendering needs. So far, it runs well on the targeted specs with decent frame rates.<br />
<br />
However, there's a real limitation to LO-SL BRDF: it only works on modulations, not real reflections. This means it cannot really simulate 'real' reflectance, only the factor distribution.<br />
<br />
And then it struck me!<br />
<br />
Enter LO-SL<strong>D</strong> BRDF, which stands for Light Oriented-Spike and Lobe 'Differential' BRDF. It's an extension of my previous implementation. It works on the same concept: to get the light distribution information, we only need three things: light direction, normal, and (view-space) view direction. The previous technique then derived the result from a preprocessed two-channel lookup table, one channel for incidence and one for reflectance. Here is where the new technique differs. Instead of deriving a modulation (a 0-1.0 factor), we encode two directions in a four-channel look-up table. These two directions represent the two light distortions we have in the BRDF. By compressing each direction, its three components become two; hence we can store the two directions in four channels. Because of the look-up table, the plotted values cover the differentials of the material. If you understand the 'SL' part of my technique, you know that this is possible.<br />
With this extension, we can now use the technique to simulate 'real' reflectance. We will see more believable light reactions/distortions, similar to a physically based, microfacet-theory response. In a nutshell, if you want to render environment reflections, radiosity, and/or caustics, LO-SLD BRDF carries the information needed for these to happen. It is even compatible with image-based lighting and similar techniques. And it's still lightweight enough for current-gen games!<br />
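The post doesn't say which compression turns each direction's three components into two; octahedral mapping is one well-known scheme that does exactly that and decodes cheaply in a shader, so here is a sketch of that assumption in Python:

```python
def _sign(x):
    return 1.0 if x >= 0.0 else -1.0

def oct_encode(x, y, z):
    """Pack a unit vector into two components in [-1, 1] by projecting
    onto the octahedron |x|+|y|+|z| = 1 and folding the lower half."""
    s = abs(x) + abs(y) + abs(z)
    u, v = x / s, y / s
    if z < 0.0:  # fold the bottom hemisphere over the diagonals
        u, v = (1.0 - abs(v)) * _sign(u), (1.0 - abs(u)) * _sign(v)
    return u, v

def oct_decode(u, v):
    """Invert oct_encode back to a unit vector."""
    z = 1.0 - abs(u) - abs(v)
    if z < 0.0:  # undo the fold
        u, v = (1.0 - abs(v)) * _sign(u), (1.0 - abs(u)) * _sign(v)
    n = (u * u + v * v + z * z) ** 0.5
    return u / n, v / n, z / n
```

Two encoded directions fill exactly four texture channels, matching the four-channel table described above.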
<br />
For now, I haven't implemented this in code, but I know it will work! (Even if it may sound crazy, ha!) Hopefully, I'll have enough time to prototype this. Until then, cheers!</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com1tag:blogger.com,1999:blog-6829904489614710740.post-78708907169309241312011-05-05T15:47:00.002+08:002012-03-09T00:08:37.133+08:00LO-SL BRDF Showcase<div dir="ltr" style="text-align: left;" trbidi="on">It's been a while since I last posted. I have been busy with some programming stuff at work, most of which involved integrating the prototypes my teammates have done for our engine.<br />
<br />
I'm quite happy that things are slowly coming together for our game engine: tools, FX, lighting, etc. These are the times I am reminded why I chose this career. (*sniff*)<br />
<br />
(After much drama) I want to showcase some screenshots of LOSL-BRDF (check my previous posts).<br />
<br />
Please note the following specifications used in the screenshots:<br />
- No environment mapping was used.<br />
- No fancy global illumination and/or ambient occlusion.<br />
- No image-based lighting.<br />
- Ambient color is black.<br />
- Light color is white.<br />
- Only one directional light (the red line shows the current light direction).<br />
- No shadows.<br />
- No diffuse texture; only a flat color for albedo.<br />
- Runs in real time, fast enough on DirectX 9 cards for use in games.<br />
- Runs in a pre-light-pass rendering pipeline with specular color (not noticeable here since the light color is white).<br />
- LOSL-BRDF is rendered in the light accumulation pass. This means additional lights placed in the scene will react according to the material assigned to each surface.<br />
- Assets borrowed from UDK (model and normal map).<br />
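The last two points, a pre-light-pass pipeline where LOSL-BRDF runs in the light accumulation pass, can be roughed out as follows. This is an illustrative CPU-side stand-in: the real engine samples GPU look-up textures, and the table's parameterization here (N.L and N.V) is an assumption:

```python
def accumulate_lights(normal, view, lights, material_luts, material_id):
    """Sum per-light contributions, each modulated by the surface
    material's baked BRDF table. `material_luts[material_id]` is a
    2D list; rows/columns index quantized N.L and N.V."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    table = material_luts[material_id]
    rows, cols = len(table), len(table[0])

    def index(cosine, size):  # map a cosine in [-1, 1] to a table index
        return min(size - 1, max(0, int((cosine * 0.5 + 0.5) * (size - 1))))

    nv = index(dot(normal, view), cols)
    total = 0.0
    for light_dir, intensity in lights:
        nl = index(dot(normal, light_dir), rows)
        total += intensity * table[nl][nv]  # one "texture tap" per light
    return total
```

Because the modulation is looked up per light, every light added to the scene automatically reacts to the surface's assigned material.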
<br />
Here are the screenshots (click to enlarge)...<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvquOJqHdpC4hHU8_zIM-J4bUpUcqA1Qf6fPWoZQgdmKiB-RJYuEdukbRXMjIlD3c8LXM3NxBn6GU_bMXJvx9RMEpSABYVpwnarpWOaQrQAqlwOsojhR44QlaVTko4SmHm2MJM_s1CJ7c/s1600/LOSL_BRDF_05_05_11_w_txt_01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvquOJqHdpC4hHU8_zIM-J4bUpUcqA1Qf6fPWoZQgdmKiB-RJYuEdukbRXMjIlD3c8LXM3NxBn6GU_bMXJvx9RMEpSABYVpwnarpWOaQrQAqlwOsojhR44QlaVTko4SmHm2MJM_s1CJ7c/s320/LOSL_BRDF_05_05_11_w_txt_01.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieJcpVT1jWjIuZAGgrtPy0ar78wUK4NmWbyoPgF6kEsXlUgveW0mC3jpEvVuOK1yopXVnuwhoDyN8OEbKdCETava7CauJgMhI4j6MtBKZCME3LRxi_uq44IoSg_yQSEmLAJ9iLck24Jxk/s1600/LOSL_BRDF_05_05_11_w_txt_02_chrome.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieJcpVT1jWjIuZAGgrtPy0ar78wUK4NmWbyoPgF6kEsXlUgveW0mC3jpEvVuOK1yopXVnuwhoDyN8OEbKdCETava7CauJgMhI4j6MtBKZCME3LRxi_uq44IoSg_yQSEmLAJ9iLck24Jxk/s320/LOSL_BRDF_05_05_11_w_txt_02_chrome.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVyj1irRVL4JEXrxSjL_AsxTxmHSXZoSFL27YyYGKndeMv00v0ieORux2sGqef1zR8hvo7-e2KgkVD8wg6QWqi8_wYNZo_GOtbvVyn9Jtms_RmWqzkYJSYVgyB24_QvlTcszVGCivualg/s1600/LOSL_BRDF_05_05_11_w_txt_03_brushedmetal.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVyj1irRVL4JEXrxSjL_AsxTxmHSXZoSFL27YyYGKndeMv00v0ieORux2sGqef1zR8hvo7-e2KgkVD8wg6QWqi8_wYNZo_GOtbvVyn9Jtms_RmWqzkYJSYVgyB24_QvlTcszVGCivualg/s320/LOSL_BRDF_05_05_11_w_txt_03_brushedmetal.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGHctDrQso7CmCzkUIHKwxRZVURng6L8_Hkaqtqz0bYJUWlM4nInb8jQLdN2Og8e7xIak7OzpOjlkbPYwjsTod5jv4BB3Ax_qE9aO7JyW6BZk761fxWllEGPoRdwUt4Lt43Gd-ZYI7qA8/s1600/LOSL_BRDF_05_05_11_w_txt_04_rough.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGHctDrQso7CmCzkUIHKwxRZVURng6L8_Hkaqtqz0bYJUWlM4nInb8jQLdN2Og8e7xIak7OzpOjlkbPYwjsTod5jv4BB3Ax_qE9aO7JyW6BZk761fxWllEGPoRdwUt4Lt43Gd-ZYI7qA8/s320/LOSL_BRDF_05_05_11_w_txt_04_rough.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl8P9bBhm7x3IWNlMm0MD3rzkXX68na_Q4VdOZLatBkM0JjQ4OHQb90zen3aaSR_Ifq3Zk6Dv4h6UzgVU98vUjZsLDQrjr91jSQB24xGnnVVxg5pC1TK-SZ-zw7A4fKoimEu6ypsPoD68/s1600/LOSL_BRDF_05_05_11_w_txt_05_glass.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl8P9bBhm7x3IWNlMm0MD3rzkXX68na_Q4VdOZLatBkM0JjQ4OHQb90zen3aaSR_Ifq3Zk6Dv4h6UzgVU98vUjZsLDQrjr91jSQB24xGnnVVxg5pC1TK-SZ-zw7A4fKoimEu6ypsPoD68/s320/LOSL_BRDF_05_05_11_w_txt_05_glass.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgE7DFsePWkbBT6uT-X9_xNY2bLfH6qm0sMtUfgMe1ypO6RwXQ6EzKXzwQxU49pAXezUjSyEI1aie-1QacxiCKOauJCv0lDxT86jsUGDW55sUNdQ4iWy1usLLTS-6K7McPi-GUXZtXwQZ4/s1600/LOSL_BRDF_05_05_11_w_txt_06_plastic.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgE7DFsePWkbBT6uT-X9_xNY2bLfH6qm0sMtUfgMe1ypO6RwXQ6EzKXzwQxU49pAXezUjSyEI1aie-1QacxiCKOauJCv0lDxT86jsUGDW55sUNdQ4iWy1usLLTS-6K7McPi-GUXZtXwQZ4/s320/LOSL_BRDF_05_05_11_w_txt_06_plastic.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSpg5U8PjK-Ng2ZPyLCTtSJmTwl5lKHyoH3Z0C_D7UidRPbN3AAXpjilsUUFfhalGajz6cuiLiBpUmdaGtgJT_v7p7hYecp0oTV_jcDE8bdz3Wo7TIEeZ-t2Q-r-XHajinsa7iThaNaNU/s1600/LOSL_BRDF_05_05_11_w_txt_08_jade.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSpg5U8PjK-Ng2ZPyLCTtSJmTwl5lKHyoH3Z0C_D7UidRPbN3AAXpjilsUUFfhalGajz6cuiLiBpUmdaGtgJT_v7p7hYecp0oTV_jcDE8bdz3Wo7TIEeZ-t2Q-r-XHajinsa7iThaNaNU/s320/LOSL_BRDF_05_05_11_w_txt_08_jade.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvxX1pDT3PihCJv2LxVLx1J1EzK0RrqiHX7gHhI_drL8ieNX2TfPzwAanYUe3NNOMOmexeqoMxyiQxuC4lwpyNINBG-62Clz0ULzjoh4y6h87O1EuG-ppJQGtn3y7Bmvxnt4xfPiO6ikM/s1600/LOSL_BRDF_05_05_11_w_txt_07_ruby.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvxX1pDT3PihCJv2LxVLx1J1EzK0RrqiHX7gHhI_drL8ieNX2TfPzwAanYUe3NNOMOmexeqoMxyiQxuC4lwpyNINBG-62Clz0ULzjoh4y6h87O1EuG-ppJQGtn3y7Bmvxnt4xfPiO6ikM/s320/LOSL_BRDF_05_05_11_w_txt_07_ruby.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-wERphyWe0KrJObJIz4RDwpewYZbPufl5gJcWeh0okfKuKWuhoBjAbKW_tvN9jjuk9AQL0lSNlVlHhvBBu3AdxNN_DqUu8JvhUCyIWJu7BO7kcQ2Gj4UyQQ2TiML9a6f_9nBUfBfu99w/s1600/LOSL_BRDF_05_05_11_w_txt_09_blinnphong.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="245" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-wERphyWe0KrJObJIz4RDwpewYZbPufl5gJcWeh0okfKuKWuhoBjAbKW_tvN9jjuk9AQL0lSNlVlHhvBBu3AdxNN_DqUu8JvhUCyIWJu7BO7kcQ2Gj4UyQQ2TiML9a6f_9nBUfBfu99w/s320/LOSL_BRDF_05_05_11_w_txt_09_blinnphong.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><br />
</div></div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-69499975763006377562010-08-18T22:35:00.003+08:002010-08-18T22:39:08.661+08:00LO-SL BRDF Additional Test Materials<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0HLYHLSu6tVayB4yGMjpzSdarwXvY-xJfkw-9Al5L_wzcp-Q2yDO9IPtOfIWWEHo1aq29hXtoL25st5kW464gvACe_l9pBVOsxozB06JGZ6DRVHFJnj36gQYf9ncOVJ6JPsoTfuegNMU/s1600/BRDF_Material_Indexing_17_08_10.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" ox="true" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0HLYHLSu6tVayB4yGMjpzSdarwXvY-xJfkw-9Al5L_wzcp-Q2yDO9IPtOfIWWEHo1aq29hXtoL25st5kW464gvACe_l9pBVOsxozB06JGZ6DRVHFJnj36gQYf9ncOVJ6JPsoTfuegNMU/s320/BRDF_Material_Indexing_17_08_10.jpg" width="320" /></a></div><div align="justify">As part of my test materials from my previous post about <a href="http://my-pseudocode-life.blogspot.com/2010/08/lo-sl-brdf-explained-part-2.html">LO-SL BRDF</a>, here are some more materials. The furthest 2 rows are from the older test (refer to the previous post). The second row from the front, starting from the left, is matte finish, leather, rough wood, polished varnished wood and plastic. The closest row: opal, jade, ruby, clear glass and my default Blinn-Phong with a nice-and-easy specularity. My favorite material is the clear glass (2nd from the right, closest row); obviously, I'm not doing any transparency yet, but it seems to really simulate how light reacts to clear glass materials. You would notice that it seems I'm only rendering a specular fresnel, but if you look closely, after the fresnel shade there's a very thin line running along the outline. It adds some realism and complexity to the rendering, which a normal fresnel rendering would never accomplish.</div><div align="justify"><br />
</div><div align="justify">Currently, I'm using 256x128 texture look-up tables. I noticed that N.V and Phi I+R can be reduced to half precision without causing obvious banding. Reducing N.V makes sense because, relative to the view, we can only see the front hemisphere of the normal.</div><div align="justify"><br />
</div><div align="justify">Each material has a look-up table texture with 2 channels at 256x128. All of these are then stored in a texture array and used in the light accumulation pass of a light pre-pass rendering pipeline. The pipeline runs on view-space data, considering how much computation we save in this space. Each affecting light reacts with a similar light response and thus adds awesome, almost-free complexity to the rendering. I'm really amazed at how simple this technique is, yet how much it contributes to the overall rendering. With two simple texture-sample taps, it does wonders! A W E S O M E!</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-77701272269383703872010-08-15T02:49:00.001+08:002010-08-15T02:57:40.587+08:00LO-SL BRDF Explained... (Part 2)<div style="text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCobnm_lMu13ayQvf8DT3G-tVCmq5mlMaRNu9Q8mVH-UsirYDtx16L_eKXvZHsKcD9XlAq-6UXdZnyQvhdCrU0ylQJ2DKIWW-cNKZJLvyeh-kMpbGYCMWRO97750__HHxz18tqX8vaD-Y/s1600/BRDF_Material_Indexing_13_08_10B.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" ox="true" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCobnm_lMu13ayQvf8DT3G-tVCmq5mlMaRNu9Q8mVH-UsirYDtx16L_eKXvZHsKcD9XlAq-6UXdZnyQvhdCrU0ylQJ2DKIWW-cNKZJLvyeh-kMpbGYCMWRO97750__HHxz18tqX8vaD-Y/s320/BRDF_Material_Indexing_13_08_10B.jpg" /></a></div><div class="separator" style="clear: both; text-align: justify;">Time flies so fast... wheew. I've been so busy this month, but as I promised, here's the continuation of the <a href="http://my-pseudocode-life.blogspot.com/2010/07/lo-sp-brdf-explained.html">LO-SL BRDF</a>.
Many thanks to Alberto Demichelis for giving me this challenging task, even with some heated discussions along the way, all for the sake of pointing me in the right direction.</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">The screenshot above is a material test running in our pipeline. I removed the ambient light and turned off any AO or post-processes so it's easier to see the light responses. Currently, these are just directional diffuse+specularity, as our requirements do not call for realtime reflection so far. All of the objects are RGB(128,128,128), a true gray, to purposely visualize the high-, mid- and low-tone light response. The light is a directional light with a white color, RGB(255,255,255). The further 4 materials are actually simulating the Blinn-Phong lighting model, with differences in specularity size and a factor to simulate roughness. The nearer 4 materials are metals (or at least I tried to mimic them). Starting from the left: rough iron, brushed metal (notice some anisotropic effect to it), copper, gold and chrome. Of course, the effect will be much more visible once normal mapping is applied to them.</div><div class="separator" style="clear: both; text-align: justify;"><br />
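For reference, the textbook Blinn-Phong specular term that the further four materials mimic raises N.H to a shininess power, where H is the half vector between the light and view directions. A plain sketch, not our shader code:

```python
import math

def blinn_phong_specular(normal, light_dir, view_dir, shininess):
    """Textbook Blinn-Phong: (N . H)^shininess with H the normalized
    half vector. All inputs are unit vectors pointing away from the
    surface; a larger shininess gives a tighter highlight."""
    h = [l + v for l, v in zip(light_dir, view_dir)]
    length = math.sqrt(sum(c * c for c in h))
    if length == 0.0:  # light and view exactly opposite
        return 0.0
    n_dot_h = sum(n * c / length for n, c in zip(normal, h))
    return max(0.0, n_dot_h) ** shininess
```

Varying the shininess exponent and an overall roughness factor is what produces the different specularity sizes in the screenshot.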
</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">What is Light Oriented-Spike and Lobe BRDF? It is a modified, simplified approach to the Bidirectional Reflectance Distribution Function, so much so that it can be used in real-time game applications. It is not, however, an exact mathematical replacement for the full Oren-Nayar reflection model. Its goal is not to be precise but rather to be convincing enough. In researching photometry, I found out that there are different ways of gathering reflectance distribution data for different materials. Some plot the reflectance as 3D or volume data. LO-SL BRDF simplifies this volume of data by mimicking the values with 'strategically aligned' 1D waves along the normal (no tangents needed). These waves are then stored in a lookup table, shrinking the math into a simple texture sample. The beauty of this is that, in a possible future implementation, it can be extended to do away with waves and instead use curves or vector data for better plotting precision before they are saved in the look-up table texture.</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">Here is a screenshot of the look-up generator for the LO-SL BRDF...</div><div class="separator" style="clear: both; text-align: justify;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipXAKWgeZs-t1mtfBCSLlf_y0q0i66uiSPozlS6I6pzvhPAA1s1XwXgHxQik34s6VwlNLYYZiiyMK8rRyojrpsC89IjwhaagnWn07DjVR7W6dWxMx96ha2_0VmLy6fLvLkSOEtb4t-JUg/s1600/BRDF_LUT_Creator_13_08_10.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" ox="true" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipXAKWgeZs-t1mtfBCSLlf_y0q0i66uiSPozlS6I6pzvhPAA1s1XwXgHxQik34s6VwlNLYYZiiyMK8rRyojrpsC89IjwhaagnWn07DjVR7W6dWxMx96ha2_0VmLy6fLvLkSOEtb4t-JUg/s320/BRDF_LUT_Creator_13_08_10.jpg" /></a></div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">You'll notice in the right part of the screenshot 4 groups of control values: Theta Incident, Theta Reflectance, Phi Incident+Reflectance and Specularity. Right now, the Specularity part is an anomaly in my implementation; I still have to research how specularity actually relates to it. (I'll be delighted if anyone can help me with this or even point me in the right direction.) You may also notice that it only deals with monochromatic distortions. Currently, this is all our requirements call for, so we decided to use monochromatic only. Of course, tri-color distortion (meaning individual/independent distortions of the RGB colors), for example the reflectance of bubbles or oil sheen on water, can still be implemented by storing 3 channels per angle. Plus, we are saving the remaining channels for something special... an 'SS' special (*wink* *wink*). Each of these control values (except Specularity) represents one of the 1D waves slicing the full BRDF into 3 1D waves.</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">The control values are...</div><div class="separator" style="clear: both; text-align: justify;"><strong>Theta θ</strong> - the angle of the highest spike, i.e. the peak of the wave</div><div class="separator" style="clear: both; text-align: justify;"><strong>dθ</strong> - the differential angle, i.e. the range/size/cone angle of the lobe</div><div class="separator" style="clear: both; text-align: justify;"><strong>C. Pow</strong> - a power exponent used to steepen the curve</div><div class="separator" style="clear: both; text-align: justify;"><strong>Factor</strong> - the wave magnitude factor</div><div class="separator" style="clear: both; text-align: justify;"><br />
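The post doesn't spell out the exact wave equation the tool uses, so here is only a rough CPU-side sketch in Python of one plausible lobe shape driven by these four controls (all names are hypothetical, and the curve itself is my reconstruction, not the generator's actual code):

```python
import math

def lobe_wave(t, theta, d_theta, c_pow, factor):
    """One plausible 1D lobe: peaks at `theta`, falls to zero at the
    edge of the cone `d_theta`, is steepened by `c_pow`, and is
    scaled by `factor`. `t` is the 0..1 wave parameter."""
    # distance of the sample from the peak, normalized to the cone width
    x = abs(t - theta) / d_theta
    if x >= 1.0:
        return 0.0  # outside the lobe's cone: no contribution
    return factor * math.pow(1.0 - x, c_pow)
```

For example, `lobe_wave(0.5, 0.5, 0.25, 2.0, 1.0)` sits exactly on the peak and returns the full `factor`, while anything more than `d_theta` away from the peak returns zero.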
</div><div class="separator" style="clear: both; text-align: justify;">You can also see that the left part of the screenshot is composed of 4 viewports. The upper left is the preview; the rest show the individual channels of the look-up table. The upper right is the look-up table preview of Theta Incident and Theta Reflectance 'combined' together (to be explained later). The lower left is Phi Incident+Reflectance combined with Specularity; I'm still not convinced my assumptions are correct on this one. Previously I kept Phi I+R and Specularity separate, believing they could not be combined, so they used to occupy the lower left and lower right.</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">Combining these waves into a single 2D channel is possible due to one important commonality... the Normal. The T, or time, of each wave/curve runs from 0 to 1 (actually representing the Normal's -1 to 1 range). The other relation is 'multiplication': the results of the 1D waves are eventually multiplied together, hence we can pre-multiply them.</div><div class="separator" style="clear: both; text-align: justify;"><br />
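Since every 1D wave shares the same 0-to-1 normal-derived parameter and the waves combine by multiplication, baking their product into one 2D channel can be sketched like this (a hypothetical CPU-side Python illustration of the pre-multiplication idea, not the tool's actual code):

```python
def bake_combined_lut(wave_a, wave_b, size=32):
    """Pre-multiply two 1D waves into one 2D channel.
    Both waves are parameterized by the same 0..1 domain (the normal
    remapped from -1..1), so their product can be baked once and
    fetched later with a single 2D lookup."""
    lut = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            ta = i / (size - 1)   # e.g. the Theta Incident axis
            tb = j / (size - 1)   # e.g. the Theta Reflectance axis
            lut[i][j] = wave_a(ta) * wave_b(tb)
    return lut
```

The design point is simply that multiplication commutes with table baking: whatever the runtime would have multiplied per pixel can be multiplied once offline instead.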
</div><div class="separator" style="clear: both; text-align: justify;">I know it's quite confusing. I guess I'm not very good at translating this into words; I need some sort of illustration for it. Anyway, for those who are still with me, here are the formulas:</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">Theta Incident = N.L</div><div class="separator" style="clear: both; text-align: justify;">Theta Reflectance = N.V</div><div class="separator" style="clear: both; text-align: justify;">Phi Incident+Reflectance = V.(normalize(N-L))</div><div class="separator" style="clear: both; text-align: justify;">Specularity = N.H (I'm still not sure whether to keep this)</div><div class="separator" style="clear: both; text-align: justify;"><br />
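As a sanity check, the first three formulas can be evaluated on the CPU like this (a Python sketch; the final remap of the dot products from [-1, 1] into [0, 1] is my own assumption for texture addressing and is not stated in the post):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def lut_coords(N, L, V):
    """Lookup coordinates from the post's formulas, for unit vectors
    N (normal), L (light) and V (view) with N and L not parallel."""
    theta_inc = dot(N, L)                        # Theta Incident     = N.L
    theta_ref = dot(N, V)                        # Theta Reflectance  = N.V
    nml = normalize(tuple(n - l for n, l in zip(N, L)))
    phi_ir = dot(V, nml)                         # Phi I+R = V.(normalize(N-L))
    # assumed remap from [-1,1] to [0,1] so the values can address a texture
    return tuple(d * 0.5 + 0.5 for d in (theta_inc, theta_ref, phi_ir))
```

Note that `normalize(N-L)` degenerates when N and L coincide, so a real shader would need to guard that case.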
</div><div class="separator" style="clear: both; text-align: justify;">Maybe in the future I can produce a study paper on this. Or maybe someone wants to offer me the chance to write an article for their awesome, highly anticipated graphics book. Yes? No? No takers? Oh well. hehhehe...</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-51124018039339911572010-07-12T22:40:00.003+08:002012-03-09T00:09:33.837+08:00LO-SL BRDF Explained...<div dir="ltr" style="text-align: left;" trbidi="on"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPKufeGZ4WJdt2RWcO0Sk0bUi94cI4k6FBpw9YQbMQ_F3kCd8IWxmEOyIeqYJvj_OJAqeyCaAO9Ut94NKtFJWxEneghucboiauHc2GLHvFFUO4lK7AqBO08BlJFSMCZEzH0MErH2GpVls/s1600/LoSPBRDF.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" rw="true" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPKufeGZ4WJdt2RWcO0Sk0bUi94cI4k6FBpw9YQbMQ_F3kCd8IWxmEOyIeqYJvj_OJAqeyCaAO9Ut94NKtFJWxEneghucboiauHc2GLHvFFUO4lK7AqBO08BlJFSMCZEzH0MErH2GpVls/s320/LoSPBRDF.jpg" /></a></div>...sort of. As the screenshot shows, I'm still <a href="http://my-pseudocode-life.blogspot.com/2010/07/fast-diet-lo-sl-brdf.html">tinkering with BRDF</a>. I turned off other effects, including shadows, so I have a better feel for what I'm doing. It's better to reduce our parameter control points; otherwise we end up alchemistically mixing one variable or attribute into another (which I often do), and the outcome is that we understand the result less and ultimately waste time. In my approach to BRDF, I use this same concept to break the function down for a true real-time game application. Before I expound on this, I must first explain what BRDF is (as I understand it).<br />
<br />
Bidirectional Reflectance Distribution Function, or BRDF, literally means two directions composing how light reflection is distributed on the surface of a material. These 2 directions are the light direction towards the surface, which is the 'IN' (hence called the 'incident' light), and the eye-to-surface or view direction, which is the 'OUT' (called the 'reflectance'). Each type of material surface reacts differently with light, and this reaction is essentially a distortion of the light. Because of this distortion, we perceive textures and colors reflecting directly or indirectly from objects. For a reflection of light, this means that unless the surface is perfectly flat, the reflection can be off by the time it hits our eyes. Of course this perception is based on the characteristics of the light we use: light a red light, and we see red blending onto the surfaces. A very good example is shooting a billiard ball at the table sides; the direction of the force determines how the ball reacts to the side wall's texture and how it ultimately bounces after some energy distortion or absorption, and the result shows how far the ball will be off from the hole.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUeGSXuDsDvzsG8e_-jpUUIG_vvdcMtgZ6vh6HL-wTK3xZq_exs6XeeyZAF89zMT_T7wSv5WbNi_u48wgotxufOTUd7UQakWl-86mI2o_J1SuyinvZ2mh1eI5XP5UjQkZDqPLm_LCq8vA/s1600/Oren-nayar-reflection.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" rw="true" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUeGSXuDsDvzsG8e_-jpUUIG_vvdcMtgZ6vh6HL-wTK3xZq_exs6XeeyZAF89zMT_T7wSv5WbNi_u48wgotxufOTUd7UQakWl-86mI2o_J1SuyinvZ2mh1eI5XP5UjQkZDqPLm_LCq8vA/s320/Oren-nayar-reflection.jpg" /></a></div>This diagram is based on the <a href="http://en.wikipedia.org/wiki/Oren%E2%80%93Nayar_Reflectance_Model">Oren-Nayar reflection model</a>. Here is a very informative link to a <a href="http://www.cs.cmu.edu/afs/cs/academic/class/16823-f06/">series of lectures</a> regarding reflectance and other photometric phenomena.<br />
<br />
<br />
Now here comes the technical part. For a BRDF to be used in graphics rendering, one would typically need the following: light direction, view direction, normals and tangents. In addition to that, we need the Theta and Phi of both the Incident and Reflectance directions, as we are dealing with 3D angles called <a href="http://en.wikipedia.org/wiki/Solid_angle">Solid Angles</a>. <br />
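To make the theta/phi idea concrete, here is a small Python sketch that splits a unit direction into the polar/azimuth pair used to parameterize a solid angle (a generic illustration, not this implementation's code; it assumes +Z stands in for the surface normal, and the tangent frame is what fixes where phi = 0):

```python
import math

def to_spherical(d):
    """Split a unit direction into (theta, phi): theta is the polar
    angle measured from +Z (the normal here), phi the azimuth around
    it. The clamp guards acos against tiny float drift."""
    x, y, z = d
    theta = math.acos(max(-1.0, min(1.0, z)))  # 0..pi
    phi = math.atan2(y, x)                     # -pi..pi
    return theta, phi
```

A direction along the normal gives theta = 0, while a direction in the surface plane gives theta = pi/2; phi is where the tangent basis becomes necessary, which is exactly the data the 'LO' idea below tries to avoid needing.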
<br />
My implementation is composed of 2 ideas/theories for reducing the function's complexity. <br />
<br />
The first part is 'LO' or Light Oriented: to reduce the required data, everything is oriented to the Phi Incident direction. With this, we do not need the tangents, and (considering we do not care about <a href="http://en.wikipedia.org/wiki/Subsurface_scattering">subsurface scattering</a>) we can simply add the Phi Incident to the Phi Reflectance. By doing so, we cut one dimension from the BRDF's requirements. We then assume that the incident light on the Phi angle is perfectly aligned to the Oren-Nayar model. With the sum of the Phi Incident and Phi Reflectance angles we still get an almost perfect similarity of light distortion/absorption compared to the complete function.<br />
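The reduction itself is tiny when written down: the four BRDF angles collapse into three once the two azimuths are folded together (a trivial Python illustration of the idea, names hypothetical):

```python
def losl_params(theta_i, phi_i, theta_r, phi_r):
    """'LO' reduction sketch: orient everything to the incident phi
    so the two azimuths collapse into their sum, dropping the BRDF
    from four angular dimensions to three."""
    return theta_i, theta_r, phi_i + phi_r
```

Three parameters instead of four is what makes the 2D look-up channels in the generator feasible at all.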
<br />
The second part is 'SL' or Spike and Lobe theory. The standard illustration of this function's 'light lobes' is always against the light direction. In my theory, I use the function's lobe and spike but this time not only against the light but also towards it. This lobe represents how much energy/light is absorbed and bounced by the surface before it reflects light.<br />
<br />
This data is then stored somewhere: it can be an N.L/N.H lookup table like <a href="http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html">STALKER's in GPU Gems</a>, or a Lafortune lighting model (using a matrix to mimic the distortion of light). In my implementation I used the flattening of Phi Incident/Phi Reflectance, which I hope to explain in later posts.<br />
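For the lookup-table route, a fetch indexed by N.L and N.H might look like this on the CPU (a hedged Python sketch of the general idea with nearest-neighbor addressing; a real shader would use a filtered 2D texture fetch, and the [-1,1] remap is my assumption):

```python
def sample_lut(lut, n_dot_l, n_dot_h):
    """Fetch a pre-baked material response from a square 2D table
    indexed by N.L and N.H. Dot products in [-1, 1] are remapped to
    table indices with round-to-nearest."""
    size = len(lut)
    u = int((n_dot_l * 0.5 + 0.5) * (size - 1) + 0.5)
    v = int((n_dot_h * 0.5 + 0.5) * (size - 1) + 0.5)
    return lut[u][v]
```

The appeal of this design is that the whole material response collapses into two dot products and one texture read per pixel, whatever complexity went into baking the table.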
<br />
So there you have it, Light Oriented - Spike and Lobe BRDF implementation.... (batteries sold separately). Kinda' neat-o-burrito aye?<br />
<br />
Btw. regarding the screenshot. I just guessed the BRDF parameters hence the '?' on the labels. hehehhe</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com5tag:blogger.com,1999:blog-6829904489614710740.post-37831642490689031482010-07-02T19:57:00.000+08:002010-07-02T19:57:21.870+08:00Fast Diet "LO-SL" BRDF<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg01NjlZkCHNsPXWYOCB4UfMN0mg9db0LKzmE5E_EbQtf-dJM-zMM5pw_J3BleZf-RHt7m5D3vgTF0cxzNzAj8NAuujxAg7JYL9dIUF0kFm8vlt1FoRM_hke-J4T5t1SX6k7qrnaowHE9A/s1600/LO_SL_BRDF.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" rw="true" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg01NjlZkCHNsPXWYOCB4UfMN0mg9db0LKzmE5E_EbQtf-dJM-zMM5pw_J3BleZf-RHt7m5D3vgTF0cxzNzAj8NAuujxAg7JYL9dIUF0kFm8vlt1FoRM_hke-J4T5t1SX6k7qrnaowHE9A/s320/LO_SL_BRDF.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">After a long absence from blogging, I'm finally back with some graphics goodies! As you may notice in the screenshot, I'm doing some <a href="http://en.wikipedia.org/wiki/Bidirectional_reflectance_distribution_function">BRDF</a> or Bidirectional Reflectance Distribution Function magic. These past few days I was in deep <a href="http://en.wikipedia.org/wiki/Photometry_(optics)">photometry (optics)</a> waters... dived and almost drowned. Anyway, back to the topic at hand: BRDF, in a nutshell, is a formula or process for understanding how light reacts differently to different materials. For example, when a light beam hits a matte material such as leather, the light is diffused along the material, spreading it across the surface. Compare that to a mirror: instead of being diffused, the beam is reflected and you see a contrasting (not spread) light in the mirror. Another example is when white light hits a prism and splits into the visible chroma colors.</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">Now the tricky part here is implementing this in games. Considering how much computation is involved in a BRDF, it's likely impossible with our current technology to perform a full BRDF in a real-time game. Several games and game engines have come up with tricks and simplifications to try to mimic this function.</div><div class="separator" style="clear: both; text-align: justify;"><br />
</div><div class="separator" style="clear: both; text-align: justify;">So, to put it simply, this is my humble attempt at implementing a BRDF in a real-time application. As of now, this screenshot is fresh from the oven, and right now I cannot disclose how I implemented it. (Need some permission from the big 'squirrel' guy, lolz.) Anyway, I hope to follow up on this when the coast is clear.</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-14492564562461751112009-08-21T12:05:00.012+08:002009-10-14T16:10:54.902+08:00Screen Space Global Illumination: Screen Space Gone Mad<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4x5iE0l1JPBbWD1YcYX62oVR2pIHHV0EkgovuYAc7GeL_tLUteS6bRMpiR8PIUPaHnHTMNhucmve2G7Wer8jaziPY87MtJyptw5BBIGT5IryggBSoviXZ59a8a-vbQ_INeXuvSaPEbiA/s1600-h/SSGI_20_08_09.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 301px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5373115493216624082" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4x5iE0l1JPBbWD1YcYX62oVR2pIHHV0EkgovuYAc7GeL_tLUteS6bRMpiR8PIUPaHnHTMNhucmve2G7Wer8jaziPY87MtJyptw5BBIGT5IryggBSoviXZ59a8a-vbQ_INeXuvSaPEbiA/s400/SSGI_20_08_09.jpg" /></a> <div>It seems that when SSAO (Screen Space Ambient Occlusion) was discovered, the SS goodness kept coming. Back in the day, to get AO or GI (global illumination) one would either pre-process it and store it in vertex colors or lightmaps (or go <a href="http://www2.ati.com/developer/gdc/D3DTutorial10_Half-Life2_Shading.pdf">RNM - Radiosity Normal Map</a>), or use ray-tracing techniques, which are still too framerate-heavy to be used in real-time applications. </div><div><br /></div><div><a href="http://en.wikipedia.org/wiki/Global_illumination">Global Illumination</a>, in simple summary, is an approximation of light bounced onto surfaces by indirect lighting. 
As light rays hit an object, they bounce from surface to surface. With each bounce, a ray changes color and intensity based on the material or inherent color of the surfaces. It's the same reason we can still see inside a house (with windows) even when the sun is directly above our rooftops, and the same reason rooms with light-colored paint on the walls tend to be brighter. (GI is actually the proper value of the ambient color we usually add to our lighting.)</div><br /><div>There are several algorithms for computing GI in 3D graphics, but all have the same concept in mind... it's either an approximation of an approximation or just plain approximation (hehehehhe).</div><div></div><br /><div>Enter Screen Space Global Illumination. I've read through one implementation, aptly named <a href="http://isdlibrary.intel-dispatch.com/vc/2552/SSDO.pdf">SSDO or Screen Space Direct Occlusion</a>, which is an image-based approximation of GI. I must say it's really impressive, minus some maths that goes over my head, but that's my fault. Mr always-have-something-brilliant-in-mind Wolfgang Engel wrote a short but interesting <a href="http://diaryofagraphicsprogrammer.blogspot.com/2008/06/screen-space-global-illumination.html">post on his blog</a> regarding a simpler extension of SSAO into an SSGI implementation. And since it fits sooooo well into the Light Prepass rendering design, trying it out... was inevitable (with Agent Smith's echoes).</div><br /><div></div><div>Understanding how <a href="http://my-pseudocode-life.blogspot.com/2009/07/no-projection-but-proper-ssao-new.html">SSAO </a>works, it's easy to extend it to get at least a single bounce of indirect illumination. By doing the same thing as SSAO for GI, it is safe to say that a particular pixel on a surface is close enough to receive radiated color from another surface if the occlusion test succeeds. 
Hence, by sampling the albedo color of that surface and averaging it, you get the average radiosity that pixel receives. The question now is: where do we get the pre-bounce intensity?</div><br /><div></div><div>Working at Zealot Digital here in Singapore, my Lead Programmer is Alberto Demichelis, the author and maker of the <a href="http://squirrel-lang.org/default.aspx">Squirrel scripting language</a> (AAA games are using it now btw, one big dead-walking title). He gave me a great idea that I had overlooked in Mr Engel's post: using the Light Accumulation buffer as the intensity of the bounce. By combining (this is tricky) the projected shadow term and the light accumulation and transforming them into black and white, we can use this as the radiosity intensity. Right now, what I use is lerping through this value between the pixel that receives the bounced color and the original albedo.</div><br /><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// SS Magic: </span></span><br /><span style="font-family:courier new;font-size:85%;">for(int i=0; i&lt;NUM_SAMP; i++)</span><br /><span style="font-family:courier new;font-size:85%;">{</span><br /><span style="font-family:courier new;font-size:85%;color:#009900;">// here u do the AO generation stuffs</span><br /><span style="font-family:courier new;font-size:85%;">if(occNorm > occ_thres)</span><br /><span style="font-family:courier new;font-size:85%;">{</span><br /><span style="font-family:courier new;font-size:85%;">float3 sampleAlbedo = tex2D(albSamp, uv + offsetnoise);</span><br /><span style="font-family:courier new;font-size:85%;">float intensity = dot(tex2D(lightAccum, uv + offsetnoise), 1);</span><br /><span style="font-family:courier new;font-size:85%;">resultRad += lerp(sampleAlbedo, curAlbedo, intensity);</span><br /><span style="font-family:courier new;font-size:85%;">}</span><br /><span style="font-family:courier new;font-size:85%;">}</span><br /><span style="font-family:courier new;font-size:85%;">resultRad /= NUM_SAMP;</span><br /><span style="font-family:courier new;font-size:85%;color:#009900;">// pls note that I'm just recoding this through recollection but the idea is here.</span><br /><br /><div>We can further extend this by going through the pass again, but this time using the current GI as the intensity to simulate multiple light bounces. 
Buuuuuuut I didn't bother to try; I think a single bounce will suffice in a game application.</div><br /><br /><div>After implementing this, I realized that it's possible to 'fit' this into the SSAO pass. This would save the blur passes needed to remove the graininess (the screenshot isn't the combined SSAO and SSGI implementation yet). (A screenshot of this implementation will follow... hopefully.)</div><br /><div></div><br /><div></div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com1tag:blogger.com,1999:blog-6829904489614710740.post-10387966874408319272009-07-28T18:42:00.018+08:002009-07-31T12:01:37.430+08:00SSAO Blurring: Making It Less Smart But Low In CarbohydratesAs the title points out... making a slimmer, less-of-a-genius smart SSAO blur.<br /><br />Screen Space Ambient Occlusion commonly uses two passes: first the ambient occlusion generation (see 'my' <a href="http://my-pseudocode-life.blogspot.com/2009/07/no-projection-but-proper-ssao-new.html">Accurate 2D SSAO</a>) and then the blur pass to remove the graininess of the AO. Unfortunately, the blur pass is not your average toolbox blurring. It's all because of the <strong>'edge'</strong> of the models or of the relief normals: the blurring must be 'smart' enough not to blur over edges, otherwise bleeding will occur. The common idea around the game dev community is to make a smart blur by using a delta depth/normal check similar to the one in the AO generation. (If it's beyond a threshold, it's an edge.) 
This would mean, however, doing this for every sample, which is typically more than once to get proper smoothness. The result is that the blur pass becomes more complicated and heavier than the actual AO generation.<br /><br /><br />Hence, I came up with a simple solution for lessening the calories of the SSAO blur pass: reusing data already computed by the AO pass. How? The delta (depth or normal comparison). Take that delta and compare it with an edge_threshold. This means we are doing it while still in the AO sampling. Let me explain.<br /><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// SSAO: AO generation pass</span> </span><br /><span style="font-family:courier new;font-size:85%;">for(int i=0; i&lt;NUM_SAMP; i++) </span><br /><span style="font-family:courier new;font-size:85%;">{ </span><br /><span style="font-family:courier new;font-size:85%;color:#009900;">// here u do the AO generation stuffs </span><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// use if(deltaN > edge_threshold) if u want finer details</span></span><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// deltaN = 1-dot(N, Nsample) or deltaZ = depth - depthSamp</span></span><br /><span style="font-family:courier new;font-size:85%;">if(deltaZ > edge_threshold) { edge++; } </span><br /><span style="font-family:courier new;font-size:85%;">} </span><br /><span style="font-family:courier new;font-size:85%;">edge /= NUM_SAMP;</span><br /><br />The result is gradient data of the edges. One price we pay is encoding the AO in dual channels... one for the occlusion and one for the edge data (encoded as 1-edge). Now for the kicker... we will use this data NOT as a toggle flag for whether to blur or not to blur... 
but as a <strong><em>size factor</em></strong> of the <strong><em>blur radius</em></strong>.<br /><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// SSAO: Blur pass</span> </span><br /><span style="font-family:courier new;font-size:85%;">float2 origSamp = tex2D(AOSamp, IN.uv).xy; </span><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// x=occlusion; y=edge_data</span></span><br /><span style="font-family:courier new;font-size:85%;"></span><br /><span style="font-family:courier new;font-size:85%;">float2 blurKern = InvTextureSize * radius * origSamp.y;</span><br /><span style="font-family:courier new;font-size:85%;"><span style="color:#009900;">// the edge data resizes the kernel as it goes closer/further away from the edge</span></span><br /><span style="font-family:Courier New;font-size:85%;color:#009900;">// when origSamp.y is ZERO, this means there's no offset</span><br /><span style="font-family:Courier New;font-size:85%;color:#009900;">// therefore there's no blur, no edge bleed!</span><br /><span style="font-family:courier new;font-size:85%;">for(int i=0; i&lt;NUM_SAMP; i++) </span><br /><span style="font-family:courier new;font-size:85%;">{ </span><br /><span style="font-family:courier new;font-size:85%;">float2 offsetUV = IN.uv + (samples[i] * blurKern); </span><br /><span style="font-family:courier new;font-size:85%;">ret += tex2D(AOSamp, offsetUV).x; </span><br /><span style="font-family:courier new;font-size:85%;">...</span><br /><br />If you notice, this just uses one extra sample for the channel where the edge data is stored and one multiplication... that's it! We have just saved tons of operations compared to the common smart blur pass. 
Less smart but low in carbs!John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com1tag:blogger.com,1999:blog-6829904489614710740.post-27966989253402527822009-07-17T18:22:00.026+08:002009-07-20T18:56:11.559+08:00Accurate 2D SSAO: A new implementation?<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJjBjc7vz2svncOmIe4-X9SD6A1eFpMUwQrddzDJa0UU6yg3czYt4RPW7sUb6zg2wIZsvY_XVC1zPow1UCtsffMoXBcJAj_mmXvUmQkDhpAIDurAwCcJMeyQW5SXdxeUyoXOlT7R1mqqE/s1600-h/SSAO_new_implementation170709.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 300px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5359385581716206786" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJjBjc7vz2svncOmIe4-X9SD6A1eFpMUwQrddzDJa0UU6yg3czYt4RPW7sUb6zg2wIZsvY_XVC1zPow1UCtsffMoXBcJAj_mmXvUmQkDhpAIDurAwCcJMeyQW5SXdxeUyoXOlT7R1mqqE/s400/SSAO_new_implementation170709.jpg" /></a><br /><div>Well, I hope it is... lol. SSAO, or Screen Space Ambient Occlusion, is a way to approximate global illumination light and shadow. Basically, it's like shadow created by indirect lighting or bounced light. 
SSAO was first presented by Crytek a few years back with their CryEngine 2 for the game Crysis (too much Cry, lol). I first implemented the <a href="http://www.gamedev.net/community/forums/topic.asp?topic_id=463075&PageSize=25&WhichPage=4">pure 2D SSAO</a> (depth compare) by Arkano22, which is quite straightforward. It basically compares the depth of a randomly offsetted sample. 
I also did the one from the <a href="http://www.gamerendering.com/2009/01/14/ssao/">GameRendering</a> website, which uses projected randomized offsetted normals, projected back to image space. I find these two implementations very interesting.</div><br /><div></div><div>The projected technique is, I would say, the correct computation of SSAO, as it compares occlusion from the projected normals. But the price of projecting and transforming back to texture space is just too heavy, especially since this is done in every sampling (8 or 16).</div><br /><div>In terms of speed nothing beats pure 2D SSAO of course, but it is only an estimate because the angle of the normals is not taken account of... in short, it would not work in extreme cases. 
This becomes obvious when the scene is rotated on an axis: the AO shrinks and expands.<br /></div><div>Hence, I came up with a different approach to computing the SSAO. This is the sucker-punch question: why do I need to project the normals back to image space each sample if I'm already plotting data in 2D space? Pure 2D was correct; I agree with that technique's assumption. Projecting normals is also correct, as it is the proper estimation of occlusion.</div><br /><div>My implementation in the screenshot above works as pure 2D SSAO but with the normals taken into account WITHOUT projecting to texture space. We know normals are directions. By simply extending one a few units, what we get is the expanded version of the normal, as if we were scaling the vertices up. With that in mind, even though this is 3D space, the offset is really just two-dimensional information (as in a deferred uv as data). Then divide it by the current depth so we offset properly and PRESTO, we've got ourselves the UV offset of the sample based on the extended normals. 
Now do an original-normal vs offsetted-normal comparison when sampling and you get the same occlusion as with projected normals. The unique part here is that I compute the normal's offsets prior to sampling, which means I am doing a projected-normal comparison in pure 2D space. (My hunch is that this is also possible with a depth comparison instead of normals; I'll probably test that tomorrow.)<br /></div><div>I don't know if this is a new implementation, but the cost is as if I were just doing a 16-sample filter, or a simple Depth of Field, or simpler. It is more optimized than projected normals such as Mr <a href="http://iquilezles.org/www/articles/ssao/ssao.htm">Iñigo's implementation</a>: no matrix multiplication, no reflect function, no sign function, etc., and the result is theoretically the same. 
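The offset computation described above can be sketched on the CPU like this (a Python illustration reconstructed from the description, not the shader itself; the `radius` scalar and the exact depth-division form are my assumptions):

```python
def normal_offset_uv(uv, normal, depth, radius):
    """Sketch of the extended-normal offset: push the sample point
    along the view-space normal's XY, shrink with depth so the
    kernel stays perspective-correct, and land directly in texture
    space with no matrix transform back from world space."""
    u, v = uv
    nx, ny, _ = normal  # only the screen-plane components offset the UV
    return (u + nx * radius / depth,
            v + ny * radius / depth)
```

Comparing the normal fetched at this offset UV against the original normal then plays the role of the occlusion test, and because the offset shrinks with depth, distant surfaces get proportionally smaller kernels, which is the behavior the projected-normal technique pays a matrix transform for.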
With a simple multiplication of the normals, used as the offset position prior to sampling, it achieves a similar result.</div><br /><div></div><div>ADD:</div><br /><div>I looked into NVIDIA's sample of their own SSAO implementation, but the mathematics is way beyond me, so I didn't bother trying it out. And considering how many mathematical instructions it has to go through, I bet it's heavier, too.</div><br /><p>ADD: 20/07/09 Accurate 2D SSAO with 'not so good' noise texture<br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvulr0F7utDm7oFCpGf5DfGPxjTYoOL_wdf5U2gFYvOsqpQU8ZPp0S1mAEWEJEbPSLvxaqANHYzk10s0HvrsD4bjyBwB92zayn9Cvkg_sZq3CsE018HBr2z4idsp68mMkyHnJ40WDLFno/s1600-h/SSAO_new_implementation200709.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 300px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5360384429559829666" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvulr0F7utDm7oFCpGf5DfGPxjTYoOL_wdf5U2gFYvOsqpQU8ZPp0S1mAEWEJEbPSLvxaqANHYzk10s0HvrsD4bjyBwB92zayn9Cvkg_sZq3CsE018HBr2z4idsp68mMkyHnJ40WDLFno/s400/SSAO_new_implementation200709.jpg" /></a></p>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com12tag:blogger.com,1999:blog-6829904489614710740.post-26743291500391692582009-07-07T16:49:00.010+08:002009-07-07T22:33:29.854+08:00Light Prepass: Dual Paraboloid Shadow Mapping<a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKtuQoAk4QKjGMFIU30-pYMiphpvjbxtYq0p1RtWHDTO638hCQ81eJzW8lpzkKlH-4I1cm93ZD1OJfgRp2m9aw37Hl10-t5J-Pt0cE-vXHRrLZzUXueWYF42MC5BJ7F8g77j3dyPzovN0/s1600-h/ShadowMap_DualPara_07_7_09.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 300px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5355646555907773250" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKtuQoAk4QKjGMFIU30-pYMiphpvjbxtYq0p1RtWHDTO638hCQ81eJzW8lpzkKlH-4I1cm93ZD1OJfgRp2m9aw37Hl10-t5J-Pt0cE-vXHRrLZzUXueWYF42MC5BJ7F8g77j3dyPzovN0/s400/ShadowMap_DualPara_07_7_09.jpg" /></a> <div>Shadows are quite essential in helping perceive depth and distance. Some games even deliberately exaggerate them in order to deliver emotion, intrigue and/or drama to the player (e.g. Bioshock, Dead Space, etc.). Unfortunately, simulating shadows is an expensive luxury, though to some a necessity. Shadow mapping is one of the most common shadow rendering techniques: a shadow map is rendered from the perspective of the light's eye, storing depth values in a texture, which is then used for the depth test in the shadow projection pass. Simple as it may sound, when it comes to omni-directional lights (imagine a sun, lighting in every direction) the complexity drastically increases. 
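The two passes just described boil down to something like this toy Python sketch, where a 1D list stands in for the shadow map texture (names are illustrative, not my engine code):

```python
def build_shadow_map(depths_from_light, size):
    """First pass: keep only the nearest depth the light sees per texel."""
    shadow_map = [float("inf")] * size
    for texel, depth in depths_from_light:
        shadow_map[texel] = min(shadow_map[texel], depth)
    return shadow_map


def in_shadow(shadow_map, texel, fragment_depth, bias=1e-3):
    """Second pass: a fragment is shadowed if something nearer occludes it.

    The small bias avoids self-shadowing ("shadow acne") from precision.
    """
    return fragment_depth > shadow_map[texel] + bias
```

For an omni-directional light you would need this test to cover every direction around the light, which is exactly where the complexity explodes.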
</div><br /><div></div><div>One solution is to render shadow maps along the 6 primary 3D axes (up, down, left, right, forward and backward). This means rendering the light frustum 6 times, 600% of the time spent on a single light source, not to mention 6 shadow map textures: heavy on both speed and memory. (EDIT: Mei de Koh, a friend of mine, added that it's possible to use a virtual cube map shadow map so as not to render the scene 6 times... I haven't studied that one though.) Enter Dual Paraboloid Shadow Mapping.</div><br /><div></div><div>Dual Paraboloid Shadow Mapping is a way of optimizing omni-light shadows. Instead of 6 maps, it only uses 2. How? Imagine curving the lens to the point that if you render the scene pointing forward and then pointing backward, you get a close-to-perfect spherical vista of the scene. This can also be used for environment mapping (reflections), but that's for another time. The image you see above uses this technique. As you may notice, the shadow penumbra differs based on position and distance to the light source. 
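The "curved lens" is a paraboloid: each hemisphere around the light gets its own map. A rough Python sketch of the standard front-hemisphere projection (not my shader code) looks like this:

```python
import math

def paraboloid_project(direction):
    """Map a direction (light at the origin) onto the front paraboloid.

    Returns (u, v) in [-1, 1]. The paraboloid reflects every incoming
    ray toward +z, which works out to dividing the normalized xy by
    (1 + z); directions with z <= 0 belong to the backward map.
    """
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    return (x / (1.0 + z), y / (1.0 + z))
```

Straight ahead lands at the center of the map, and directions 90 degrees off-axis land on the unit circle; doing this per vertex is also why long, coarsely tessellated geometry bends badly, as noted below.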
That said, according to the STALKER developers' article in <a href="http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html">GPU Gems 2</a>, they avoided this technique because they were using deferred rendering (very similar to the Light Prepass I am using). I don't really know why they stated that, but it seems to work on my side. Hopefully, this will be enough for our game's requirements.</div><div></div><br /><div>(Technical clue: it really pays off when your native space is View space.)</div><br /><div></div><div>There is one technical drawback in my implementation, though. As I curve the scene into a paraboloid while rendering the light's perspective, some meshes don't bend well (I'm doing this in the vertex shader). An example is the plane in the image above. I spent a bit of time trying to solve this: when the plane was bent, weird black artifacts appeared on its edges. 
My solution was to clear the shadow map to WHITE first, then do multiplicative blending (src=DestColor; dest=Zero) when rendering the shadow depth. Then presto! No more black edges.</div><br /><div></div><div>Next, I need to optimize my shadow implementation. If you look closely, there's still some tweaking to do on the shadow edges. Cheers!</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com5tag:blogger.com,1999:blog-6829904489614710740.post-28946817930030017602009-06-30T21:54:00.007+08:002009-06-30T23:39:36.852+08:00Light Prepass Cascaded Shadow Mapping, continued<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_ABYdjfi7OYI_up9ce9BWru7lUJ62S9UXOvKjE3YhjAjrTjshm3MTu2MrCc12gXiMNrXlm72l5a74H4MOQ6JIyUxxUUErtAdd-2Ag5UM0-QKrk_OF5-0HGkvnfXn0ZOd-0ChL1KuNlfY/s1600-h/ShadowFilters_30_6_09c.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 300px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5353124072906715186" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_ABYdjfi7OYI_up9ce9BWru7lUJ62S9UXOvKjE3YhjAjrTjshm3MTu2MrCc12gXiMNrXlm72l5a74H4MOQ6JIyUxxUUErtAdd-2Ag5UM0-QKrk_OF5-0HGkvnfXn0ZOd-0ChL1KuNlfY/s400/ShadowFilters_30_6_09c.jpg" /></a> <div><div>Ah... 
shadow map filtering; I didn't know before that there are so many of them. (The screenshots use a 512x512 resolution shadow map, to easily compare the various filters I implemented.) In my older screenshots, I was using the 5x5 PCF (middle; PCF is short for Percentage Closer Filtering), which is, of course, better at smoothing out the shadows but the heaviest of all the implementations I did. The 4 Tap PCF (left) is the fastest but the ugliest: not very nice if your shadow map resolution is low. I also tried Gaussian blur and random filters (not in the screenshot), as well as non-PC filtering like Variance Shadow Mapping, but considering I'm doing Cascaded Shadow Mapping, VSM has a bigger memory appetite, since it needs two channels to store depth and depth*depth. Which brings me to my own implementation.</div><div> </div><div> </div><div></div><div>Similar to what I've done in the past, and am sometimes good at... I dub or name stuff. I dubbed my implementation<span style="color:#330099;"> <strong>8 Tap O-PCF </strong><span style="color:#000000;">or </span><strong>Occasional-PCF</strong></span>. It may sound funny, but that's only half the point. The real reason I named it that is how it optimizes itself. Let me explain: (inhales intensely)</div><div> </div><div> </div><div></div><div></div><div></div><div>The first square 4 taps of the 8 provide enough information on whether to proceed with the other 4 taps. 
By dissecting the texel into a 3x3 grid, the center is deliberately ignored, because the size and number of texels sampled are already enough. Samples are taken roughly 1/3 of a texel around the texel and should not go outside it. The wonderful thing about this is that I'm sampling at most 8 times, but only the first 4 if the shadow test fails. If you closely examine the inner penumbra of the shadow, you'll notice its smoothness, almost to the point that it appears to be using a higher shadow map resolution. Of course the outer penumbra will still be jagged, but considering the 3 tones I added, with the edges lightest, and a good Depth of Field, this can even look like 2048x2048 shadows. I like the flexibility of this filtering, so much so that it can already fake shadow bleeding, beating the soft PCFs in terms of speed and control. So there you have it: 8 Tap O-PCF... sounds like those nifty items in Monkey Island, like Mug O'Grog or Spit O'Matic. Just say, 8-Tap-O'Pac-eF.... (wind-blown tumbleweed) ...nah, that was corny. Ha!</div><div> </div><div> </div><div></div><div></div><div></div><div>I wouldn't say I'm finished with this topic... because I'll definitely be revisiting it once the renderer is formalized. 
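In rough Python, my reading of that early-out flow looks like this (a sketch of the idea only, not my HLSL; the tap layout, names and early-out condition are illustrative):

```python
# First ring: 4 corner taps; second ring: 4 edge taps, taken "occasionally".
CORNER_TAPS = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
EDGE_TAPS = [(0, -1), (-1, 0), (1, 0), (0, 1)]

def occasional_pcf(shadow_test, x, y, tap_scale=1.0 / 3.0):
    """shadow_test(x, y) returns 1.0 (lit) or 0.0 (shadowed) at a sample.

    Taps stay about 1/3 of a texel around the center texel. If the first
    4 taps agree, we stop early; only penumbra pixels pay for all 8.
    """
    taps = [shadow_test(x + dx * tap_scale, y + dy * tap_scale)
            for dx, dy in CORNER_TAPS]
    if all(t == taps[0] for t in taps):
        return taps[0]  # uniform: fully lit or fully shadowed, 4 taps only
    taps += [shadow_test(x + dx * tap_scale, y + dy * tap_scale)
             for dx, dy in EDGE_TAPS]
    return sum(taps) / len(taps)  # 8-tap average inside the penumbra
```

Fully lit and fully shadowed regions, which dominate the screen, exit after 4 taps; only the penumbra pays the full 8-tap price, which is where the "self-optimizing" name comes from.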
Next stop: Dual Paraboloid Shadow Mapping for our indoor and outdoor shadowing needs.</div></div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com6tag:blogger.com,1999:blog-6829904489614710740.post-50855972589079283432009-06-24T17:41:00.008+08:002009-06-25T11:11:50.771+08:00Light Prepass Cascaded Shadow Mapping<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMOJUumk1LbxMzNdAuADAGLirnT4qOcYeUwDGpmDDcjYz3RzT5UtYeL8YRrArXFomjb3JIy4gUS_zPXTCrHy4VgyFMvUVgOkobaza0G0Rf7jnyrE1m9yZWAguyU6Iv-ADQmGggqpqXhvk/s1600-h/ShadowMap_24_6_09.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 300px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5350827113212146546" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMOJUumk1LbxMzNdAuADAGLirnT4qOcYeUwDGpmDDcjYz3RzT5UtYeL8YRrArXFomjb3JIy4gUS_zPXTCrHy4VgyFMvUVgOkobaza0G0Rf7jnyrE1m9yZWAguyU6Iv-ADQmGggqpqXhvk/s400/ShadowMap_24_6_09.jpg" /></a><br />Updates, updates, updates (actually, it's only an update. Singular). The image you see above is Light Prepass rendering with Cascaded Shadow Mapping. Each of the color changes in the image represents a split of the CSSM, based on distance from the camera.<br /><br />I first implemented Nvidia's <a href="http://developer.download.nvidia.com/SDK/10.5/opengl/src/cascaded_shadow_maps/doc/cascaded_shadow_maps.pdf">CSSM </a>implementation (I intentionally didn't post the results here because how I implemented it was too embarrassing to show); I couldn't seem to stabilize it. Then I tried Wolfgang Engel's [ShaderX5] CSSM, which I then simplified based on Michal Valient's [ShaderX6]. I didn't implement it exactly as he did, though, primarily the MEC (minimal enclosing circle) part. I instead used a transformed-axis bounding box on each frustum split. 
With a bounding box, I don't have to recompute the split even if I rotate or translate the camera. Depth can also be constant, depending on the requirements. What I did is simply use the radius of the BB plus the length of the difference between the center of the BB and the original light position. The center of the BB is on the untransformed axis. I call it Camera-Frustum-Split Bound Depth. It's just my fancy way of saying a 'getting enough precision from an orthographic shadow' kind of thing. In simple terms, imagine a light tripod attached on top of a helmet, beaming downward in a certain direction, with a special gimbal ignoring the panning, yawing and rolling of the head (rolling heads... o_O). The distance of the light from the eyes is constant. Though I think this is only applicable to outdoor, sun-type scenes.<br /><br />For the shadow view matrix, I preserved the light direction in the (transformed) BB space, so whenever I turn or move the camera, the depth and shadow view angle are constant.<br /><br />I still have to work on the filtering and the fading into the next shadow split. The initial part I've done, the transition of cascades, is based on the pixel's view-space depth versus the far distance of each split. My plan is to gradually fade between splits. I chose this way because it makes sense, at least for me, for transitions to be based on the field of view of the eye. I read somewhere, I can't remember where, that this also avoids shadow split transition popping.<br /><br />Btw, I store the splits in separate channels rather than splitting a single-channel texture. I don't know how this will affect my rendering, or if this is better or worse, but it's good enough... 
for now.John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-7251624291206006242009-06-12T13:49:00.006+08:002009-06-12T14:24:35.306+08:00Light Prepass Shadow Mapping<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEW62Q1EBq7ZqZE4KOxo_ebFHuGZuyuWZLbW0APiZ9nwlMkNzC8IynDboyHNdukvvwzvpkwlKLUAjofa9oczN5RZbH279W0NUakaIycm52RPwlcarhoQLjbyIAy3GweQcXKb5czY75F5A/s1600-h/ShadowMap01.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 300px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5346322585411873410" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEW62Q1EBq7ZqZE4KOxo_ebFHuGZuyuWZLbW0APiZ9nwlMkNzC8IynDboyHNdukvvwzvpkwlKLUAjofa9oczN5RZbH279W0NUakaIycm52RPwlcarhoQLjbyIAy3GweQcXKb5czY75F5A/s400/ShadowMap01.jpg" /></a><br /><br />Ok, an update on my Light Prepass journey. Shadow mapping is actually new to me, let alone using it in deferred rendering. In fact, this is probably my first time nailing this thing on the head. Currently, it's simple shadow mapping... no magic, no fancy footwork. The image you see here has a point light (with attenuation) casting a shadow with 5x5 PCF (percentage closer filtering). At this point, it's all raw brute-force coding, definitely screaming for optimization. And obviously, I'm hiding all the shadow errors with a neat camera angle.<br /><br />The model rendering, however, exhibits sound optimization. I've managed to remove all matrix computation from the pixel shader in the light passes (full screen quad and light convex mesh - I call these guys 'light blobs'). The mathematics here was such a nosebleed. The key is making View Space your battleground. Good thing the graphics gurus are around... MJP, Drilian, Engel, etc. 
Here's the <a href="http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/">link</a>.<br /><br />In terms of prepass packing, specifically the normals, the Cryengine 3 <a href="http://www.crytek.com/fileadmin/user_upload/inside/presentations/2009/A_bit_more_deferred_-_CryEngine3.ppt">suggestion </a>seems to produce some inaccuracies when I error-check it against the fresh, untouched normals. I added a weird value just to remove the error. Here's the code:<br /><br />float4 PackDepthNormal(in float z, in float3 normal)<br />{<br />&nbsp;&nbsp;&nbsp;&nbsp;float4 output;<br />&nbsp;&nbsp;&nbsp;&nbsp;normal = normalize(normal);<br />&nbsp;&nbsp;&nbsp;&nbsp;normal.x *= 1.000000000000001f; // <--- my nasty mod<br />&nbsp;&nbsp;&nbsp;&nbsp;output.xy = normalize(normal.xy) * sqrt(normal.z * .5f + .5f);<br />&nbsp;&nbsp;&nbsp;&nbsp;return PackDepth(output, z);<br />}<br /><br />Anyway, I need to investigate this further. I find Pat Wilson's <a href="http://www.gamedev.net/community/forums/topic.asp?topic_id=514536">idea </a>of converting normals to spherical coordinates better. I haven't profiled it, but it seems to be a more optimized approach.<br /><br />Back to shadows, my target is to use Cascaded Shadow Maps. 
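As a quick sanity check on the packing math earlier in this post, here is a CPU-side round trip in Python. The encode mirrors the xy part of PackDepthNormal (without the depth packing); the decode is my own reconstruction, not code from any of the sources. Note the degenerate case when normal.xy is near zero (normalize of a zero vector), which may be exactly where the inaccuracies creep in:

```python
import math

def pack_normal(n):
    """Encode a unit normal into 2 floats: direction of xy, length sqrt(z*0.5+0.5)."""
    x, y, z = n
    s = math.sqrt(z * 0.5 + 0.5)
    l = math.hypot(x, y)
    if l == 0.0:
        # Degenerate: normal points straight along +/- z; pick an arbitrary
        # xy direction (the HLSL normalize(normal.xy) is undefined here).
        return (0.0, s)
    return (x / l * s, y / l * s)


def unpack_normal(p):
    """Recover the normal: z from the packed length, xy from its direction."""
    px, py = p
    l2 = px * px + py * py
    z = 2.0 * l2 - 1.0                      # invert s*s = z*0.5 + 0.5
    l = math.sqrt(l2)
    if l == 0.0:
        return (0.0, 0.0, z)                # only happens for z = -1
    s = math.sqrt(max(0.0, 1.0 - z * z)) / l  # |n.xy| must be sqrt(1 - z^2)
    return (px * s, py * s, z)
```

Round-tripping a few normals through this recovers them to within floating-point error, which at least confirms the scheme itself is invertible away from the degenerate pole.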
Hopefully, in a few days' time I can post the results.John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com10tag:blogger.com,1999:blog-6829904489614710740.post-75630609872842589342009-05-28T23:19:00.006+08:002009-05-28T23:31:26.816+08:00My First Deferred Shading: Light Prepass Rendering<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8v3v0WG-93wtm6H73FskNazcqp6rK5PnISeHum24lGs-Xxr0ch7yiH5NPaz8tAYakQs5vh77yaNuoTFCr84xTOggS0M0nJO9PxSaLArbuXNB4RCuyyaWDy56-3ZjC2UoJPgwGSaLmbg4/s1600-h/MyFirstDeferred.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 400px; DISPLAY: block; HEIGHT: 267px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5340894973240388738" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8v3v0WG-93wtm6H73FskNazcqp6rK5PnISeHum24lGs-Xxr0ch7yiH5NPaz8tAYakQs5vh77yaNuoTFCr84xTOggS0M0nJO9PxSaLArbuXNB4RCuyyaWDy56-3ZjC2UoJPgwGSaLmbg4/s400/MyFirstDeferred.jpg" /></a> <div>Ahh, the teapot mesh... always used, but one of many useful meshes for rendering objects, especially in prototypes, which is what I am currently doing. Now, without further ado, I give you MY FIRST DEFERRED RENDERING (drum roll).</div><div><br />I'm doing <strong>Light PrePass</strong> rendering by good ol' Wolfgang. The process is remarkably simple once you understand it. Similar to Deferred Rendering (well, this IS deferred rendering), it only renders normal/depth in the first pass. The lights are then rendered using the first pass's normal and depth buffers. The light passes are accumulated and then applied in the gather pass.</div><div><br />The image above is just a teapot rendered with two point lights. Not much I can show right now. But the key thing here is how you pack the data in the buffers. Currently, I've tested 3 ways of packing the normal/depth and 2 ways of doing the light accumulation passes. 
With Pat Wilson's suggestion of <strong>transforming the normals to spherical coordinates</strong>, you can mind-blowingly pack the 3 floats into 1 and a partial half (just enough to store the sign of normal.z). I also find Reltham's suggestion, via Drilian (on <a href="http://www.gamedev.net/">http://www.gamedev.net/</a>), interesting for how the light accumulation pass is done, which is to use multiplicative blending instead of the standard alpha blend. The colors are sharper, and the mid-blend of two lights looks more 'realistic'. I haven't done any tone mapping or normal mapping yet; I'm quite excited about that once I nail the packing of the buffers.</div>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com6tag:blogger.com,1999:blog-6829904489614710740.post-70461200181300529542009-05-18T10:18:00.012+08:002009-05-24T15:32:01.170+08:00Advent Rising: A Game Taken Me By Surprise<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSUnuMCfjAjEmm5AoaaUGRZ3ehCxJWQyb9WrpIMD_Mwf5zAmupfeo6crdGmZS_piOj2a8aYJiU6uWpnEN8PGOksKq6RHCtZEVafCvz3O0MJTjgMG9uaPVm8s55JYB-KOeHqT0aQz-J1VY/s1600-h/advent_rising.jpg"><img style="TEXT-ALIGN: center; MARGIN: 0px auto 10px; WIDTH: 405px; DISPLAY: block; HEIGHT: 328px; CURSOR: hand" id="BLOGGER_PHOTO_ID_5336983250930587506" border="0" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSUnuMCfjAjEmm5AoaaUGRZ3ehCxJWQyb9WrpIMD_Mwf5zAmupfeo6crdGmZS_piOj2a8aYJiU6uWpnEN8PGOksKq6RHCtZEVafCvz3O0MJTjgMG9uaPVm8s55JYB-KOeHqT0aQz-J1VY/s400/advent_rising.jpg" /></a> When Nazarene (my loving other half) and I went to Sim Lim Square to buy some stuff, we saw this game store which sells a wide variety of PC games. Much to my delight, they also sell old games which I missed playing. Best of all, they were on sale (and my GF said she'd pay for it): buy 2, get 1 free. A good deal, I should say. I took FEAR 1 and Broken Sword 3. 
Then I needed to choose the free game.<br /><br />Then I saw Advent Rising. I was a bit hesitant at first. I had only heard about this game from my ex-colleague at Emerging Entertainment, Charles, a game designer who's a frantic RPG-er. Honestly, I only chose it as the free game because of its intriguing orange-green DVD case cover.<br /><br />When I got home, I installed FEAR and Broken Sword 3 first. Installing Advent Rising was just an afterthought to me. A week later, I popped Advent Rising into my laptop. And after a few hours of playing it, I was taken aback... this game is a hidden gem!<br /><br />I have to admit, the game is not easy to get into. The controls were quite complicated, but the next thing I knew, they had become second nature to me. Its story... one word... SURPRISINGLY AWESOME! (ok, that was 2 words)<br /><br />What intrigues me about this game is the way its experience changes as it progresses. If I could put it in a title, I'd dub it "Game Evolved Within A Game". The game also has a final trick up its sleeve after the credits, but I won't spoil it for you.<br /><br />Advent Rising was intended to be a trilogy. The game ends in a cliffhanger. But like Back to the Future, with its famous 'To be continued...', the story satisfies its objectives within itself. Unfortunately, news said the sequels were cancelled due to poor sales. For now, I'll stay positive about this: they WILL make the sequels.<br /><br />Why didn't it sell that well? Maybe poor marketing strategy on the part of its publisher, Majesco or THQ (kudos to the developer GlyphX, though), or maybe because of some technical hiccups the game has. But me, I was sold.<br /><br />I read some of the reviews of this game. Most of them shout 'rip-off' or 'cliché'. But in reality, no idea is new under the sun. Even if it's similar to other stories, I don't see it intentionally ripping off other games. Uniqueness shouldn't be a standard but a plus. 
Otherwise, anything would be a rip-off of something.<br /><br />Underrated games are out there. Most of them were overshadowed by big titles when they came out (example: Psychonauts). My advice to everyone: don't just be interested in overly-hyped games. There are hidden gems out there. In this 'case', it's an orange-green diamond.John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com3tag:blogger.com,1999:blog-6829904489614710740.post-20388754734575036212009-02-25T18:05:00.009+08:002009-02-25T18:46:56.914+08:00Cooking Programmer (ver 0.01) "Home Super-duper Burger"<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZYkRw7FiGO1FjMehjjmiiZeCsKF6yeV-6ANpQdmkqmBXzYiPqNWKPbT7yjBI06W_r9McJrPTxhz60R46BW8veME94DFqfh0UZ6_WriTYRABonUzYE0JbAc13g9DtIN4ZOLKG6dk_E2jk/s1600-h/HomeBurger.jpg"><img id="BLOGGER_PHOTO_ID_5306674443680174562" style="DISPLAY: block; MARGIN: 0px auto 10px; WIDTH: 400px; CURSOR: hand; HEIGHT: 300px; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZYkRw7FiGO1FjMehjjmiiZeCsKF6yeV-6ANpQdmkqmBXzYiPqNWKPbT7yjBI06W_r9McJrPTxhz60R46BW8veME94DFqfh0UZ6_WriTYRABonUzYE0JbAc13g9DtIN4ZOLKG6dk_E2jk/s400/HomeBurger.jpg" border="0" /></a> After having much free time at home, I made this unhealthy but perfect<br />hunger-quenching meal. Enjoy....<br /><br /><strong>Cooking Programmer's</strong><br /><strong>Home Made Super-duper Burger</strong><br /><strong>with Spicy Cajun Fries</strong><br /><br /><strong>Burger</strong><br />- beef burger patties<br />- smoked sliced cheese<br />- finely chopped onion mixed with tomato catsup<br />- cucumber pickles<br />Grill the burger patties (brush with butter to enhance the taste). Slightly grill the burger bun.<br />Place the cheese on top of the patties while on the grill to slightly melt it. 
Assemble the burger!<br /><br /><strong>Fries</strong><br />- 1 - 2 pcs (julienned) potatoes, soaked in slightly salted water<br />- 1 tbsp Cajun powder<br />- 1/2 tsp chili powder<br />- 1/2 - 1 tsp salt<br />Fry the potatoes in hot oil. Mix the spices with the potatoes after.<br /><br /><strong>Dip</strong><br />- 1-2 tbsp mayonnaise (low-fat will do)<br />- dash of pepper<br />- crushed fresh basil (or dry-powdered)John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com1tag:blogger.com,1999:blog-6829904489614710740.post-46068190239961718512009-01-13T10:48:00.006+08:002009-01-13T10:59:29.829+08:00New Wolverine Game... Awesome Screenshot!<strong>X-Men Origins: Wolverine</strong><br /><br /><object type="application/x-shockwave-flash" data="http://widgets.clearspring.com/o/4967f37713b4903b/496c007e333a1035/4967f677f7aa52cb/7c33627d" id="W4967f37713b4903b496c007e333a1035" width="500" height="281"><param name="movie" value="http://widgets.clearspring.com/o/4967f37713b4903b/496c007e333a1035/4967f677f7aa52cb/7c33627d" /><param name="wmode" value="transparent" /><param name="allowNetworking" value="all" /><param name="allowScriptAccess" value="always" /><param name="allowFullScreen" value="true" /></object><br /><br />Well... at least, that's how it was presented. 
I just hope the game (and even the movie with the same title) lives up to its hype.John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-26602433550757972382008-11-23T06:01:00.005+08:002008-11-30T15:32:04.535+08:00Lonewolf Gameplay Compilation 2008<p><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/avRJfWFak8c&hl=en&fs=1"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/avRJfWFak8c&hl=en&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object></p><p>Well... so far so good. This video is a compilation of demos and milestones captured periodically by the dev team. The combo system is quite repetitive in this video, but I can assure you a lot has changed since this compilation was made. Some of the 'disciplines' are overpowered right now, but eventually we'll be able to balance them out.</p><p>For more videos go to <a href="http://www.ksatria.com/Video_01.html">http://www.ksatria.com/Video_01.html</a></p>John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com1tag:blogger.com,1999:blog-6829904489614710740.post-54059474509770984402008-06-30T12:40:00.003+08:002008-06-30T12:50:01.842+08:00Gaming World Tribute<object type="application/x-shockwave-flash" data="http://www.collegehumor.com/moogaloop/moogaloop.swf?clip_id=1819021&fullscreen=1" width="400" height="300" ><param name="allowfullscreen" value="true" /><param name="movie" quality="best" value="http://www.collegehumor.com/moogaloop/moogaloop.swf?clip_id=1819021&fullscreen=1" /></object><div style="padding:5px 0; text-align:center; width:400;"></div><br />Makes me wanna si....hum.John David Bonifacio 
Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0tag:blogger.com,1999:blog-6829904489614710740.post-44949680879107992862008-06-13T02:09:00.010+08:002008-12-11T11:09:23.277+08:00Back To PC... Now Portable<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0kovJO8kzMOCZDi-IWQwnZZo74vwPwX4Z_a7eh3-aFHS6CYJQL4p-6F6O1SlEI4v3DFfYY20_q5RkvU_6ua3khpigi09CZK0M10a8IVsqVCxIOvsPLU4GESZuSvKK-v69L3bHK7nR19U/s1600-h/Acer_Aspire_6920G_view.jpg"><img id="BLOGGER_PHOTO_ID_5211059671453112146" style="DISPLAY: block; MARGIN: 0px auto 10px; CURSOR: hand; TEXT-ALIGN: center" alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0kovJO8kzMOCZDi-IWQwnZZo74vwPwX4Z_a7eh3-aFHS6CYJQL4p-6F6O1SlEI4v3DFfYY20_q5RkvU_6ua3khpigi09CZK0M10a8IVsqVCxIOvsPLU4GESZuSvKK-v69L3bHK7nR19U/s400/Acer_Aspire_6920G_view.jpg" border="0" /></a><br /><br />And so I finally got a new PC, errr... laptop. My last setup was, believe it or not, a Pentium III 1GHz with 512MB RAM and a GeForce 5600. Super-duper low as it may seem, I shared fond memories with that computer. Game memories, that is. I definitely needed a new rig.<br /><br />So on June 10, 2008, my girl (Ms. Nazarene Madlangbayan) and I got Joey. Yes, I gave it a name. My <a href="http://www.acer.com.sg/products/aspire6920G">Acer Aspire 6920G-833G32Mn</a> <em>(Joey - an obnoxious but useful robot from the game Beneath a Steel Sky)</em><br /><br />I bought a laptop for ease of transport, and to avoid my landlord getting ideas about increasing my rent for the extra electricity a desktop would consume.<br /><br />I had never had a laptop before, so I consider myself a NEWB in anything about it.<br /><br />I was doubtful at first whether Joey could run current-gen games smoothly, especially since its OS is MS Vista SP1, which is a resource monster. All my fears simply vanished when I ran Bioshock in Full HD resolution with all options set to HIGH. It played almost hitch-free. 
It had some frame drops here and there, but WOW! DirectX 10 on a laptop at 1920 x 1080!<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWh9Gz-jt79dS45pcgUrI0JZTuJ4cdi3EM4U1xVDist3OM75KncVfSAzdibc88GK0Str9MAXtZ9k4to7WU-UEPqtu9iEAR9yU6F5ApmWPbtzp5IZoXT64GVFxQ0ILyydZiAE4nP5aBlhE/s1600-h/Acer_Aspire_6920G_cover.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWh9Gz-jt79dS45pcgUrI0JZTuJ4cdi3EM4U1xVDist3OM75KncVfSAzdibc88GK0Str9MAXtZ9k4to7WU-UEPqtu9iEAR9yU6F5ApmWPbtzp5IZoXT64GVFxQ0ILyydZiAE4nP5aBlhE/s400/Acer_Aspire_6920G_cover.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5211076674413373138" /></a><br /><br />Although I have yet to test Crysis on it, I'm pretty sure Joey can run it, at least on low to mid settings.<br /><br />My next mission is to transform Joey into a developer's machine. That will be my first Vista and DirectX 10 programming experience.<br /><br />I'm back to PC... now portable.John David Bonifacio Uyhttp://www.blogger.com/profile/04108888915485048942noreply@blogger.com0