Screen Space Reflection Ray-Tracing Visualisation
One of the problems I was helping Losti out with in Kingdom Come: Deliverance was some Screen Space Reflections, which I was able to fix accurately. While getting these working I found it useful to be able to visualise what each pixel was actually trying to reflect so I could see if I was getting close (and in fact at one point the ray-tracing seemed to be working perfectly, but the reflections were still broken, suggesting that there was another shader at play). This is a little more complicated than most things we might use my CustomShaderDebug2D for, because we don't want to see what every pixel is doing simultaneously - we want to be able to look at an individual pixel and find out where it is ray-tracing for the reflection.

The result looks like this:

[url=http://darkstarsword.net/KingdomCome%20-%202018-03-26%20-%20013219.0.jps][img]http://darkstarsword.net/KingdomCome%20-%202018-03-26%20-%20013219.0.jps[/img][/url]

In this image you can see that the dots in the right eye stop when they meet the gate, and so that pixel draws a reflection of the gate in the right eye. Conversely, the dots in the left eye miss the gate and keep going until they reach the edge of the screen, so that pixel does not draw any reflection in the left eye.

This uses the mouse cursor to select the pixel whose ray-traced sample positions we want to see. Losti has a HUD hiding feature in his fix, so I could pause the game, hide the HUD and use the mouse to choose a pixel. The mouse cursor is slightly in depth in that screenshot from Losti's fix, so ignore that part - we're interested in its 2D / screen depth position. In the shader I added debugging that checks if the current pixel is the one under the mouse cursor, and if so writes white dots to an RWTexture2D UAV, which can then be displayed with the Debug2D shader as usual, changing the blend mode to "ADD ONE ONE" to see the ray-tracing and the game at the same time. I'm not going to post the full code here, because SSR shaders vary a lot, but it is in fc333b27d8efa460-ps_replace.txt in Losti's Kingdom Come fix - you can enable the debugging in the shader, and there's a comment with the boilerplate you need in the d3dx.ini (in addition to the usual boilerplate for the debug 2D shader).
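The debug hook itself is only a few lines. Here's a minimal sketch of the idea (debug_uav, cursor_pos and sample_pos are placeholder names, not the game's actual ones - the real code is in the shader file mentioned above):

RWTexture2D<float4> debug_uav : register(u1); // bound via the d3dx.ini boilerplate

void debug_dot(float2 this_pixel, float2 cursor_pos, float2 sample_pos)
{
    // Only trace the pixel under the mouse cursor, otherwise every
    // pixel's rays would be scribbled on top of one another:
    if (any(abs(this_pixel - cursor_pos) > 0.5))
        return;
    // Write a white dot at the current ray-march sample position. The UAV
    // is then displayed with the Debug2D shader using blend = ADD ONE ONE:
    debug_uav[uint2(sample_pos)] = float4(1, 1, 1, 1);
}

Call that once per iteration of the shader's ray-march loop, passing the sample position it is about to test.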

A few observations:

- In the screenshot above you can see a line of dots in each eye, and if you attempt to stereo fuse them they appear to be coming out of the screen. These are actually two separate and independent lines, so ignore where your brain is trying to place them in 3D space and look at the lines in the left and right images separately (in fact, if I had the mouse positioned over the dirt, those lines could be pointing in two very different directions). It is, however, worth taking a closer look at those lines in stereo 3D, because that false stereo fusing showing the lines coming out of the screen could be something you look for to know when you are getting close. If the shader were not corrected at all, the lines would appear to stay at screen depth (assuming they both start on the same flat surface and there are no ripples or anything else distorting them).

- The lines need to start at the 2D screen-depth pixel where the mouse cursor is - they should not start at depth. When the shader is completely unmodified this will likely be the case, but that does not mean it is correct yet, and a correction that moves them into depth might still be a step in the right direction, as a further correction elsewhere might bring them back to screen depth. In other words, in order to fix it you might need to make it worse for a little while first.

- If you've found all the transformations you need to correct and have managed to get the reflections flat on the surface (*not* lifting up when the camera is tilted), just not at the correct depth, then you are getting close - at this point even the ray-tracing should look like what you want (the end points are maybe only half a centimetre out, and you can't really tell from the ray-tracing at this point). Look for something like a camera or view position and try adjusting it by -separation*convergence (usually converted into world or view space), as sketched below - that will hopefully slightly adjust the ray-tracing directions so that the reflections appear at their accurate depths.
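To make that last adjustment concrete, here's a minimal sketch using the StereoParams texture that 3DMigoto injects (cam_pos and inv_view are placeholder names for whatever the shader actually provides):

Texture1D<float4> StereoParams : register(t125);

float4 stereo = StereoParams.Load(0); // x = separation, y = convergence
// For a view-space camera/eye position the adjustment is along X:
cam_pos.x -= stereo.x * stereo.y;
// For a world-space position, move along the camera's right axis instead,
// e.g. (assuming an inverse view matrix is available in the shader):
// cam_pos.xyz -= stereo.x * stereo.y * inv_view._m00_m10_m20;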

2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

#1
Posted 03/25/2018 03:38 PM   
Another useful addition to my ever-growing collection of bookmarks, probably 80-90% attributed to info you've provided. Perfect timing, as I've literally been going between Vermintide 2 and FFXV trying to fix SSRs in both, and yeah, like you said, every single adjustment I make either makes things worse, or fixes one thing at the expense of breaking something that looks correct elsewhere. Hopefully that doesn't mean it's a bad thing - it could just be a matter of finding the right combination.

Last night I actually spent a good chunk of time comparing the shaders I'm working on in these 2 games against all the SSR examples you have up on your github, trying to find common threads, but unfortunately nothing is evident. A big thing is that neither of the shaders I'm working on makes use of a camera position resource (nor have any of the other SSRs I've worked on, strangely enough), and that seems to be a common key factor in most of the SSRs you've fixed. FFXV does have a lot of r1.xyz * r2.xxx + r3.xyz calculations, though, which leads me to think that's a world-space adjustment and the r3.xyz might be the camera position. Adjusting those does affect the reflection, but it's hard to know if it's right or not. Kinda weird in that there are a few of them, but it seems like it's the same coordinate being adjusted multiple times (with various other calculations occurring in between). Anyway, I'll try interpreting this information and the info you provided me via PM, play around with them some more, and hopefully will have an "ah hah" moment after enough toying around.

Here's a question I have, which may not necessarily be in relation to SSRs (I do see it happen here, but I also see it everywhere else): what is the significance of something like this:

r5.x = dot(r8.xyz, r8.xyz);
r5.x = rsqrt(r5.x);
r9.xyz = r8.xyz * r5.xxx + r2.xyz;

What does multiplying a coordinate by the square root of the dot product of the coordinate achieve? Is that somehow calculating depth in some fashion?

3D Gaming Rig: CPU: i7 7700K @ 4.9Ghz | Mobo: Asus Maximus Hero VIII | RAM: Corsair Dominator 16GB | GPU: 2 x GTX 1080 Ti SLI | 3xSSDs for OS and Apps, 2 x HDD's for 11GB storage | PSU: Seasonic X-1250 M2| Case: Corsair C70 | Cooling: Corsair H115i Hydro cooler | Displays: Asus PG278QR, BenQ XL2420TX & BenQ HT1075 | OS: Windows 10 Pro + Windows 7 dual boot

Like my fixes? Donations can be made to: www.paypal.me/DShanz or rshannonca@gmail.com
Like electronic music? Check out: www.soundcloud.com/dj-ryan-king

#2
Posted 03/25/2018 10:07 PM   
DJ-RK said: Here's a question I have, which may not necessarily be in relation to SSRs (I do see it happen here, but I also see it everywhere else): what is the significance of something like this:

r5.x = dot(r8.xyz, r8.xyz);
r5.x = rsqrt(r5.x);
r9.xyz = r8.xyz * r5.xxx + r2.xyz;

What does multiplying a coordinate by the square root of the dot product of the coordinate achieve? Is that somehow calculating depth in some fashion?
Worth noting that rsqrt is not the square root, but the *reciprocal* square root (the reason for this being a separate instruction is quite interesting, but off-topic - summary: better performance). If you expand the dot product of a vector with itself algebraically, you should recognise it as the Pythagorean formula for the squared distance of r8.xyz from the origin. So this is actually dividing r8.xyz by its own length to find a vector pointing in the same direction as r8.xyz, but with unit length. In other words, this is the HLSL normalize() function.
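i.e. those three instructions are just what the decompiler produces for:

r9.xyz = normalize(r8.xyz) + r2.xyz; // unit-length direction of r8, offset by r2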

My experience has been that I never needed to adjust a vector being normalised, and seeing that construct may even be an indication that I'm looking in the wrong area altogether.

The reason that it is not interesting to us is that this implies that r8.xyz is a vector, not a coordinate, and we are never adjusting vectors (such as normals), only coordinates - we're adjusting positions, not directions. The vector may well be used in a calculation operating on a coordinate, and if we have adjusted that coordinate then the whole calculation should work. For example, I mentioned that adjusting the world camera position would change the direction of the ray-tracing, but I didn't have to directly change the direction - I changed a coordinate, and that meant that the calculations in the shader resulted in a different (now correct) direction being used.
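As a sketch of what that looks like in practice (placeholder names, not the game's actual code) - a typical SSR sets up its ray something like this, so correcting cam_pos alone is enough to change the direction:

float3 view_dir = normalize(surface_pos - cam_pos); // from the (corrected) camera coordinate to the reflecting pixel
float3 ray_dir = reflect(view_dir, normal); // reflection direction used for the ray-march

Fix the coordinate, and the direction comes out right for free.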


#3
Posted 03/26/2018 05:02 AM   