Westbrook348, I remember seeing notes on some games about the cursor being a "hardware" cursor and therefore not associated with shaders and possibly unfixable. (I don't know much about this though) This may be the case with Civ4.
I've been trying to fix Dark Souls 2: Scholar of the First Sin, but I've run into a problem with shadows that only render in one eye. These particular shadows render on characters or objects. I've found two vertex shaders that, when disabled, get rid of the shadows on characters, but that also removes the shadow cast by the character/object.
Ideally I'd like to fix the shadow rendering issue, but if that's too hard I'd like to disable only the shadows on the character/object and not the ones they cast. One of the vertex shader dumps is below if any of you could lend some advice on it. Setting either of the TEXCOORDs to 0 seems to do nothing.
Since the original DX9 version of Dark Souls 2 has a fix, I'd suggest studying that to see if the same patterns still apply to the DX11 version. In particular, the code in that shader you posted looks very similar to CDBB89BD.txt in the original fix (same constants, same matrices, code roughly in the same order) - you will have to account for the syntax differences between assembly and HLSL, but it looks like there's a good chance that applying the same fix should work here as well.
There's probably some other shaders you will need to deal with as well - pixel shader 61B57F3C.txt looks relevant to shadows, and some of the other vertex shaders have comments that they are related to shadows, though I'm not entirely clear on the details - Helifax posted the original fix, so he might be able to help you more.
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
Thanks for taking a look! I've attempted to translate it to HLSL with some results in game, but it's not quite fixed yet. My translation can be seen below. I might ask Helifax about some of his code as well, as I'm not sure I translated all of it properly.
[code]
//input is now r0
float4 reg10 = r0;
float4 reg9;
//multiply by current separation
reg9.x = separation * separation;
//apply formula
reg9.z = v0.z - reg9.x;
reg9.x = reg9.x * reg9.z;
reg10.x = reg10.x - reg9.x;
[/code]
I was wondering if anyone could help me out with a modification I've been trying to make. I am not sure if this is a bit obscure or anything, but 4everawake was the only one that had any input in my original thread, so I thought it might be better to post it here. I think I am just about there, but I am having issues understanding what filtering code I need to use in the shader file.
https://forums.geforce.com/default/topic/888367/3d-vision/guild-wars-2-expac-action-cam-crosshair-2d-trying-my-hand-at-a-fix-/
Cheers.
This game has a fix that simply pushes the UI out to depth (it's practically 3D perfect otherwise). However, because the cursor is actually rendered in 3D as well and snaps to UI elements, the UI at depth doesn't really work IMO, especially with depth at 100%.
However, this new action cam crosshair they just put in would hugely benefit from being at depth, with the rest of the UI at screen. I was thinking of trying a new fix from scratch, but the blog mentions the author had difficulty getting Helix to work without using the SweetFX files.
If anyone could take a look at my thread and point me in the right direction in terms of the code for the shader filtering, it would be much appreciated.
i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
ASUS Turbo 2080TI
Samsung SSD 840Pro
ASUS Z97-WS3D
Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)
Sorry if this has been asked/answered a bazillion times already, but how does one go about learning to fix lighting and shadow issues? I remember it being said that it's quite difficult, but I've come across a few games that are close but not perfect, usually involving a few odd effects here and there, so at the very least I think it's easy enough in these cases to identify the shaders; the only question is how to properly fix them. The games I'm currently looking at are DX11, so I'd be more interested in the HLSL approach.
3D Gaming Rig: CPU: i7 7700K @ 4.9Ghz | Mobo: Asus Maximus Hero VIII | RAM: Corsair Dominator 16GB | GPU: 2 x GTX 1080 Ti SLI | 3xSSDs for OS and Apps, 2 x HDD's for 11GB storage | PSU: Seasonic X-1250 M2| Case: Corsair C70 | Cooling: Corsair H115i Hydro cooler | Displays: Asus PG278QR, BenQ XL2420TX & BenQ HT1075 | OS: Windows 10 Pro + Windows 7 dual boot
[quote="DJ-RK"]Sorry if this has been asked/answered a bazillion times already, but how does one go about learning to fix lighting and shadow issues?[/quote]
The first question is: how do those effects behave by default? As in being at screen depth, being affected by convergence in an unexpected way, etc.
I've just released a fix for Dark Souls 2 (DX11 version) that for example fixes lighting that was displayed at screen depth. Check my latest post for the download link: https://forums.geforce.com/default/topic/822709/3d-vision/dark-souls-2-scholar-of-the-first-sin-direct-x11-version/post/4713536/#4713536
The files you want to look at are these:
- "0b9a4f4bd9c5b45a-vs_replace.txt": the vertex shader.
- "de925318511ab23f-ps_replace.txt": the pixel shader.
I made a shader override in the "d3dx.ini" so the changes I make to the vertex shader affect only that pixel shader. I also made experimental "else" statements for the rest of the pixel shaders, but they aren't the main point of the explanation.
First of all, you need to load the stereoparams and optionally the iniparams (for shader overrides, hotkeys, etc):
[code]
float4 iniparams = IniParams.Load(0);
float4 stereo = StereoParams.Load(0);
float separation = stereo.x;
float convergence = stereo.y;
[/code]
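To be clearer about the shader override filtering I mentioned: here's a sketch of what it looks like in the "d3dx.ini" (the section name is illustrative). When the pixel shader with that hash is bound, x gets set, and since the vertex and pixel shaders are bound together for the draw call, the vertex shader can read x to know which pixel shader it's paired with:
[code]
; d3dx.ini (illustrative section name)
[ShaderOverrideLightingPS]
Hash = de925318511ab23f
x = 1
[/code]
And then in the vertex shader:
[code]
float4 iniparams = IniParams.Load(0);
if (iniparams.x == 1) {
    // correction for draws paired with that pixel shader
} else {
    // experimental handling for the rest of the pixel shaders
}
[/code]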
First thing to try when something is at screen depth is this code:
[code]
o0.x-=separation*(o0.w-convergence);
[/code]
"o0" can be other variables, depending on which you need to change (o1, o2, r0, r1, r2, etc). That line of code has into account the separation of the left and right images, so the effect has positive value to one eye and negative to the other, convergence and the depth value of the effect (the "w" axis). Make sure the variable you're modifying has "w" axis. In a vertex shader not related to lighting, I once needed to use the depth value of a different variable. So you can use the "w" of a different variable if you know that it is at the same position.
In my case, the fix above wasn't enough. It just made the effect visible at any convergence setting. I needed to modify the pixel shader. And this was the result:
[code]r1.x-=separation*(r1.w-convergence*0.07);[/code]
First thing to remember is that the "r" registers are reassigned at many points inside the shader body. So you may have to try applying the fix at different steps, because you may be changing the position, or the color, or other effects inside it, etc.
About my line of code, it's very similar to the other one. What does that "*0.07" mean, you may ask? It's the influence of the convergence. Without it, when increasing convergence the lights moved at a much faster pace than they should. So I had to slow it down to the point where the lights are always at their correct position, no matter the convergence setting. Trial and error.
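If you want to find a factor like that faster than recompiling for every guess, you can put it in an ini param and change it at runtime (a sketch; the choice of the y param is arbitrary, and you'd set/step y from the d3dx.ini):
[code]
float4 iniparams = IniParams.Load(0);
r1.x -= separation * (r1.w - convergence * iniparams.y);
[/code]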
Now, shadows are a very different monster that I couldn't fix yet in that game. They don't change their angle when increasing depth (the rest of the scenery does), and that's a very big problem that I still don't know how to fix.
I hope this explanation can help you.
[quote="DJ-RK"]Sorry if this has been asked/answered a bazillion times already, but how does one go about learning to fix lighting and shadow issues? I remember it being said that is quite difficult, but I've come across a few games that are close, but not perfect, which usually involve a few odd effects here and there, so at the very least I think it's easy enough in these cases to identify the shaders, only question is how to properly fix them. The games I'm currently looking at are Dx11, so would be interested more in the HLSL approach.[/quote]
No good tutorials at present. I want to revisit my School for ShaderHackers to add more in-depth material on specific glitches with specific fixes, but spare time is presently unavailable. So the best we can do at present is to give you pointers. We can generally make time to answer questions. The only way to learn how to fix shadows at the moment is to read a bunch of stuff and look at examples of prior fixes. We have a giant raft of prior fixes, and if you happen to be on a known game engine, we probably already have working examples.
To get started on shadow fixing, you'll want to get a lot more background on how they are created, specifically the deferred rendering approaches. They vary a lot, but the basic premise is that they are most often fixed in the PixelShaders (hence deferred to PS). They will often calculate a depth - a wrong depth, because at PS time there is no automated way to inject the fact that stereo has altered the scenario. Unlike the VS, where every vertex is automatically modified by the driver to reflect the stereo position.
This is why shadows often break, the automatic mode cannot fix the vertices properly, because they are calculated late in the pixel shader itself. The canonical example of this is the bunny sample: [url]https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/StereoIssues.zip[/url]
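As a rough illustration of that pattern (the names here are hypothetical, not from any particular game), a deferred pixel shader will do something like this, and since the sampled depth knows nothing about the per-eye shift the driver applied at the vertex stage, the reconstructed position lands in the wrong place per eye:
[code]
float depth = depth_tex.Sample(depth_samp, uv).x;         // mono scene depth
float3 world_pos = ray_dir.xyz * depth + camera_pos.xyz;  // wrong stereo position
[/code]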
One of the better resources for digging deeper is an old post of mine, where I collated a bunch of links that help fill in background: [url]http://helixmod.blogspot.com/2014/01/learning-how-to-fix-games.html[/url]
Once you've got the ideas down, you'll need to look at examples of fixes. DarkStarSword has the most extensive and well documented set of fixes: [url]https://github.com/DarkStarSword/3d-fixes[/url]
There are numerous examples of Mike_ar69 fixes in game specific folders on 3Dmigoto: [url]https://github.com/bo3b/3Dmigoto[/url]
And, there are a whole bunch of others on HelixModBlog itself. We always include the shader code, not just binaries, so you can look at examples where it's working.
@masterotaku: for your example, it would probably be worth looking at Helix's fix for the original DarkSouls. Game has possibly changed, but it's the same people doing the code, so the technique Helix used might be applicable to DS2.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 Latest 3Dmigoto Release Bo3b's School for ShaderHackers
[quote="masterotaku"]
[code]
o0.x-=separation*(o0.w-convergence);
[/code]
[/quote]
Just for a quick note;))
The actual formula is the other way around: you want to add to the Position an offset based on the "eye" that you are rendering.
The full and correct formula is:
[code]
Position.x += eye * separation * (Position.w - convergence);
[/code]
Where "eye" is:
-1 for Left Eye
+1 for Right Eye.
Just wanted to be clear on this thing;)
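For reference, here it is as a small helper (just a sketch; note that with 3Dmigoto the separation you load from StereoParams is already signed per eye, so the eye factor is effectively folded into it):
[code]
float4 stereoCorrect(float4 pos, float eye, float separation, float convergence)
{
    pos.x += eye * separation * (pos.w - convergence);
    return pos;
}
[/code]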
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
The game I'm currently working on is Warhammer: The End Times - Vermintide (have already posted a partial fix in the thread for that game on here), which is using the Stingray engine by Autodesk. As far as I'm aware, it's a new engine and I haven't been able to find mention of other games using this yet.
Even after disabling all the shadow and lighting settings in the options, there are still a few broken shaders here and there. These shaders are in stereo, but from what I can tell with incorrect depth/separation values, which seem to behave differently from different angles. In some of them I've modified values to get them looking correct from one angle, only to have them look completely wrong from another angle.
If it's not too much to ask, I'm going to post up some screenshots and shader code, and if any specific direction can be provided to help me understand, that would be greatly appreciated. As much as I'm trying to read and understand the Nvidia documents on Bo3b's resources link, or go through every shader in DarkStarSword's and Mike_ar69's fixes, that's a lot to take in, and it's difficult to pinpoint which shaders in a fix would be relevant to what I'm working on, let alone understand them. Mind you, I'm not asking you guys to fix these for me, but I'm hoping that by taking a look someone will recognize what I'm working with and can say, "Ok, read the info found on this particular link" or "take a look at shader abc in game xyz that I worked on"... although if anyone could provide a suggestion on fixing the code I provide, that might give me enough to work with and learn from.
However, if by looking at this, or just the fact I'm asking for this amount of help, you feel this is beyond my ability I'm willing to concede. Just wanted to at least ask and see how it goes.
Ex 1.
Here is a lighting shader casting light through a window, which also creates shadows from the window frame. Even from the angle in the screenshot it almost looks right, just slightly off, but if you look at it from another angle the window frame shadows' separation will be further apart and look entirely broken.
As Bo3b indicated, pretty sure the problem is in the PS, but I'll post both PS and VS just in case.
Inline shots to make it more clear:
[img]https://forums.geforce.com/cmd/default/download-comment-attachment/66862/[/img]
[img]https://forums.geforce.com/cmd/default/download-comment-attachment/66863/[/img]
It's worth noting that the two examples are pretty much identical in terms of the shader code itself. This is actually really common with games, where if we can crack the nut for a given shadow, we can then very often use the same technique/pattern for others in the game.
It's even possible to do offline fixes of hundreds of shaders this way, like DarkStarSword does with his Python script, HelixMod does with its built-in Lua scripting, or Mike_ar69 does with his offline VBA scripts. I've used regular expressions in Notepad++ to do the same thing. It's only worth the trouble if you find you have hundreds to edit (from dumping all shaders).
For this specific shadow, let's see if we can figure out the right fix. Keeping in mind that while I understand the principles, I've never successfully fixed a shadow myself.
The basic principle is what I noted above for the bunny sample program. The PS is calculating a depth, in this case the:
[code] r0.xyzw = __tex_linear_depth.Sample(__samp_linear_depth_s, v1.xy).xyzw;[/code]
And using that to decide how to place the shadows. The problem is that 3D Vision Automatic has no way of knowing or fixing this in the PixelShader, and we can't move the vertices in the Vertex Shader because the actual location is not known until the Pixel Shader calculates that depth and applies it.
The fundamental background reading on this is the NVidia whitepaper:
[url]http://developer.download.nvidia.com/whitepapers/2011/StereoUnproject.pdf[/url]
And some terrific background comments and example from Mike_ar69 to fix a game:
[url]https://forums.geforce.com/default/topic/513190/3d-vision/how-to-fix-disable-shaders-in-games-dll-guide-and-fixes-/post/4069271/#4069271[/url]
Look at that example carefully (posts further down) to see how Mike shifts the 'depth' into the View space where it can be fixed using the Prime Directive, then doing the inverse-view-projection to get it back to the world space that the shader is working upon. Getting all this in the right spot in the code, in the right coordinate space is pretty hard. This example is all ASM code, but you should be able to understand it.
Now, how to apply that to your shader here? Let me move the next part into a new post.
This is where my knowledge runs out. If I squint I can see several things that match what Mike is saying, but I can't quite put all the pieces together.
We want to always fix the stereo in View space. It doesn't work in World, and it doesn't work in Camera space.
Let me compare some pieces of Mike's example to yours:
[code] mov r1.w, c42.w <-- This line is important. This sets the w comp of r1 to "1" making it a 'point' in the affine scheme, and also helping to confirm that it is in fact a world coord
r2.w = 1 < r1.w;
if (r2.w != 0) {
if (-1 != 0) discard;
}
[/code]
Non-obvious, but the result is "if r1.w > 1 then discard". So we can be sure that r1.w will always be one at most. Maybe fractional too, but let's ignore that.
[code] dp4 r0.w, r1, c19 <-- Note that I made sure I corrected r1 *before* this line
// view_projection c16 4 <-- Very handy!
c19 is last row of view_projection matrix.
r4.x = world._m10;
r4.y = world._m11;
r4.z = world._m12;
r3.w = dot(-r3.xyz, r4.xyz);
[/code]
If I really squint, I see it using that row out of the 'world' matrix, which might be transforming it to a view coordinate, matching Mike's example. So if that's true, we'd want to fix the stereo location *before* this spot. Admittedly this is a little weak though, because the names don't really line up. I'd actually expect it to use the 'world_view_proj' parameter.
If that is the scenario, we presumably still don't have a full solution though, because although we have the world_view_proj to move the coordinate into View space where we can stereo-correct it, I can't see how we get that View coordinate back to a World coordinate, so that the shader can finish applying the shadow masks.
[s]We could do something like:
[code] view.x = dot(r3.xyzw, world_view_proj._m00_m10_m20_m30);
view.y = dot(r3.xyzw, world_view_proj._m01_m11_m21_m31);
view.z = dot(r3.xyzw, world_view_proj._m02_m12_m22_m32);
view.w = dot(r3.xyzw, world_view_proj._m03_m13_m23_m33);
[/code][/s] This is wrong. See DarkStarSword's better answer below.
The inverse is not clear to me, though. But maybe, just maybe, it's the 'inv_world' matrix and we get super lucky.
So... totally guessing by this point, maybe try something like:
[code]// Move presumed world coordinate into View space
view.x = dot(r3.xyzw, world_view_proj._m00_m10_m20_m30);
view.y = dot(r3.xyzw, world_view_proj._m01_m11_m21_m31);
view.z = dot(r3.xyzw, world_view_proj._m02_m12_m22_m32);
view.w = dot(r3.xyzw, world_view_proj._m03_m13_m23_m33);
// Stereo correct the coordinate using the Prime Directive
view.x += separation * (view.w-convergence);
// Unproject back to world space
r3.x = dot(view.xyzw, inv_world._m00_m10_m20_m30);
r3.y = dot(view.xyzw, inv_world._m01_m11_m21_m31);
r3.z = dot(view.xyzw, inv_world._m02_m12_m22_m32);
r3.w = dot(view.xyzw, inv_world._m03_m13_m23_m33);
[/code]
But... usually they'll use the word 'proj' or 'projection', and that matrix is probably not for projecting.
More importantly, that hopefully gives you an idea of where to start, at least - somewhere you can experiment and try to nail it down.
From my experience: Broken effects
When an effect renders in 3D but at the wrong depth (but you can clearly see it in 3D), there are two ways you can go about it:
I.
- Find the corresponding Vertex Shader and from the Position subtract the normal Stereo Formula. This should make the effect render at screen depth (2D).
- In the Pixel Shader find the position in View Space and correct it. The correction MUST take place in World Space not view space!!!
- To do this:
A. (Thanks to Mike_ar69)
1. Multiply the vPos with the Projection Matrix (we are now in World Space)
2. Apply (addition operation) the normal Stereo Formula. (Sometimes you need to MULTIPLY using the FOVx as well)
Ex: vPos += separation * (vPos.w - g_convergence);
3. Multiply the resulted vector with the Inverse Projection Matrix (now we are back in the View Space).
B. (Thanks to DarkStarSword)
1. Instead of doing all the above you can simplify the operation quite a bit:
Apply (addition operation) the normal Stereo Formula AND DIVIDE the result with FOVx.
Ex: vPos += separation * (vPos.w - g_convergence) / mtxProjectionMatrix.m00;
II.
- Forget about the Vertex Shader. We will do everything in the Pixel shader:
- Both A and B apply but instead of using an addition operation use a subtract operation:
Ex: vPos -= separation * (vPos.w - g_convergence) / mtxProjectionMatrix.m00;
Take note that you only sometimes need to divide/multiply by the FOV.
Horizontal FOV (FOVx = mtxProjectionMatrix[0].x)
Vertical FOV (FOVy = mtxProjectionMatrix[1].y)
Sometimes you need a combination/deviation from the above said.
Now in your example:
In the Vertex I see that TEXCOORD1 (o1) is already stereo corrected by using r1, which is stereo corrected by the 3D Vision Driver.
In the Pixel Shader I see this:
[code]
// This looks like Clip coord
r0.xy = v0.xy / back_buffer_size.xy;
// This one is using the depth buffer to get the position in View Space
r1.xyzw = __tex_linear_depth.Sample(__samp_linear_depth_s, r0.xy).xyzw;
// Here is where we need to correct the position
//CASE A:
vec4 temp = r1;
temp = temp * camera_projection; // operation might be the other way
temp.x += separation * (vPos.w - g_convergence) * camera_projection.m00;
temp = temp * camera_inv_projection; // operation might be the other way
//CASE B:
r1.x -= separation * (vPos.w - g_convergence) / camera_projection.m00;
// This is a standard divide of all components by the w component "to scale" it correctly
r1.yzw = v1.xyz / v1.www;
r1.xyz = r1.yzw * r1.xxx + camera_pos.xyz;
[/code]
Now you basically need to experiment a bit and see what actually fits and works (by changing the operation order, addition/division).
Also, sometimes you need to use this variant of the formula (notice the minus):
vPos += separation * -(vPos.w - g_convergence);
vPos.w might not always be available if vPos is a vec3, in which case you can use the .z component.
Basically now you need to experiment a bit and see what works on a particular game;))
Hope this helps out
Looks like there's already some good info here for you to work through, but I can see that some of the info isn't quite right so I'll weigh in as well.
[quote="bo3b"]We want to always fix the stereo in View space. It doesn't work in World, and it doesn't work in Camera space.[/quote]To clarify, we can adjust a coordinate either in "view space" or "projection space" (AKA "clip space"). These are similar coordinate systems as they are both relative to the camera, but they aren't quite the same thing and the differences can be confusing.
A coordinate typically undergoes several transformations to get it to the screen. Depending on the game, several of these steps will often be combined (e.g. in a world-view-projection matrix):
[list]
[.]local space coordinate * world matrix = world space coordinate[/.]
[.]world space coordinate - camera position = world space coordinate (relative to camera)[/.]
[.]world space coordinate about camera * view matrix = view space coordinate[/.]
[.]view space coordinate * projection matrix = projection / clip space coordinate[/.]
[.]projection space coordinate / w = screen space coordinate between -1 and 1[/.]
[.](screen space coordinate / 2 + 0.5) * resolution = pixel[/.]
[/list]
Projection space is a weird homogeneous coordinate system used just before the perspective divide to make a 3D image render on a 2D screen, taking the FOV into account (and clipping planes to scale the depth buffer) - this is what we end up with in the position output from the vertex shaders and is where we would use the regular stereo correction formula.
View space is a much simpler 3D Cartesian coordinate system relative to the camera - when we correct a coordinate in this system we use a slight variation of the correction formula that includes a multiply by part of the inverse projection matrix (or divide by part of the forward projection matrix).
We can also technically adjust a "world space" coordinate, but we have to convert the correction from projection or view space into world space first, since the orientation will be different. And since we can't determine the correction amount without the depth (which we may or may not have), it's often no simpler than converting the entire coordinate to view or projection space and back again (though it can produce more accurate results in some cases as it minimises floating point error, especially if one of the matrices includes a position transformation).
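To make that chain concrete, here is a sketch in HLSL (the matrix and variable names are illustrative, and games often fold several of these steps into one matrix), with the two correction variants marked where they would apply:
[code]
float4 world = mul(float4(local_pos, 1), world_matrix);
float4 view  = mul(world, view_matrix);
float4 clip  = mul(view, projection_matrix);     // projection / clip space
// Regular correction in projection space:
//     clip.x += separation * (clip.w - convergence);
// View space variant (view space depth is in .z):
//     view.x += separation * (view.z - convergence) * inv_projection._m00;
float3 ndc   = clip.xyz / clip.w;                // screen space, -1 to 1
float2 pixel = (ndc.xy / 2 + 0.5) * resolution.xy;
[/code]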
[quote="DJ-RK"]In some of them I've modified values to get them looking correct from one angle, to have them look completely wrong in another angle.[/quote]
Experience has taught me to recognise the significance of certain observations - that particular observation suggests that the correction is being applied on the wrong side of a transformation (matrix multiply) as it is adjusting the coordinates on the wrong axis.
[quote]... or go through every shader in DarkStarSword's and Mike_ar69's fixes, that's a lot to take in and difficult to pinpoint which shaders in a fix would be relevant to what I'm working on, let alone understanding it.[/quote]
If you are looking at my fixes on github I highly recommend you use the history view. Every time I change something I do so in a "commit" which has some extra notes describing what I was changing, and will also show a diff view so you can clearly see what I changed in the shader vs what was already there. Generally you should look for the first shadow or two I fixed in a given game/engine which is where I will have written the most detailed notes - I won't write much detail on any commits that are just repeating a previous pattern.
[quote][code]
cbuffer global_viewport : register(b0)
{
float3 camera_unprojection : packoffset(c0);
float3 camera_pos : packoffset(c1);
float4x4 camera_view : packoffset(c2);
float4x4 camera_projection : packoffset(c6);
float4x4 camera_inv_view : packoffset(c10);
float4x4 camera_inv_projection : packoffset(c14);
[/code][/quote]
Well, the good news is this game has given you access to all the matrices you are likely to need - you have the projection, view, and both of their inverse matrices available here :)
I often find it helpful to look at the vertex shader to try to understand what coordinates it is passing to the pixel shader. Sometimes the vertex shader is complex and hard to follow (in which case I'll just make an adjustment somewhere and observe what happens to make a guess at what coordinate system I'm working with), but in this case the vertex shader is quite easy to follow:
[code]
void main(
float4 v0 : POSITION0,
// This shader has two outputs. o0 is always in "clip space" since it is the
// position output, but the game could do whatever it likes with the texcoord:
out float4 o0 : SV_POSITION0,
out float4 o1 : TEXCOORD1)
{
float4 r0,r1;
uint4 bitmask, uiDest;
float4 fDest;
r0.xyz = light_proxy_scale.xyz * v0.xyz;
r0.w = 1;
// r1 and o0 are using the world_view_proj matrix, so they will both be in
// "clip space" (AKA "projection space", similar to but not quite the same as
// "screen coordinates") after this:
r1.x = dot(r0.xyzw, world_view_proj._m00_m10_m20_m30);
r1.y = dot(r0.xyzw, world_view_proj._m01_m11_m21_m31);
r1.z = dot(r0.xyzw, world_view_proj._m02_m12_m22_m32);
r1.w = dot(r0.xyzw, world_view_proj._m03_m13_m23_m33);
o0.xyzw = r1.xyzw;
// the clip space coordinates are now being multiplied by the inverse
// projection matrix, which will make r0 "view space" after this:
r0.x = dot(r1.xyzw, camera_inv_projection._m00_m10_m20_m30);
r0.y = dot(r1.xyzw, camera_inv_projection._m01_m11_m21_m31);
r0.z = dot(r1.xyzw, camera_inv_projection._m02_m12_m22_m32);
// And now the "view space" coordinates are being multiplied by the inverse
// view matrix. This will most likely mean that the result is in "world space"
// coordinates, but you cannot be sure if it is relative to the camera
// (likely), or a fixed point in the world. I also note that the homogeneous W
// coordinate is being set to the same value as the output position depth,
// which is a bit unusual, but suggests that the world coordinate has not
// yet been normalised into a Cartesian coordinate:
o1.w = r1.w;
o1.x = dot(r0.xyz, camera_inv_view._m00_m10_m20);
o1.y = dot(r0.xyz, camera_inv_view._m01_m11_m21);
o1.z = dot(r0.xyz, camera_inv_view._m02_m12_m22);
return;
}[/code]
[code]//Inn - broken window lighting
<snip>
void main(
// v1 is the un-normalised world space coordinate passed from the vertex
// shader:
float4 v0 : SV_POSITION0,
float4 v1 : TEXCOORD1,
out float4 o0 : SV_TARGET0)
{
float4 r0,r1,r2,r3,r4,r5,r6,r7;
uint4 bitmask, uiDest;
float4 fDest;
r0.xy = v0.xy / back_buffer_size.xy;
// This is something you will almost always want to look for in a shadow shader
// - where the depth information is sampled. One thing to note here is that
// this texture name indicates that it contains the LINEAR DEPTH. This game is
// really making things easy for you - that means you don't need to worry about
// scaling a value from the Z Buffer (0-1) to world depth scale (which the game
// would typically do at this point, but you would have to recognise which
// instructions are relevant to it).
// The linear depth will usually be in r1.x (you can look at which is used in
// the multiply below to check). The y, z and w parameters will be meaningless.
r1.xyzw = __tex_linear_depth.Sample(__samp_linear_depth_s, r0.xy).xyzw;
// As we noted above, the world coordinates were not normalised in the vertex
// shader - this line is normalising them:
r1.yzw = v1.xyz / v1.www;
// This is another line that you will almost always need to find in a shadow
// shader - specifically you are looking for the line that multiplies the depth
// value (in r1.x) by some coordinate. In this case we know from looking at the
// vertex shader that the coordinate is in "world space", and afterwards it
// adds the coordinates of the camera position, which indicates that the world
// space coordinate in r1.yzw is centered around the camera.
r1.xyz = r1.yzw * r1.xxx + camera_pos.xyz;
[/code]
That last line usually signifies that the correction needs to go nearby (in this case, my bet is after the r1.yzw * r1.xxx and before the + camera_pos.xyz - so you will need to split that line up into two separate instructions with equivalent results).
Depending on what coordinate system you are currently in you will need to do one of several options:
[list]
[.] Projection/Clip space:
[list]
[.] Subtract usual stereo formula[/.]
[/list]
[/.]
[.] View Space:
[list]
[.] Option 1: Subtract usual stereo formula multiplied by camera_inv_projection._m00[/.]
[.] Option 2: Subtract usual stereo formula divided by camera_projection._m00[/.]
[.] Option 3: Convert to projection space, subtract usual formula, convert back to view space[/.]
[.] Option 4: Convert correction value based on depth alone to view-space then subtract it from the coordinate (may produce more accurate results than option 3 in some cases)[/.]
[/list]
[/.]
[.] World Space:
[list]
[.] Option 1: Convert to either view or projection space, see above, convert back to world space[/.]
[.] Option 2: Convert correction value based on depth alone to world-space then subtract it from the coordinate (may produce more accurate results in some cases)[/.]
[/list]
[/.]
[/list]
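As a sketch of the view space Option 2 from that list (assuming the linear view space depth is in viewPos.z):
[code]
float4 stereo = StereoParams.Load(0);
viewPos.x -= stereo.x * (viewPos.z - stereo.y) / camera_projection._m00;
[/code]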
[quote="bo3b"]
We could do something like:
[code]
view.x = dot(r3.xyzw, world_view_proj._m00_m10_m20_m30);
view.y = dot(r3.xyzw, world_view_proj._m01_m11_m21_m31);
view.z = dot(r3.xyzw, world_view_proj._m02_m12_m22_m32);
view.w = dot(r3.xyzw, world_view_proj._m03_m13_m23_m33);
[/code]
[/quote]
You wouldn't be multiplying by the world matrix - if you needed to that would suggest you are in local coordinates relative to the light. I also would not call the result "view" as that would imply it's in view-space coordinates, but the result of the WVP matrix will be in projection-space coordinates.
This is the type of code you will see from the decompiler to do a matrix multiply, but I don't personally recommend using this style when using DX11 and HLSL - it will work, but requires more effort to experiment with since there's a bunch of redundant text there. Instead, I suggest:
[code]
result = mul(coord, matrix);
[/code]
Why do it this way? Because you may need to experiment with a few variations, such as:
[code]
1. Use if game uses dot();dot();dot();dot() for column-major matrix multiply
Edit: or uses "row_major" matrices with mul(); mad(); mad(); mad/add():
result = mul(coord, matrix);
2. Use if game uses mul(); mad(); mad(); add/mad(); for column-major matrix multiply
Edit: or uses "row_major" matrices with dot(); dot(); dot(); dot():
result = mul(matrix, coord);
3. May need to normalise the result:
float4 tmp = mul(coord, matrix);
result = tmp / tmp.w;
4. May only have a three dimensional Cartesian coordinate:
result = mul(float4(coord.xyz, 1), matrix);
5. As 4, but suppresses position transformations in the matrix:
result = mul(float4(coord.xyz, 0), matrix);
[/code]
and combinations thereof. Even now that I've fixed a bunch of different shadows and think I have a pretty good understanding of the above variations I still find that I often get this part wrong on the first try.
In this case I think you will need something like this, but you may need to experiment to find the right variation.
Vertex shader:
[code]
<snip>
r1.z = dot(r0.xyzw, world_view_proj._m02_m12_m22_m32);
r1.w = dot(r0.xyzw, world_view_proj._m03_m13_m23_m33);
o0.xyzw = r1.xyzw;
// You may or may not need to stereo correct r1 at this point. If the driver is
// correcting o0 (e.g. an in-world light like a point or spot light) you will
// need to correct r1 as well so that it matches; otherwise you should not
// (e.g. a full-screen directional lighting pass). Try it and see, but be aware
// that it might interact with the correction in the pixel shader:
float4 stereo = StereoParams.Load(0);
r1.x += stereo.x * (r1.w - stereo.y);
r0.x = dot(r1.xyzw, camera_inv_projection._m00_m10_m20_m30);
r0.y = dot(r1.xyzw, camera_inv_projection._m01_m11_m21_m31);
<snip>
[/code]
And in the pixel shader:
[code]
<snip>
// We need to apply the correction in the *middle* of this instruction before
// adding the camera position, so split it into two separate instructions:
// r1.xyz = r1.yzw * r1.xxx + camera_pos.xyz;
r1.xyz = r1.yzw * r1.xxx;
// We have a coordinate in world-space relative to the camera. We convert this
// into projection-space, apply the correction, then convert it back (there are
// some alternative variations that may work as well, such as only converting
// to view-space instead of projection-space and using the view-space formula, or
// since we already have the linear depth just convert float4(separation * (depth
// - convergence), 0, 0, 1) to world-space and subtract that from r1 (which may
// help if you find this produces flickering, or just slightly inaccurate shadows).
float4 stereo = StereoParams.Load(0);
float4 tmp;
tmp = mul(camera_view, float4(r1.xyz, 1)); // If 1 doesn't work, try 0
tmp = mul(camera_projection, tmp);
tmp = tmp / tmp.w; // May or may not be necessary
tmp.x -= stereo.x * (tmp.z - stereo.y);
tmp = mul(camera_inv_projection, tmp);
tmp = mul(camera_inv_view, tmp);
r1.xyz = tmp.xyz / tmp.w; // May or may not be necessary
// This is the final part of the original instruction we split up:
r1.xyz = r1.xyz + camera_pos.xyz;
<snip>
[/code]
[quote="helifax"]
I.
- Find the corresponding Vertex Shader and from the Position subtract the normal Stereo Formula. This should make the effect render at screen depth (2D).
[/quote]
When I suspect this may be necessary I run a simple experiment first. I'll disable any discard instructions in the pixel shader and modify it to output a solid colour. That will tell me whether the total bounding box of the effect is in the correct position in 3D or not. It should totally cover the area the effect will be rendered in regardless of distance to the effect and the convergence setting. Getting this wrong and over-correcting elsewhere can lead to an effect clipping on the sides from a distance.
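Something like this at the end of the pixel shader's main(), with any discard paths commented out (the colour is arbitrary - just pick something obvious):
[code]
o0 = float4(1, 0, 1, 0.75); // flood the effect's coverage with magenta
return;
[/code]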
[quote="helifax"]1. Multiply the vPos with the Projection Matrix (we are now in World Space)[/quote]If you've multiplied by the projection matrix you are in projection/clip space, not world space.
[quote="helifax"]In the Vertex I see that TEXCOORD1(o1) is already stereo corrected by using r1 that is stereo corected by the 3D Vision Driver.[/quote]Actually it won't be (possibly unless the game's profile has StereoUseMatrix enabled, but that can be unreliable and can produce flickering so I don't recommend it). The only thing the driver will have corrected is o0. It may be that r1 needs a correction in the vertex shader as well since o0 is derived from it.
[quote="helifax"]In the Pixel Shader I see this:
[code]
// This one is using the depth buffer to get the position in View Space
r1.xyzw = __tex_linear_depth.Sample(__samp_linear_depth_s, r0.xy).xyzw;
[/code]
[/quote]It looks like your cases below assume this is the position in view space, but it is not. Only r1.x will be valid at this point, and it will contain the scene depth. The world-space coordinate comes a little further on, when this is multiplied by a world space coordinate of fixed depth.
[quote="helifax"]Also, sometimes you need to use this variant of the formula (notice the minus):
vPos += separation * -(vPos.w - g_convergence);
[/quote]which is of course mathematically equivalent to:
vPos -= separation * (vPos.w - g_convergence);
As a general rule of thumb, if the adjustment is in the vertex shader you are probably adding, if it's in the pixel shader you are probably subtracting... But of course that can vary as well.
I agree with bo3b and helifax that experimentation is important at this point - even now that I've fixed a bunch of different shadow effects and understand the maths pretty well there is still an element of trial and error to find the right spot to do a correction, and the right pattern of the correction.
One tip I have to help understand what a correction is doing is to define a parameter in the d3dx.ini that can transition smoothly between 0 and 1 over several seconds on a key press. Multiply your correction by that parameter and observe which direction it moves when you press the key - that can often help you identify if you are on the right track or not, and might provide more clues than simply looking at a before / after comparison.
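A sketch of what that looks like in the d3dx.ini (the section name, key, and choice of parameter are all arbitrary):
[code]
[KeyDebugCorrection]
Key = F10
type = toggle
w = 1.0
transition = 3000
[/code]
Then multiply your correction by IniParams.Load(0).w in the shader and watch which way the effect moves while the transition is in flight.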
Looks like there's already some good info here for you to work through, but I can see that some of the info isn't quite right so I'll weigh in as well.
bo3b said:We want to always fix the stereo in View space. It doesn't work in World, and it doesn't work in Camera space.
To clarify, we can adjust a coordinate either in "view space" or "projection space" (AKA "clip space"). These are similar coordinate systems as they are both relative to the camera, but they aren't quite the same thing and the differences can be confusing.
A coordinate typically undergoes several transformations to get it to the screen. Depending on the game, several of these steps will often be combined (e.g. in a world-view-projection matrix):
local space coordinate * world matrix = world space coordinate
world space coordinate - camera position = world space coordinate (relative to camera)
world space coordinate about camera * view matrix = view space coordinate
view space coordinate * projection matrix = projection / clip space coordinate
projection space coordinate / w = screen space coordinate between -1 and 1
Projection space is a weird homogeneous coordinate system used just before the perspective divide to make a 3D image render on a 2D screen, taking the FOV into account (and clipping planes to scale the depth buffer) - this is what we end up with in the position output from the vertex shaders and is where we would use the regular stereo correction formula.
View space is a much simpler 3D Cartesian coordinate system relative to the camera - when we correct a coordinate in this system we use a slight variation of the correction formula that includes a multiply by part of the inverse projection matrix (or divide by part of the forward projection matrix).
We can also technically adjust a "world space" coordinate as well, but we have to have converted the correction from projection or view space into world space first since the orientation will be different, but since we can't determine the correction amount without the depth (which we may or may not have) it's often no simpler than converting the entire coordinate to view or projection space and back again (though it can produce more accurate results in some cases as it minimises floating point error, especially if one of the matrices includes a position transformation).
DJ-RK said:In some of them I've modified values to get them looking correct from one angle, to have them look completely wrong in another angle.
Experience has taught me to recognise the significance of certain observations - that particular observation suggests that the correction is being applied on the wrong side of a transformation (matrix multiply) as it is adjusting the coordinates on the wrong axis.
... or go through every shader in DarkStarSword's and Mike_ar69's fixes, that's a lot to take in and difficult to pinpoint which shaders in a fix would be relevant to what I'm working on, let alone understanding it.
If you are looking at my fixes on github I highly recommend you use the history view. Every time I change something I do so in a "commit" which has some extra notes describing what I was changing, and will also show a diff view so you can clearly see what I changed in the shader vs what was already there. Generally you should look for the first shadow or two I fixed in a given game/engine which is where I will have written the most detailed notes - I won't write much detail on any commits that are just repeating a previous pattern.
Well, the good news is this game has given you access to all the matrices you are likely to need - you appear to have the projection and view matrices, and both of their inverses, available here :)
I often find it helpful to look at the vertex shader to try to understand what coordinates it is passing to the pixel shader. Sometimes the vertex shader is complex and hard to follow (in which case I'll just make an adjustment somewhere and observe what happens to make a guess at what coordinate system I'm working with), but in this case the vertex shader is quite easy to follow:
void main(
float4 v0 : POSITION0,
// This shader has two outputs. o0 is always in "clip space" since it is the
// position output, but the game could do whatever it likes with the texcoord:
out float4 o0 : SV_POSITION0,
out float4 o1 : TEXCOORD1)
{
float4 r0,r1;
uint4 bitmask, uiDest;
float4 fDest;
// r1 and o0 are using the world_view_proj matrix, so they will both be in
// "clip space" (AKA "projection space", similar to but not quite the same as
// "screen coordinates") after this:
// And now the "view space" coordinates are being multiplied by the inverse
// view matrix. This will most likely mean that the result is in "world space"
// coordinates, but you cannot be sure if it is relative to the camera
// (likely), or a fixed point in the world. I also note that the homogeneous W
// coordinate is being set to the same value as the output position depth,
// which is a bit unusual, but suggests that the world coordinate has not
// yet been normalised into a Cartesian coordinate:
// This is something you will almost always want to look for in a shadow shader
// - where the depth information is sampled. One thing to note here is that
// this texture name indicates that it contains the LINEAR DEPTH. This game is
// really making things easy for you - that means you don't need to worry about
// scaling a value from the Z Buffer (0-1) to world depth scale (which the game
// would typically do at this point, but you would have to recognise which
// instructions are relevant to it).
// The linear depth will usually be in r1.x (you can look at which is used in
// the multiply below to check). The y, z and w parameters will be meaningless.
// As we noted above, the world coordinates were not normalised in the vertex
// shader - this line is normalising them:
r1.yzw = v1.xyz / v1.www;
// This is another line that you will almost always need to find in a shadow
// shader - specifically you are looking for the line that multiplies the depth
// value (in r1.x) by some coordinate. In this case we know from looking at the
// vertex shader that the coordinate is in "world space", and afterwards it
// adds the coordinates of the camera position, which indicates that the world
// space coordinate in r1.yzw is centered around the camera.
r1.xyz = r1.yzw * r1.xxx + camera_pos.xyz;
That last line usually signifies that the correction needs to go nearby (in this case, my bet is after the r1.yzw*r1.xxx and before the +camera_pos.xyz - so you will need to split that line up into two separate instructions with equivalent results).
Depending on what coordinate system you are currently in you will need to do one of several options:
Projection/Clip space:
Subtract usual stereo formula
View Space:
Option 1: Subtract usual stereo formula multiplied by camera_inv_projection._m00
Option 2: Subtract usual stereo formula divided by camera_projection._m00
Option 3: Convert to projection space, subtract usual formula, convert back to view space
Option 4: Convert correction value based on depth alone to view-space then subtract it from the coordinate (may produce more accurate results than option 3 in some cases)
World Space:
Option 1: Convert to either view or projection space, see above, convert back to world space
Option 2: Convert correction value based on depth alone to world-space then subtract it from the coordinate (may produce more accurate results in some cases)
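A hedged sketch of the projection and view space options above (StereoParams as in 3Dmigoto; camera_projection / camera_inv_projection are the matrices this game exposes):
float4 stereo = StereoParams.Load(0); // x = separation, y = convergence

// Projection / clip space - depth is in .w:
pos.x -= stereo.x * (pos.w - stereo.y);

// View space, option 1 - depth is in .z, scale by part of the inverse projection:
pos.x -= stereo.x * (pos.z - stereo.y) * camera_inv_projection._m00;

// View space, option 2 - equivalent, divide by part of the forward projection:
pos.x -= stereo.x * (pos.z - stereo.y) / camera_projection._m00;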
You wouldn't be multiplying by the world matrix - if you needed to that would suggest you are in local coordinates relative to the light. I also would not call the result "view" as that would imply it's in view-space coordinates, but the result of the WVP matrix will be in projection-space coordinates.
This is the type of code you will see from the decompiler to do a matrix multiply, but I don't personally recommend using this style when using DX11 and HLSL - it will work, but requires more effort to experiment with since there's a bunch of redundant text there. Instead, I suggest:
result = mul(coord, matrix);
Why do it this way? Because you may need to experiment with a few variations, such as:
1. Use if game uses dot();dot();dot();dot() for column-major matrix multiply
Edit: or uses "row_major" matrices with mul(); mad(); mad(); mad/add():
result = mul(coord, matrix);
2. Use if game uses mul(); mad(); mad(); add/mad(); for column-major matrix multiply
Edit: or uses "row_major" matrices with dot(); dot(); dot(); dot():
result = mul(matrix, coord);
3. May need to normalise the result:
float4 tmp = mul(coord, matrix);
result = tmp / tmp.w;
4. May only have a three dimensional Cartesian coordinate:
result = mul(float4(coord.xyz, 1), matrix);
5. As 4, but suppresses position transformations in the matrix:
result = mul(float4(coord.xyz, 0), matrix);
and combinations thereof. Even now that I've fixed a bunch of different shadows and think I have a pretty good understanding of the above variations I still find that I often get this part wrong on the first try.
In this case I think you will need something like this, but you may need to experiment to find the right variation.
// You may or may not need to stereo correct r1 at this point. If the driver is
// correcting o0 (e.g. in-world light like a point or spot light) you will need
// to do so here so that it matches, otherwise you should not (e.g. full-screen
// directional lighting pass). Try it and see, but be aware that it might
// interact with the correction in the pixel shader:
// We need to apply the correction in the *middle* of this instruction before
// adding the camera position, so split it into two separate instructions:
// r1.xyz = r1.yzw * r1.xxx + camera_pos.xyz;
r1.xyz = r1.yzw * r1.xxx;
// We have a coordinate in world-space relative to the camera. We convert this
// into projection-space, apply the correction, then convert it back (there are
// some alternative variations that may work as well, such as only converting
// to view-space instead of projection-space and using the view-space formula, or
// since we already have the linear depth just convert float4(separation * (depth
// - convergence), 0, 0, 1) to world-space and subtract that from r1 (which may
// help if you find this produces flickering, or just slightly inaccurate shadows).
float4 stereo = StereoParams.Load(0);
float4 tmp;
tmp = mul(camera_view, float4(r1.xyz, 1)); // If 1 doesn't work, try 0
tmp = mul(camera_projection, tmp);
tmp = tmp / tmp.w; // May or may not be necessary
tmp.x -= stereo.x * (tmp.z - stereo.y);
tmp = mul(camera_inv_projection, tmp);
tmp = mul(camera_inv_view, tmp);
r1.xyz = tmp.xyz / tmp.w; // May or may not be necessary
// This is the final part of the original instruction we split up:
r1.xyz = r1.xyz + camera_pos.xyz;
<snip>
helifax said:
I.
- Find the corresponding Vertex Shader and from the Position subtract the normal Stereo Formula. This should make the effect render at screen depth (2D).
When I suspect this may be necessary I run a simple experiment first. I'll disable any discard instructions in the pixel shader and modify it to output a solid colour. That will tell me whether the total bounding box of the effect is in the correct position in 3D or not. It should totally cover the area the effect will be rendered in regardless of distance to the effect and the convergence setting. Getting this wrong and over-correcting elsewhere can lead to an effect clipping on the sides from a distance.
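In 3Dmigoto HLSL that experiment can be as simple as this at the top of the pixel shader's main() (assuming o0 is the colour output - comment out any discard/clip instructions first):
// Debug only: flood the effect's full bounding area with an obvious colour:
o0 = float4(1, 0, 1, 1);
return;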
helifax said: 1. Multiply the vPos with the Projection Matrix (we are now in World Space)
If you've multiplied by the projection matrix you are in projection/clip space, not world space.
helifax said: In the Vertex I see that TEXCOORD1(o1) is already stereo corrected by using r1, which is stereo corrected by the 3D Vision Driver.
Actually it won't be (possibly unless the game's profile has StereoUseMatrix enabled, but that can be unreliable and can produce flickering so I don't recommend it). The only thing the driver will have corrected is o0. It may be that r1 needs a correction in the vertex shader as well since o0 is derived from it.
helifax said: In the Pixel Shader I see this:
// This one is using the depth buffer to get the position in View Space
r1.xyzw = __tex_linear_depth.Sample(__samp_linear_depth_s, r0.xy).xyzw;
It looks like your below cases assume this is the position in view space, but it is not. Only r1.x will be valid at this point and will contain the scene depth. The world-space coordinate comes a little further on when this is multiplied by a world space coordinate of fixed depth.
helifax said: Also, sometimes you need to use this variant of the formula (notice the minus):
vPos += separation * -(vPos.w - g_convergence);
which is of course mathematically equivalent to:
vPos -= separation * (vPos.w - g_convergence);
As a general rule of thumb, if the adjustment is in the vertex shader you are probably adding, if it's in the pixel shader you are probably subtracting... But of course that can vary as well.
I agree with bo3b and helifax that experimentation is important at this point - even now that I've fixed a bunch of different shadow effects and understand the maths pretty well there is still an element of trial and error to find the right spot to do a correction, and the right pattern of the correction.
One tip I have to help understand what a correction is doing is to define a parameter in the d3dx.ini that can transition smoothly between 0 and 1 over several seconds on a key press. Multiply your correction by that parameter and observe which direction it moves when you press the key - that can often help you identify if you are on the right track or not, and might provide more clues than simply looking at a before / after comparison.
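For example, something like this in the d3dx.ini (the key, timings and section name are just illustrative - check the option names against your 3Dmigoto version):
[KeyFadeCorrection]
Key = VK_F1
type = toggle
x = 1.0
; smoothly ramp IniParams x between 0 and 1 over 3 seconds:
transition = 3000
release_transition = 3000
Then multiply the correction in the shader by IniParams.Load(0).x and watch which way the effect moves while it fades.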
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
Wow, today is my lucky day. I'm truly grateful for all of this extremely detailed and valuable instruction. I am coming from a background of no formal education in graphics rendering / game design, but some basic - intermediate level programming many years ago (which I like to think I had an affinity towards), so when I had read through the Nvidia whitepapers and other readings towards the rendering process, I could kinda make some sense of it, but much like anything, it was hard to absorb without a working example to apply it towards. I also did not realize that these shaders themselves are where the calculations and transformations are occurring between the different stages in the rendering pipeline before you guys started talking about all these different "spaces," but now it's all starting to come together for me (at least a little bit), and now that has me interested in going back over some of the previous documents so I can compare all of the information you've all provided here and come to an even better understanding. I've also been blessed with time availability (I just got packaged out of my job today!), so this gives me something to really sink my teeth into for the next little bit. So again, thank you all so much, I'll be studying this and putting it to as much good use as I possibly can, and will report back my results.
3D Gaming Rig: CPU: i7 7700K @ 4.9Ghz | Mobo: Asus Maximus Hero VIII | RAM: Corsair Dominator 16GB | GPU: 2 x GTX 1080 Ti SLI | 3xSSDs for OS and Apps, 2 x HDD's for 11GB storage | PSU: Seasonic X-1250 M2| Case: Corsair C70 | Cooling: Corsair H115i Hydro cooler | Displays: Asus PG278QR, BenQ XL2420TX & BenQ HT1075 | OS: Windows 10 Pro + Windows 7 dual boot
@DarkStarSword: Awesome info, thanks for the detailed writeup.
I've been suggesting fixing only in View space, because that's what Mike has always recommended. Still, it seems like he might have been saying View when he meant Projection. His example for Shogun is clearly going to Projection space. One thing I've also been confused by is that when a variable is named something like world_view_proj, the last part means Projection space, not the verb 'project'. The terminology of 3D graphics is surprisingly confusing and ill-defined.
For teaching purposes and also because I like to whittle things down to their simplest, I prefer the simplicity of a single technique, like Mike's suggestion from that earlier post for Shogun: https://forums.geforce.com/default/topic/513190/3d-vision/how-to-fix-disable-shaders-in-games-dll-guide-and-fixes-/post/4071376/#4071376
So unless I've confused it, that would be:
proj_coordinate = mul(VPM, world_coordinate);
proj_coordinate.x -= separation * (proj_coordinate.w - convergence);
world_coordinate = mul(Inverse_VPM, proj_coordinate);
Your toolbox is a lot more powerful. Especially because you do not require a starting world coordinate. I like to start with a basic approach, but I really appreciate seeing the other options.
@DJ-RK: Between all of our suggestions (including Mike's fixes and posts), I'd recommend emphasizing DarkStarSword's. He has the deepest understanding of the graphics pipeline and the right spots for fixes.
Now you can see why we keep saying shadows are the hardest things to fix. Unless I'm mistaken, only 3 people have created de novo shadow fixes for unknown engines - Helix, Mike_ar69, and DarkStarSword. The rest of us fill out the team by doing a lot of the hunting and application of their techniques. This is why I urge everyone to put in some effort to understand the basics, because even if we can't quite pull off a new shadow fix, we can all still contribute in valuable ways.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
This game has a fix that simply pushes out the UI (it's practically 3D perfect otherwise). However, because the cursor is actually rendered in 3D as well and snaps to UI elements, the UI at depth doesn't really work IMO, especially with depth at 100%.
However, this new action cam crosshair they just put in would be a huge benefit to have at depth with the rest of the UI at screen. I was thinking of trying a new fix from scratch, but on the blog it mentions the author had difficulty getting Helix to work without using the SweetFX files.
If anyone could take a look at my thread and point me in the right direction in terms of the code for the shader filtering, it would be much appreciated: https://forums.geforce.com/default/topic/888367/3d-vision/guild-wars-2-expac-action-cam-crosshair-2d-trying-my-hand-at-a-fix-/
Cheers.
i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
ASUS Turbo 2080TI
Samsung SSD 840Pro
ASUS Z97-WS3D
Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)
The first question is: how do those effects behave by default? As in being at screen depth, being affected by convergence in an unexpected way, etc.
I've just released a fix for Dark Souls 2 (DX11 version) that for example fixes lighting that was displayed at screen depth. Check my latest post for the download link: https://forums.geforce.com/default/topic/822709/3d-vision/dark-souls-2-scholar-of-the-first-sin-direct-x11-version/post/4713536/#4713536
The files you want to look at are these:
- "0b9a4f4bd9c5b45a-vs_replace.txt": the vertex shader.
- "de925318511ab23f-ps_replace.txt": the pixel shader.
I made a shader override in the "d3dx.ini" so the changes I make to the vertex shader affect only that pixel shader. I also made experimental "else" statements for the rest of the pixel shaders, but they aren't the main point of the explanation.
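As a hedged sketch of that pattern (the section name is made up; the hash is the pixel shader from the file name above), the override flags an ini param whenever that pixel shader is bound:
[ShaderOverrideDS2LightingPS]
Hash = de925318511ab23f
x = 1
and the vertex shader then tests the flag so its change only applies to draw calls using that pixel shader:
float4 params = IniParams.Load(0);
if (params.x == 1) {
    // vertex shader correction goes here
}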
First of all, you need to load the stereoparams and optionally the iniparams (for shader overrides, hotkeys, etc):
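The loading code itself isn't reproduced here, but the usual 3Dmigoto boilerplate (t125 and t120 are the default registers) looks like:
Texture2D<float4> StereoParams : register(t125);
Texture1D<float4> IniParams : register(t120);
// then, inside main():
float4 stereo = StereoParams.Load(0); // x = separation, y = convergence
float4 params = IniParams.Load(0);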
First thing to try when something is at screen depth is this code:
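The line itself didn't make it into this post, but the standard form (the same formula used throughout this thread) is:
float4 stereo = StereoParams.Load(0);
o0.x += stereo.x * (o0.w - stereo.y); // flip += to -= if it moves the wrong way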
"o0" can be other variables, depending on which you need to change (o1, o2, r0, r1, r2, etc). That line of code has into account the separation of the left and right images, so the effect has positive value to one eye and negative to the other, convergence and the depth value of the effect (the "w" axis). Make sure the variable you're modifying has "w" axis. In a vertex shader not related to lighting, I once needed to use the depth value of a different variable. So you can use the "w" of a different variable if you know that it is at the same position.
In my case, the fix above wasn't enough. It just made the effect visible at any convergence setting. I needed to modify the pixel shader. And this was the result:
First thing to remember is that the "r" registers are temporaries that get reused at many points inside the shader. So you may have to try fixing it at different steps, because you may be changing the position, or the color, or other effects inside it, etc.
About my line of code, it's very similar to the other one. What does that "*0.07" mean, you may ask? It's the influence of the convergence. Without it, when increasing convergence the lights moved at a much faster pace than they should. So I had to slow it down to the point where the lights are always at their correct position, no matter the convergence setting. Trial and error.
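His exact line isn't shown above, but from that description it was presumably of this shape (the register, and whether to add or subtract, depend on the shader - the 0.07 damping of the convergence term is the point):
r1.x -= stereo.x * (r1.w - stereo.y * 0.07);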
Now, shadows are a very different monster that I couldn't fix yet in that game. They don't change their angle when increasing depth (the rest of the scenery does), and that's a very big problem that I still don't know how to fix.
I hope this explanation can help you.
CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: MSI GeForce RTX 2080Ti Gaming X Trio
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com
No good tutorials at present. I want to revisit my School for ShaderHackers to add more depth and cover specific glitches with specific fixes, but spare time is presently unavailable. So, the best we can do at present is to give you pointers. We can generally make time to answer questions. The only way to learn how to fix shadows at the moment is to read a bunch of stuff and look at examples of prior fixes. We have a giant raft of prior fixes, and if you happen to be on a known game engine, we probably already have working examples.
To get started on shadow fixing, you'll want to get a lot more background about how they are created, specifically the deferred rendering approaches. They vary a lot, but the basic premise is that they are most often fixed in the PixelShaders (hence deferred to the PS). They will often calculate a depth - a wrong depth - because at PS time there is no automated way to inject the fact that stereo has altered the scenario. Unlike the VS, where every vertex is automatically modified by the driver to reflect the stereo position.
This is why shadows often break - the automatic mode cannot fix the vertices properly, because they are calculated late, in the pixel shader itself. The canonical example of this is the bunny sample: https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/StereoIssues.zip
One of the better resources for digging deeper is an old post of mine, where I collated a bunch of links that help fill in background: http://helixmod.blogspot.com/2014/01/learning-how-to-fix-games.html
Once you've got the ideas down, you'll need to look at examples of fixes. DarkStarSword has the most extensive and well documented set of fixes: https://github.com/DarkStarSword/3d-fixes
There are numerous examples of Mike_ar69 fixes in game specific folders on 3Dmigoto: https://github.com/bo3b/3Dmigoto
And, there are a whole bunch of others on HelixModBlog itself. We always include the shader code, not just binaries, so you can look at examples where it's working.
@masterotaku: for your example, it would probably be worth looking at Helix's fix for the original DarkSouls. Game has possibly changed, but it's the same people doing the code, so the technique Helix used might be applicable to DS2.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Just for a quick note;))
The actual formula is the other way around: you want to add to the Position an offset based on the "eye" that you are rendering.
The full and correct formula is:
Where "eye" is:
-1 for Left Eye
+1 for Right Eye.
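The formula itself didn't survive the copy, but from the description it is the usual one with an explicit eye factor (using helifax's names, with separation as an absolute value):
vPos.x += eye * separation * (vPos.w - g_convergence); // eye = -1 or +1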
Just wanted to be clear on this thing;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
The game I'm currently working on is Warhammer: The End Times - Vermintide (have already posted a partial fix in the thread for that game on here), which is using the Stingray engine by Autodesk. As far as I'm aware, it's a new engine and I haven't been able to find mention of other games using this yet.
Even after disabling all the shadows and lighting settings in the options, there are still a few broken shaders here and there. These shaders are in stereo, but from what I can tell with incorrect depth/separation values, which seem to behave differently from different angles. In some of them I've modified values to get them looking correct from one angle, to have them look completely wrong in another angle.
If it's not too much to ask, I'm going to post up some screenshots and shader code, and if any specific direction can be provided to help me understand, that would be greatly appreciated. As much as I'm trying to read & understand the Nvidia documents up on Bo3b's resources link, or go through every shader in DarkStarSword's and Mike_ar69's fixes, that's a lot to take in and difficult to pinpoint which shaders in a fix would be relevant to what I'm working on, let alone understanding it. Mind you, I'm not asking you guys to fix these for me, but I'm hoping that maybe by taking a look someone will recognize what I'm working with and can say, "Ok, read info found on this particular link" or "take a look at shader abc in game xyz that I worked on"... although if anyone COULD provide a suggestion on fixing the code I provide, that might give me enough to work with and learn from.
However, if by looking at this, or just the fact I'm asking for this amount of help, you feel this is beyond my ability I'm willing to concede. Just wanted to at least ask and see how it goes.
Ex 1.
Here is a lighting shader being cast through a window, which also creates shadows from the window frame. Even from the angle in the screenshot it almost looks right, just slightly off, but if you were to look at it from another angle the window frame shadows' separation would be further apart and look entirely broken.
As Bo3b indicated, pretty sure the problem is in the PS, but I'll post both PS and VS just in case.
VS
PS
Ex 2.
Here is lighting and shadows on a bridge, where the shadows just seem like they do not have the right separation values.
VS
PS
3D Gaming Rig: CPU: i7 7700K @ 4.9Ghz | Mobo: Asus Maximus Hero VIII | RAM: Corsair Dominator 16GB | GPU: 2 x GTX 1080 Ti SLI | 3xSSDs for OS and Apps, 2 x HDD's for 11GB storage | PSU: Seasonic X-1250 M2| Case: Corsair C70 | Cooling: Corsair H115i Hydro cooler | Displays: Asus PG278QR, BenQ XL2420TX & BenQ HT1075 | OS: Windows 10 Pro + Windows 7 dual boot
Like my fixes? Dontations can be made to: www.paypal.me/DShanz or rshannonca@gmail.com
Like electronic music? Check out: www.soundcloud.com/dj-ryan-king
It's worth noting that the two examples are pretty much identical in terms of the shader code itself. This is actually really common with games, where if we can crack the nut for a given shadow, we can then very often use the same technique/pattern for others in the game.
It's even possible to do offline fixes of hundreds of shaders this way, like DarkStarSword does with his python script, HelixMod does with its built-in LUA script, or Mike_ar69 does with his offline VBA scripts. I've used regular expressions in Notepad++ to do the same thing. Only worth the trouble if you find you have hundreds to edit (from dumping all shaders).
For this specific shadow, let's see if we can figure out the right fix. Keeping in mind that while I understand the principles, I've never successfully fixed a shadow myself.
The basic principle is what I noted above for the bunny sample program. The PS is calculating a depth, in this case the:
And using that to decide how to place the shadows. The problem is that 3D Vision Automatic has no way of knowing or fixing this in the PixelShader, and we can't move the vertices in the Vertex Shader because the actual location is not known until the Pixel Shader calculates that depth and applies it.
The fundamental background reading on this is the NVidia whitepaper:
http://developer.download.nvidia.com/whitepapers/2011/StereoUnproject.pdf
And some terrific background comments and example from Mike_ar69 to fix a game:
https://forums.geforce.com/default/topic/513190/3d-vision/how-to-fix-disable-shaders-in-games-dll-guide-and-fixes-/post/4069271/#4069271
Look at that example carefully (posts further down) to see how Mike shifts the 'depth' into View space where it can be fixed using the Prime Directive, then does the inverse view-projection to get it back to the world space that the shader is working on. Getting all this in the right spot in the code, in the right coordinate space, is pretty hard. This example is all ASM code, but you should be able to understand it.
Now, how to apply that to your shader here? Let me move the next part into a new post.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
We want to always fix the stereo in View space. It doesn't work in World, and it doesn't work in Camera space.
Let me compare some pieces of Mike's example to yours:
Non obvious, but the result is "if r1.w > 1 then discard". So we can be sure that r1.w will always be at most one. Maybe fractional too, but let's ignore that.
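The dump isn't reproduced above, but the decompiled pattern bo3b is describing typically looks like this (a hypothetical reconstruction, not the game's actual code):
r1.w = cmp(1 < r1.w);   // non-zero when the value was greater than 1
if (r1.w != 0) discard;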
If I really squint, I see it using that row out of the 'world' matrix, which might be transforming it to a view coordinate, matching Mike's example. So if that's true, we'd want to fix the stereo location *before* this spot. Admittedly this is a little weak though, because the names don't really line up. I'd actually expect it to use the 'world_view_proj' parameter.
If that is the scenario, we presumably still don't have a full solution though, because although we have the world_view_proj to move the coordinate into View space where we can stereo-correct it, I can't see how we get that View coordinate back to a World coordinate, so that the shader can finish applying the shadow masks.
We could do something like:
This is wrong - see DarkStarSword's better answer below. But the inverse is not clear to me. Maybe, just maybe, it's the 'inv_world' matrix and we get super lucky.
So... totally guessing by this point, maybe try something like:
But... usually they'll use the word 'proj' or 'projection', and that matrix is probably not for projecting.
More importantly, that hopefully gives you an idea of where to start at least, where you can experiment and try to nail it down.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
When an effect renders in 3D but at the wrong depth (but you can clearly see it in 3D) there are two ways you can go about it:
I.
- Find the corresponding Vertex Shader and from the Position subtract the normal Stereo Formula. This should make the effect render at screen depth (2D).
- In the Pixel Shader find the position in View Space and correct it. The correction MUST take place in World Space not view space!!!
- To do this:
A. (Thanks to Mike_ar69)
1. Multiply the vPos with the Projection Matrix (we are now in World Space)
2. Apply (addition operation) the normal Stereo Formula. (Sometimes you need to MULTIPLY using the FOVx as well)
Ex: vPos += separation * (vPos.w - g_convergence);
3. Multiply the resulted vector with the Inverse Projection Matrix (now we are back in the View Space).
B. (Thanks to DarkStarSword)
1. Instead of doing all the above you can simplify the operation quite a bit:
Apply (addition operation) the normal Stereo Formula AND DIVIDE the result by FOVx.
Ex: vPos += separation * (vPos.w - g_convergence) / mtxProjectionMatrix.m00;
II.
- Forget about the Vertex Shader. We will do everything in the Pixel shader:
- Both A and B apply but instead of using an addition operation use a subtract operation:
Ex: vPos -= separation * (vPos.w - g_convergence) / mtxProjectionMatrix.m00;
Take note that you only sometimes need to divide/multiply by the FOV.
Horizontal FOV (FOVx = mtxProjectionMatrix[0].x)
Vertical FOV (FOVy = mtxProjectionMatrix[1].y)
Sometimes you need a combination of, or deviation from, the above.
Now in your example:
In the Vertex I see that TEXCOORD1(o1) is already stereo corrected by using r1, which is stereo corrected by the 3D Vision Driver.
In the Pixel Shader I see this:
// This one is using the depth buffer to get the position in View Space
r1.xyzw = __tex_linear_depth.Sample(__samp_linear_depth_s, r0.xy).xyzw;
Now, basically you need to experiment a bit and see what actually fits and works. (by changing the operation order, addition/division).
Also, sometimes you need to use this variant of the formula (notice the minus):
vPos += separation * -(vPos.w - g_convergence);
(vPos.w might not always be available if vPos is a vec3, in which case you can use the .z component.)
Basically now you need to experiment a bit and see what works on a particular game;))
Hope this helps out
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)