Okay so far I've done the following:
1. http://unity3d.com/get-unity/download/archive - Downloaded the built-in Unity shaders.
2. Disabled D3D11 support in the player settings and set D3D9 to use exclusive fullscreen.
I guess my first step is to modify these shaders with your changes and see if that fixes the shadows. I have a very basic scene - a few cubes and terrain, that's it. Hopefully I can fix the shaders at this basic level based on the info you provided.
I do not want to rely on Helix or 3DMigoto if possible.
Do your shader modifications here - https://github.com/DarkStarSwod/3d-fixes/tree/master/Unity/builtin_shaders-4.5.5 - still require 3DMigoto or Helix to make them work?
Interesting;)
- For Unity 5 games do you still need to hunt the shaders manually, or do you use a script on top of them? I see you have quite a few scripts there for handling shaders.
I started taking a look at Unity 5 games but from the OpenGL point of view;)
Every Unity game can be forced to render in OpenGL;)
So far I made some progress on this:
- Had to write more in the wrapper to allow a hybrid rendering mode.
- The OpenGL renderer is pretty old-fashioned - it's a mix of modern and pre-OpenGL 3.0 style. This means that I am now able to inject stereoscopy into the game in 2 ways:
- Normal mode through shaders.
- Modifying the Projection Matrix directly that is sent to the shaders. This way the shaders don't need to be modified anymore.
- The OpenGL renderer is pretty slow for some reason... Might be because I am using the "Layers of Fear" game for testing.
The problem that I have currently encountered and have no idea how to fix is:
- Everything looks perfect in 3D (using the Proj_Matrix stereofication method) at low convergence (approx 0.1). The moment I increase the convergence to something like 1.5 the shadows break. (They kinda look like they are rendered from the wrong eye's perspective). I looked through your shader fixes and noticed this one:
[code]
// Attach eye information to buffer that can be checked from a2d7f336b9c0abdf.
// For the right eye we negate pos.z and subtract 1 (to ensure it will
// definitely be negative):
float4 stereo = StereoParams.Load(0);
if (stereo.z == -1)
append.pos.z = -append.pos.z - 1;
append.color.rgba = float4(c.rgb * saturate(c.a*4), c.a);
pointBufferOutput.Append (append);
return float4(c.rgb * saturate(1-c.a*4), c.a);
}
[/code]
I am wondering why this is necessary and how to tackle this problem;)
Thank you in advance!
PS: I'll post the shaders in a few minutes;)
[quote="BenSmith"]Okay so far I've done the following:[/quote]Sorry for the delay in replying - I must have missed your post.
[quote]2. Disabled D3D11 support in the player settings and set D3D9 to use exclusive fullscreen.[/quote]This may not be necessary any more - recent versions of Unity now support exclusive fullscreen mode in DX11, though they have some issues with Alt+Tab, so DX9 mode is still more reliable.
[quote]I guess my first step is to modify these shaders with your changes and see if that fixes the shadows. I have a very basic scene - a few cubes and terrain, that's it. Hopefully I can fix the shaders at this basic level based on the info you provided.[/quote]If you can see the broken shadows that should be enough - IIRC in Unity 4 that needed deferred rendering.
[quote]I do not want to rely on Helix or 3DMigoto if possible.[/quote]
ok, in that case we will need to write a plugin for Unity to inject a texture that holds the stereo parameters from nvapi into the shaders. nvstereo.h has code to do this - it is available *somewhere* on the nvidia site, but can be hard to find (IIRC it was only ever shipped publicly in one of their samples). Probably the easiest place to find it is in 3DMigoto, though keep in mind that it is somewhat modified from the original (to support DX11 and to inject some extra info):
https://github.com/bo3b/3Dmigoto/blob/master/nvstereo.h
Of course that is C++ code - I assume for Unity we would need to translate that to C# (or does Unity have some support for C++ plugins?)
[quote="BenSmith"]Do your shader modifications here - https://github.com/DarkStarSwod/3d-fixes/tree/master/Unity/builtin_shaders-4.5.5 - still require 3dmigoto or helix to make them work?[/quote]
Those were designed to work under some constraints that you won't have to worry about - namely, that I couldn't add any additional inputs from the game / Unity.
This part declares the textures holding the stereo parameters as injected by Helix mod - you will still need these, but may need to adjust the sampler register number depending on what you do in the plugin:
[code]
sampler2D NVStereoTextureVS : SAMPLER0; // Helix mod default for vertex shaders
sampler2D NVStereoTexturePS : SAMPLER13; // Helix mod default for pixel shaders
[/code]
You won't need this part:
[code]
// We really want the inverse projection matrix, but all we have is the MV and
// MVP matrices. Helix can invert the MV matrix, which allows us to calculate
// the projection matrix as P = MV.I * MVP, then we can just invert the one
// field we actually care about instead of the entire matrix.
//
// This is only necessary as we can't get the game to pass in extra parameters
// when using Helix mod for fixes - game devs should just use the inverse
// projection matrix directly.
float4x4 HelixModMV_I : register(c180); // DX9Settings.ini set to pass the inverse MV matrix in c180
float4x4 HelixModMVP : register(c190); // DX9Settings.ini set to pass the MVP matrix to c190
[/code]
And this part (there are two of these) you can simply replace with unity_CameraInvProjection._m00:
[code]
matrix ProjectionMatrix = mul(UNITY_MATRIX_MVP, HelixModMV_I);
float InverseProjectionMatrix_m00 = 1 / ProjectionMatrix._m00;
[/code]
And of course these were from Unity 4.5.5, but you can look at the commit history to see what I actually changed and make the equivalent changes to the Unity 5 shaders.
Both Internal-DeferredShading.shader and Internal-PrePassLighting.shader follow the same pattern to fix, while Internal-PrePassCollectShadows.shader only needs the adjustment in the pixel shader and no changes in the vertex shader.
Here's an example of fixing Internal-PrePassCollectShadows.shader, but this is a decompiled variant so it won't look quite the same as the ones you are looking at (and I'm still making changes to the vertex shader to simplify things, but that won't be necessary for you):
https://github.com/DarkStarSword/3d-fixes/commit/7e4a46874ec08b438716acd5706cacf5fe8dce14
[quote="helifax"]Interesting;)
- For Unity 5 games do you still need to hunt the shaders manually? or do you use a script on top of them? I see you have quite some scripts there for handling shaders.[/quote]
I use a combination. Here's an example of my typical workflow:
[code]
$ cd /cygdrive/c/Steam/SteamApps/common/The\ Forest/
$ ~/3d-fixes/unity_asset_extractor.py theforest_data/Resources/* theforest_data/*.assets
$ cd extracted
$ ~/3d-fixes/extract_unity_shaders.py */*.shader
[/code]
For DX9 I'd then follow that up by using shadertool.py to auto-fix halo issues in the vertex shaders, and apply the tedious part of the shadow fix in the pixel shaders (the vertex shader part I just grab from my template). I haven't scripted this for DX11 yet, but DX11 Unity games tend to be Unity 5, where there are fewer broken effects than in Unity 4... I'll get to scripting that some day.
By default, extract_unity_shaders.py will only extract d3d9 and d3d11 shaders and will match the hashes used by Helix Mod (under ShaderCRCs, requires shaderasm.exe to have been compiled) and 3DMigoto (under ShaderFNVs). You can make it extract all the variants including OpenGL shaders ("opengl", "glcore", "gles" and "gles3" variants) by passing --deep-dir like this, but be warned that this can easily exceed the path character limit of Windows, which may prevent the extracted files from working if you try to open or delete them from explorer (this is purely an application bug, as cygwin has no problem with these no matter how long the filename ends up):
[code]
$ ~/3d-fixes/extract_unity_shaders.py */*.shader --deep-dir
[/code]
For any effects I did not fix by script, I'll hunt them in Helix Mod / 3DMigoto and then find the matching hash in the ShaderCRCs or ShaderFNVs directory and copy the extra headers in (the ShaderHeaders.json file that extract_unity_shaders.py produces is used by shadertool.py --lookup-header-json to do this for DX9 shaders, but that's purely for convenience).
I can likely make extract_unity_shaders.py duplicate the hash that you use in your OpenGL wrapper as well - what do you use?
[quote]I started taking a look at Unity 5 games but from the OpenGL point of view;)[/quote]Cool :)
[quote]Every Unity game can be forced to render in OpenGL;)
So far I made some progress on this:
- Had to write more in the wrapper to allow a hybrid rendering mode.
- The OpenGL renderer is pretty old-fashioned - it's a mix of modern and pre-OpenGL 3.0 style. This means that I am now able to inject stereoscopy into the game in 2 ways:
- Normal mode through shaders.
- Modifying the Projection Matrix directly that is sent to the shaders. This way the shaders don't need to be modified anymore.
[/quote]Most shaders in Unity only use a combined MVP matrix - does OpenGL give you the projection matrix directly, or are you using the idea I was thinking about in the Mad Max thread (if you are, I'm really interested to know if it works)?
[quote]The problem that I have currently encountered and have no idea how to fix is:
- Everything looks perfect in 3D (using the Proj_Matrix stereofication method) at low convergence (approx 0.1). The moment I increase the convergence to something like 1.5 the shadows break. (They kinda look like they are rendered from the wrong eye's perspective).[/quote]I guess initially I'd suggest following the same patterns I use to fix the shaders since we know that works. In Unity 5 the directional lighting shader (Hidden/Internal-PrePassCollectShadows.shader) has direct access to the inverse projection matrix so it can be fixed quite easily without needing to copy anything from other shaders, so it's probably worth tackling that first.
The point/spot/physical lighting shaders (Hidden/Internal-DeferredShading.shader and/or Hidden/Internal-PrePassLighting.shader) don't have it and either need it to be copied from the directional lighting shader, or derived like we did in Unity 4 (however, deriving it without copying the matrices from elsewhere will only work for point & spot lights - when it's used for physical lighting the matrices won't be valid).
[quote]I looked through your shader fixes and noticed this one:
[code]
// Attach eye information to buffer that can be checked from a2d7f336b9c0abdf.
// For the right eye we negate pos.z and subtract 1 (to ensure it will
// definitely be negative):
float4 stereo = StereoParams.Load(0);
if (stereo.z == -1)
append.pos.z = -append.pos.z - 1;
append.color.rgba = float4(c.rgb * saturate(c.a*4), c.a);
pointBufferOutput.Append (append);
return float4(c.rgb * saturate(1-c.a*4), c.a);
}
[/code]
I am wondering why this is necessary and how to tackle this problem;)[/quote]haha, no - you won't need this. This was for a specific effect (DoF blur) in the Viking Village demo, and I have never seen a real game use it where we couldn't simply disable the effect with no loss. If you're interested though, the problem was that the effect was using an append structured buffer to store a list of very bright pixels that would be highlighted by a later shader, but the structured buffer was shared between both eyes, so I needed to come up with a way to tag each entry with the eye it came from.
[quote="DarkStarSword"]
I can likely make extract_unity_shaders.py duplicate the hash that you use in your OpenGL wrapper as well - what do you use?
[/quote]
The way I dump the shaders:
- When the source code is loaded (before compilation) I grab it. I then calculate a CRC32 value over the whole information in the shader and write it in the filename. I do not know how 3DMigoto does it, but I expect it is something like this? (Maybe not a CRC32 but an MD5 sum or equivalent?)
[quote="DarkStarSword"]Most shaders in Unity only use a combined MVP matrix - does OpenGL give you the projection matrix directly, or are you using the idea I was thinking about in the Mad Max thread (if you are, I'm really interested to know if it works)?
[/quote]
The OpenGL renderer in Unity 5 is a bit weird;) It combines shaders with the fixed pipeline.
The way I modify the projection matrix is like this:
Snippet from GL instructions (traced).
[code]
glViewport(0,0,1280,720)
glMatrixMode(GL_PROJECTION)
glLoadMatrixf([0.001562,0.000000,0.000000,0.000000,0.000000,-0.002778,0.000000,0.000000,0.000000,0.000000,-0.019802,0.000000,-1.000000,0.999998,-0.980198,1.000000])
glMatrixMode(GL_MODELVIEW)
glMatrixMode(GL_MODELVIEW)
glLoadMatrixf([1.000000,0.000000,0.000000,0.000000,0.000000,1.000000,0.000000,0.000000,0.000000,0.000000,1.000000,0.000000,0.000000,0.000000,0.000000,1.000000])
[/code]
I just hook the "glMatrixMode(GL_PROJECTION)" and the next "glLoadMatrixf" and apply the stereo directly in the matrix that is sent to the shaders (kinda like Mad Max, but this is done automatically from the wrapper, not the shader side).
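For anyone following along, "applying the stereo directly in the matrix" looks roughly like this - just a sketch, assuming the usual clip.x += eye * separation * (clip.w - convergence) formula, written as GLSL-style column-major math to match the float[16] layout glLoadMatrixf takes (the real hook of course lives in the wrapper's C++, and the eye/sign conventions depend on the wrapper):
[code]
// Sketch only - assumes the standard separation/convergence formula.
mat4 stereoizeProjection(mat4 P, float eye, float sep, float conv)
{
    // fold "+ eye*sep*clip.w" into row 0 (mat4 indexing is [column][row])
    for (int c = 0; c < 4; c++)
        P[c][0] += eye * sep * P[c][3];
    // fold "- eye*sep*convergence" into the x translation term
    P[3][0] -= eye * sep * conv;
    return P;
}
[/code]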
Now OpenGL has built-in variables;) One of them is the "gl_ProjectionMatrix" variable that should contain the Projection matrix that is modified above ;) (So this Projection Matrix is already in Stereo.)
Calculating the inverse of this matrix can't be done with the inverse() function, since it is not available in GLSL #version 120 (but we can calculate it manually). Now I am unsure if I need the inverse of the Stereo-ProjectionMatrix or just the inverse of the Mono-ProjectionMatrix?
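(Single elements at least are cheap to get by hand - as a sketch, for a projection matrix whose first row is (m00, 0, skew, 0), the [0][0] element of the inverse is just the reciprocal:)
[code]
// GLSL 1.20 has no inverse(), but for a projection matrix whose first row is
// (m00, 0, skew, 0), the [0][0] element of the inverse is simply 1/m00:
float invProj_m00 = 1.0 / gl_ProjectionMatrix[0][0];
[/code]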
For example, this is a shader that is responsible for some shadows. (Notice the #ifdef blocks - it contains both the Vertex and Pixel source code.)
[code]
#version 120
uniform float g_pixelEnabled;
uniform float g_eye;
uniform float g_eye_separation;
uniform float g_convergence;
uniform vec4 g_custom_params;
uniform vec4 g_screeninfo;
#define FRAGMENT
#ifdef VERTEX
uniform vec4 unity_LightShadowBias;
void main ()
{
vec4 opos_1;
vec4 tmpvar_2;
tmpvar_2.w = 1.0;
tmpvar_2.xyz = gl_Vertex.xyz;
opos_1 = (gl_ModelViewProjectionMatrix * tmpvar_2);
vec4 clipPos_3;
clipPos_3.xyw = opos_1.xyw;
clipPos_3.z = (opos_1.z + clamp ((unity_LightShadowBias.x / opos_1.w), 0.0, 1.0));
clipPos_3.z = mix (clipPos_3.z, max (clipPos_3.z, -(opos_1.w)), unity_LightShadowBias.y);
opos_1 = clipPos_3;
gl_Position = clipPos_3;
}
#endif
#ifdef FRAGMENT
void main ()
{
gl_FragData[0] = vec4(0.0, 0.0, 0.0, 0.0);
}
#endif
[/code]
I tried a couple of things here, but nothing worked. What is interesting is that at convergence 0 (infinity) everything looks perfect;)) But increasing the convergence breaks the rendering:-s
This is another shader responsible for some reflections:
[code]
#version 120
uniform float g_pixelEnabled;
uniform float g_eye;
uniform float g_eye_separation;
uniform float g_convergence;
uniform vec4 g_custom_params;
uniform vec4 g_screeninfo;
#define FRAGMENT
#ifdef VERTEX
uniform vec4 _ProjectionParams;
uniform float _LightAsQuad;
varying vec4 xlv_TEXCOORD0;
varying vec3 xlv_TEXCOORD1;
void main ()
{
vec4 tmpvar_1;
vec3 tmpvar_2;
tmpvar_1 = (gl_ModelViewProjectionMatrix * gl_Vertex);
vec4 o_3;
vec4 tmpvar_4;
tmpvar_4 = (tmpvar_1 * 0.5);
vec2 tmpvar_5;
tmpvar_5.x = tmpvar_4.x;
tmpvar_5.y = (tmpvar_4.y * _ProjectionParams.x);
o_3.xy = (tmpvar_5 + tmpvar_4.w);
o_3.zw = tmpvar_1.zw;
tmpvar_2 = ((gl_ModelViewMatrix * gl_Vertex).xyz * vec3(-1.0, -1.0, 1.0));
vec3 tmpvar_6;
tmpvar_6 = mix (tmpvar_2, gl_Normal, vec3(_LightAsQuad));
tmpvar_2 = tmpvar_6;
gl_Position = tmpvar_1;
xlv_TEXCOORD0 = o_3;
xlv_TEXCOORD1 = tmpvar_6;
}
#endif
#ifdef FRAGMENT
uniform vec3 _WorldSpaceCameraPos;
uniform vec4 _ProjectionParams;
uniform vec4 _ZBufferParams;
uniform sampler2D _CameraDepthTexture;
uniform vec4 _LightPos;
uniform vec4 _LightColor;
uniform mat4 _CameraToWorld;
uniform sampler2D _LightTextureB0;
uniform vec4 unity_LightGammaCorrectionConsts;
uniform sampler2D _CameraGBufferTexture0;
uniform sampler2D _CameraGBufferTexture1;
uniform sampler2D _CameraGBufferTexture2;
uniform float _GlobalSpecularScale;
uniform float _InsideMirrorLighting;
varying vec4 xlv_TEXCOORD0;
varying vec3 xlv_TEXCOORD1;
void main ()
{
vec4 res_1;
vec3 specColor_2;
vec3 tmpvar_3;
vec2 tmpvar_4;
tmpvar_4 = (xlv_TEXCOORD0.xy / xlv_TEXCOORD0.w);
vec4 tmpvar_5;
tmpvar_5.w = 1.0;
tmpvar_5.xyz = ((xlv_TEXCOORD1 * (_ProjectionParams.z / xlv_TEXCOORD1.z)) * (1.0/((
(_ZBufferParams.x * texture2D (_CameraDepthTexture, tmpvar_4).x)
+ _ZBufferParams.y))));
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
vec3 tmpvar_7;
tmpvar_7 = (tmpvar_6 - _LightPos.xyz);
vec3 tmpvar_8;
tmpvar_8 = -(normalize(tmpvar_7));
vec4 tmpvar_9;
tmpvar_9 = texture2D (_CameraGBufferTexture0, tmpvar_4);
vec4 tmpvar_10;
tmpvar_10 = texture2D (_CameraGBufferTexture1, tmpvar_4);
tmpvar_3 = (_LightColor.xyz * texture2D (_LightTextureB0, vec2((dot (tmpvar_7, tmpvar_7) * _LightPos.w))).w);
vec3 tmpvar_11;
tmpvar_11 = normalize(((texture2D (_CameraGBufferTexture2, tmpvar_4).xyz * 2.0) - 1.0));
float tmpvar_12;
tmpvar_12 = max (0.0, dot (tmpvar_11, tmpvar_8));
specColor_2 = (tmpvar_10.xyz * (_GlobalSpecularScale * mix (1.0, 0.5, _InsideMirrorLighting)));
vec3 viewDir_13;
viewDir_13 = -(normalize((tmpvar_6 - _WorldSpaceCameraPos)));
float tmpvar_14;
tmpvar_14 = (1.0 - tmpvar_10.w);
vec3 tmpvar_15;
vec3 inVec_16;
inVec_16 = (tmpvar_8 + viewDir_13);
tmpvar_15 = (inVec_16 * inversesqrt(max (0.001,
dot (inVec_16, inVec_16)
)));
float tmpvar_17;
tmpvar_17 = max (0.0, dot (tmpvar_11, viewDir_13));
float tmpvar_18;
tmpvar_18 = max (0.0, dot (tmpvar_8, tmpvar_15));
float tmpvar_19;
tmpvar_19 = ((tmpvar_14 * tmpvar_14) * unity_LightGammaCorrectionConsts.w);
float tmpvar_20;
float tmpvar_21;
tmpvar_21 = (10.0 / log2((
((1.0 - tmpvar_14) * 0.968)
+ 0.03)));
tmpvar_20 = (tmpvar_21 * tmpvar_21);
float x_22;
x_22 = (1.0 - tmpvar_12);
float x_23;
x_23 = (1.0 - tmpvar_17);
float tmpvar_24;
tmpvar_24 = (0.5 + ((2.0 * tmpvar_18) * (tmpvar_18 * tmpvar_14)));
float x_25;
x_25 = (1.0 - tmpvar_18);
vec3 tmpvar_26;
tmpvar_26 = ((tmpvar_9.xyz * (tmpvar_3 *
(((1.0 + (
(tmpvar_24 - 1.0)
*
((x_22 * x_22) * ((x_22 * x_22) * x_22))
)) * (1.0 + (
(tmpvar_24 - 1.0)
*
((x_23 * x_23) * ((x_23 * x_23) * x_23))
))) * tmpvar_12)
)) + ((
max (0.0, (((
(1.0/((((
(tmpvar_12 * (1.0 - tmpvar_19))
+ tmpvar_19) * (
(tmpvar_17 * (1.0 - tmpvar_19))
+ tmpvar_19)) + 0.0001)))
*
(pow (max (0.0, dot (tmpvar_11, tmpvar_15)), tmpvar_20) * ((tmpvar_20 + 1.0) * unity_LightGammaCorrectionConsts.y))
) * tmpvar_12) * unity_LightGammaCorrectionConsts.x))
* tmpvar_3) * (specColor_2 +
((1.0 - specColor_2) * ((x_25 * x_25) * ((x_25 * x_25) * x_25)))
)));
vec4 tmpvar_27;
tmpvar_27.w = 1.0;
tmpvar_27.xyz = tmpvar_26;
res_1 = tmpvar_27;
if ((tmpvar_9.w < 0.9960784)) {
res_1.xyz = (tmpvar_26 + ((
((tmpvar_9.xyz * tmpvar_3) * max (0.0, dot (-(tmpvar_11), tmpvar_8)))
*
(1.0 - tmpvar_9.w)
) * vec3(0.8, 1.0, 0.6)));
};
gl_FragData[0] = res_1;
}
#endif
[/code]
I compared them with your shaders (HLSL) and indeed the only thing that is "written" into the shaders is an MVP. However, like I said above, I also have access to the built-in variable holding the Projection Matrix;)
Any ideas? :)) Thanks in advance
[quote="helifax"]Now OpenGL has built in-variables;) One of them is the "gl_ProjectionMatrix" variable that should contain the Projection matrix that is modified above ;) (So this projection Matrix is already in Stereo)[/quote]
ok, I've seen that referenced in a bunch of Unity shaders. Be aware that for shadows that might only be valid for point and spot lights that are drawn in the world, since directional and physical lights draw a full screen quad, so the active projection matrix will just be the identity matrix (I think - I never did confirm if it was actually an identity matrix, but it definitely wasn't anything useful). The directional lighting shader will have the real inverse projection matrix in unity_CameraInvProjection instead, but the physical lighting shader won't have access to it anywhere.
[quote]Now I am unsure if I need the inverse of the Stereo-ProjectionMatrix or just the inverse of the Mono-ProjectionMatrix?[/quote]You only need to determine the horizontal FOV, so either will work.
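To spell out why either works: the stereo adjustment only adds eye*separation times row 3 (plus a translation offset) to row 0 of the projection matrix, and row 3 has no x term, so the [0][0] element - which is all the horizontal FOV calculation needs - is identical in both. For illustration (monoProj / stereoProj are hypothetical names):
[code]
uniform mat4 monoProj, stereoProj; // hypothetical names for illustration
// the stereo skew never touches the [0][0] element, so:
float fovFactor = 1.0 / monoProj[0][0]; // == 1.0 / stereoProj[0][0]
[/code]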
[quote]For example, this is a shader that is responsible for some shadows. (Notice the #ifdef blocks - it contains both the Vertex and Pixel source code.)[/quote]This can't be the right shader, because:
[quote]
[code]
#ifdef FRAGMENT
void main ()
{
gl_FragData[0] = vec4(0.0, 0.0, 0.0, 0.0);
}
[/code]
[/quote]
That's just outputting 0 from the pixel shader.
I looked at the Unity shaders' Cg source code (it's on their download page) for where unity_LightShadowBias is used like that. Following the trail through a few include files and pre-processor macros, that shader is one of the Unity 5 "Standard" shaders, and that particular instance appears to just be a shadow caster - it is not the shader that actually draws the shadows to the screen, which is the one you need to fix.
If you find the directional shadow shader, you should see code like this (this is Cg, so the code you see will look a bit different as it has been through a Cg to GLSL compiler):
[code]
fixed4 frag_hard (v2f i) : SV_Target
{
float zdepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
// 0..1 linear depth, 0 at near plane, 1 at far plane.
float depth = lerp (Linear01Depth(zdepth), zdepth, unity_OrthoParams.w);
// view position calculation for perspective & ortho cases
float3 vposPersp = i.ray * depth;
float3 vposOrtho = i.orthoPos.xyz;
vposOrtho.z = lerp(i.orthoPos.z, i.orthoPos.w, zdepth);
// pick the perspective or ortho position as needed
float3 vpos = lerp (vposPersp, vposOrtho, unity_OrthoParams.w);
float4 wpos = mul (_CameraToWorld, float4(vpos,1));
fixed4 cascadeWeights = GET_CASCADE_WEIGHTS (wpos, vpos.z);
half shadow = unity_sampleShadowmap( GET_SHADOW_COORDINATES(wpos, cascadeWeights) );
shadow += GET_SHADOW_FADE(wpos, vpos.z);
fixed4 res = shadow;
return res;
}
[/code]
And you will need to add a view-space uncorrection (that is, subtract the usual formula multiplied by unity_CameraInvProjection._m00) to vpos before it is used to calculate wpos.
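In your GLSL terms that would look something like this - just an untested sketch: I'm assuming unity_CameraInvProjection is also exposed to the GLSL variants under the same name, and the sign of the z / convergence terms depends on your view-space convention:
[code]
uniform float g_eye, g_eye_separation, g_convergence; // from your wrapper
uniform mat4 unity_CameraInvProjection; // assumed available to the GLSL variants

// inside main(), before the world-space position is calculated:
vpos.x -= g_eye * g_eye_separation * (vpos.z - g_convergence)
        * unity_CameraInvProjection[0][0];
vec4 wpos = _CameraToWorld * vec4(vpos, 1.0);
[/code]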
[quote]I tried a couple of things here, but nothing worked. What is interesting is that at convergence 0 (infinity) everything looks perfect;)) But increasing the convergence breaks the rendering:-s[/quote]
That's not terribly surprising - when convergence is 0 everything is on a flat plane out at infinity. I found the same thing in the early days when I was trying to fix shadows in World of Diving - later, when I looked at Dreamfall, I realised it was because I was trying to fix them in completely the wrong spot (you can probably still find this in the early history of my 3d-fixes repository).
[quote]This is another shader responsible for some reflections:[/quote]Well, I don't think this is a standard Unity shader - I can't find any references to "_InsideMirrorLighting" in the Unity shaders, so it must be a custom or third party shader. At a quick glance it looks like it might need the same type of correction as the lighting shaders (i.e. a view-space [un]correction to tmpvar_5 before the _CameraToWorld matrix is applied), but I'd need to study it in more detail to be sure.
[quote="helifax"][quote="DarkStarSword"]
I can likely make extract_unity_shaders.py duplicate the hash that you use in your OpenGL wrapper as well - what do you use?
[/quote]
The way I dump the shaders:
- When the source code is loaded (before compilation) I grab it. I then calculate a CRC32 value over the whole information in the shader and write it in the filename.[/quote]
ok, that should be fairly straightforward, since that's calculated before compilation and that's the form Unity stores the OpenGL shaders in :)
Only thing I'm not sure on is the difference / significance of the "opengl", "glcore", "gles" and "gles3" variants that Unity stores... any idea?
[quote]I do not know how 3DMigoto does it, but I expect is something like this? (Maybe not a CRC32 but a MD5 sum or equivalent?).[/quote]
3DMigoto uses an FNV-like hash of the shader bytecode (I say FNV-like because it's not seeded correctly, but whatever - it works). Since Unity stores the DX11 bytecode directly in their compiled .shader files (more or less - they use a simple encoding to turn the upper & lower 4 bits of each byte into the letters 'a' to 'p') I can easily calculate the same hash from that. Helix Mod also calculates the CRC32 of the bytecode, but Unity stores it as assembly so I call out to an external tool to assemble the shaders then I calculate the CRC32.
Big thanks for the help;)
So taking the second shader as an example;) That one has the _CameraToWorld:
[code]
vec4 res_1;
vec3 specColor_2;
vec3 tmpvar_3;
vec2 tmpvar_4;
// I bet this is a clip coord
tmpvar_4 = (xlv_TEXCOORD0.xy / xlv_TEXCOORD0.w);
vec4 tmpvar_5;
// Position in View based on all the parameters and order of operation
tmpvar_5.w = 1.0;
tmpvar_5.xyz = ((xlv_TEXCOORD1 * (_ProjectionParams.z / xlv_TEXCOORD1.z)) * (1.0/((
(_ZBufferParams.x * texture2D (_CameraDepthTexture, tmpvar_4).x)
+ _ZBufferParams.y))));
// Position in World ?
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]
So, basically what I need to do here is:
[code]
// Apply the view-space uncorrection
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
// Now we use it to get the world Pos
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]
However, I have a few doubts:
- Should we first multiply tmpvar_5 with gl_ProjectionMatrix ?
- Then apply the above correction:
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
- "Cast" it back on view by tmpvar_5 * inverse(gl_ProjectionMatrix) ?
Also, I had this kinda of issue in SOMA as well;)
The way I fixed it there was:
[code]
float AspectRatio = g_screeninfo.x / g_screeninfo.y;
// Calculate the correct factor to apply to shadows/lights.
// This is the FOVy that we need
float FOVy = gl_ProjectionMatrix[1].y;
float FOVx = gl_ProjectionMatrix[0].x;
float fovFactor = (FOVy / FOVx) /AspectRatio;
// Stereo Correction
vec4 temp = vec4(tmpvar_5, 1);
temp = temp * gl_ProjectionMatrix;
temp.x += g_eye * g_eye_separation * -(temp.z - g_convergence) * fovFactor;
temp = temp * inverse(gl_ProjectionMatrix);
tmpvar_5 = temp.xyz;
[/code]
Basically I needed to calculate a factor that takes into account both the vertical & horizontal FOV with regard to the aspect ratio. The Projection Matrix has both FOVs, and the aspect ratio is easy to calculate as well.
I wonder if I don't need to do the same thing here;)
In any case I still think I need to multiply by the projection matrix before applying the correction. Is this correct?
I've pushed up a change to extract_unity_shaders.py to hopefully match the CRC32 that your wrapper uses, but I need you to check if it does actually match. This will also mean that OpenGL shaders (at least of type "opengl", "glcore", "gles" and "gles3") will be extracted without having to specify --deep-dir.
I notice your filenames have a number prefixing the shaders which I haven't duplicated - I'm guessing that's something like a shader index number? Does it matter if it is missing?
I also noticed in the Wolfenstein fix that not all the CRC32s were padded to 8 characters, so I haven't padded them either.
Unity's .shader files store vertex and pixel shaders separately, but it looks like the OpenGL "pixel" shaders are always blank as they are actually combined with the vertex shaders... the result is that all the shaders extracted by this tool are currently marked as vertex shaders. How does this work in OpenGL if both are combined - I see in your fixes you do have both vertex and pixel shaders, so I assume that after compilation they are two separate things (which is the same in HLSL, but we typically don't see that since we are working with shaders that have already been compiled)? Are the CRCs for corresponding vertex and pixel shaders going to be the same?
Here's a list of shaders that it extracted from the directional shadow shader of World of Diving, which uses the standard Unity shaders - I'm hoping if I got it right you might see some of these CRCs dumped from the game. You should generally run the tool yourself though, as some games do replace the lighting shaders with custom ones, and these have changed a bit in different Unity versions:
[code]
ShaderGL/Hidden_Internal-PrePassCollectShadows/glcore:
Vertex_22b512e2.glsl
Vertex_307e1744.glsl
Vertex_6ee8798b.glsl
Vertex_7b34f968.glsl
Vertex_b05d2bf9.glsl
Vertex_bb5b8f0b.glsl
Vertex_ca5629c8.glsl
Vertex_d10d87e7.glsl
ShaderGL/Hidden_Internal-PrePassCollectShadows/gles:
Vertex_1a2502c7.glsl
Vertex_1cc51bbd.glsl
Vertex_1e4b65c1.glsl
Vertex_2533f40c.glsl
Vertex_3a519007.glsl
Vertex_474afb07.glsl
Vertex_4bd8d47a.glsl
Vertex_662be4c2.glsl
Vertex_6f8092dc.glsl
Vertex_920b3644.glsl
Vertex_96e09717.glsl
Vertex_a38300bc.glsl
Vertex_d10ecd29.glsl
Vertex_e62913dd.glsl
Vertex_e8ccb9c7.glsl
Vertex_ea70d690.glsl
ShaderGL/Hidden_Internal-PrePassCollectShadows/gles3:
Vertex_48c0d436.glsl
Vertex_61826e7d.glsl
Vertex_6bf773a.glsl
Vertex_6ca2c71.glsl
Vertex_a164ec69.glsl
Vertex_aa84279b.glsl
Vertex_ab13d044.glsl
Vertex_eabcd65.glsl
ShaderGL/Hidden_Internal-PrePassCollectShadows/opengl:
Vertex_2fdad958.glsl
Vertex_329b442d.glsl
Vertex_375502ec.glsl
Vertex_8f922199.glsl
Vertex_91dc4da9.glsl
Vertex_b9a6a507.glsl
Vertex_d714f39b.glsl
Vertex_f09d3c67.glsl
[/code]
[quote="helifax"]So, basically what I need here to do is:
[code]
// Apply the view-space uncorrection
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
// Now we use it to get the world Pos
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]
[/quote]
Not quite - you need to divide by gl_ProjectionMatrix[0].x since that's the projection matrix and not the inverse projection matrix.
Of course, since this isn't one of the lighting shaders I've fixed previously I can't be sure this is the right pattern, and it also depends on gl_ProjectionMatrix being valid, though that is a reasonable assumption given you said this is a reflection.
[quote]
However, I have a few doubts:
- Should we first multiply tmpvar_5 with gl_ProjectionMatrix ?
- Then apply the above correction:
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
- "Cast" it back on view by tmpvar_5 * inverse(gl_ProjectionMatrix) ?
[/quote]
If you converted it to projection space coordinates and back again, you would use the normal formula, not the view-space variant with the extra multiply.
But we shouldn't need to, because the multiplication by the inverse projection matrix[0].x (or division by projection matrix[0].x) is mathematically a simplification of that (provided we are working in view-space coordinates and not world-space).
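Spelled out (sign conventions aside, and writing P for the projection matrix):
[code]
// the projection-space round trip you proposed:
//   clip    = P * view                        // project
//   clip.x -= eye * sep * (clip.w - conv)     // the normal clip-space formula
//   view'   = inverse(P) * clip               // unproject
// Only clip.x changes, by d = -eye*sep*(clip.w - conv), and unprojecting maps
// that change onto view.x as d * invP[0][0] = d / P[0][0], so the round trip
// collapses to the view-space one-liner:
//   view.x -= eye * sep * (clip.w - conv) / P[0][0]
// with clip.w recovered from view.z (clip.w == -view.z for a GL projection).
[/code]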
The ways in which you can handle shaders in OpenGL can be so different that they will "produce" many variants.
According to the "Best Practice Rules":
- Create Shader Program => OGL gives Program ID 100
- Create Vertex Shader => OGL gives ID 101
- Create Pixel Shader => OGL gives ID 102
- Get the Shader Source Code and load it into the Vertex/Pixel shaders
- Compile Vertex/Pixel Shader
- Link Vertex/Pixel Shaders with Program. In our case 100.
According to the Rules, once the Program is COMPLETE delete the shaders:
- Detach Vertex/Pixel Shaders.
- Delete Vertex/Pixel Shaders. They are now in ASM form and attached to the Program.
- This will release IDs 101 and 102.
Now, different engines respect the rules to varying degrees:
- Rage -> Respects the rules. That's why every time you create a shader you get the same ID for Vertex/Pixel, as it was freed.
- SOMA -> Doesn't delete the shaders afterwards, so the IDs increase every time. After compiling, it uses the same existing shaders (vertex + pixel) to create new programs with different combinations of Vertex + Pixel IDs. This was a pain in the ass, as I couldn't rely only on the CRC of the current shader (the same pixel shader needed correction in one combination but not in all of them). Instead, I had to make a link between Vertex_CRC + Pixel_CRC. (If you look in the fix you will see some shaders with a double CRC: 10_Pixel_CRC-Pixel_CRC-Of-attached-Vertex.)
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader they filter based on type and right the correct "#define" in the sourceCode of that Shader;)
That is why you just dump the vertex shaders (they are probably generated the first). My wrapper dumps both Vertex + Pixel (with different CRC as the "#define" differs in the code).
Since I dump them as seprate files (Ps, Vs) when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.
In Wolfenstein, I had a bug in the wrapper that got fixed. Basically I was calculating the CRC32 over the modified source code (instead of the unaltered source code). That got fixed now;)
It doesn't matter that the CRC is not 8 digits exact;) I didn't even realise is not padded to 8 digits:)) It shouldn't matter as as a number 01 == 1 ;))
Big thx for the crc dump;)) I'll give it a look at home and see if I can find out something or how to fix the shadows;))
The way OpenGL in which you can handle the shaders can be soo different that will "produce" many variants.
According to the "Best Practice Rules":
- Create Shader Program => OGL gives ProgId 100
- Create Shader Vertex => OGL gives ProgId 101
- Create Shader Pixel => OGL gives ProgId 102
- Get Shader Source Code and load it in the Vertex/Pixel
- Compile Vertex/Pixel Shader
- Link Vertex/Pixel Shaders with Program. In our case 100.
According to the Rules, once the Program is COMPLETE delete the shaders:
- Detach Vertex/Pixel Shaders.
- Delete Vertex/Pixel Shaders. They now are in ASM for and attached to the Program.
- This will release IDS 101 and IDS 102.
Now, different engines respect more or less the rules:
- Rage -> Respects the rule. That's why everytime you create a shader you get the same ID for Vertex/Pixel as it was freed.
- Soma-> Dosn't delete the shaders afterwards. So the ID increases everytime. After compiled, it uses the same Existing shaders (vertex+pixel) to create new programs with different
combinations of Vertex+Pixel IDs. This was a pain in the ass as I couldn't rely only on the CRC of the current shader (The same pixel shader needed correction in a combination but not in all of them). Instead, I had to make a link between Vertex_CRC+ Pixel_CRC.
(If you look in the fix you will see some shaders with double CRC : 10_Pixel_CRC-Pixel_CRC-Of-attached-Vertex).
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader they filter based on type and right the correct "#define" in the sourceCode of that Shader;)
That is why you just dump the vertex shaders (they are probably generated the first). My wrapper dumps both Vertex + Pixel (with different CRC as the "#define" differs in the code).
Since I dump them as seprate files (Ps, Vs) when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.
In Wolfenstein, I had a bug in the wrapper that got fixed. Basically I was calculating the CRC32 over the modified source code (instead of the unaltered source code). That got fixed now;)
It doesn't matter that the CRC is not 8 digits exact;) I didn't even realise is not padded to 8 digits:)) It shouldn't matter as as a number 01 == 1 ;))
Big thx for the crc dump;)) I'll give it a look at home and see if I can find out something or how to fix the shadows;))
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
[quote="DarkStarSword"][quote="helifax"]So, basically what I need here to do is:
[code]
// Apply the view-space uncorrection
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
// Now we use it to get the world Pos
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]
[/quote]
Not quite - you need to divide by gl_ProjectionMatrix[0].x since that's the projection matrix and not the inverse projection matrix.
Of course, since this isn't one of the lighting shaders I've fixed previously I can't be sure this is the right pattern, and it also depends on gl_ProjectionMatrix being valid, though that is a reasonable assumption given you said this is a reflection.
[quote]
However, I have a few doubts:
- Should we first multiply tmpvar_5 with gl_ProjectionMatrix ?
- Then apply the above correction:
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
- "Cast" it back on view by tmpvar_5 * inverse(gl_ProjectionMatrix) ?
[/quote]
If you converted it to projection space coordinates and back again, you would use the normal formula, not the view-space variant with the extra multiply.
But we shouldn't need to, because the multiplication by the inverse projection matrix[0].x (or division by projection matrix[0].x) is mathematically a simplification of that (provided we are working in view-space coordinates and not world-space).
[/quote]
RIGHT!!! That makes perfect sense! Now I remember, why we do that one;)) I was a bit confused for a second there;))
Big thx! Will defo check it out and see if it works;))
// Now we use it to get the world Pos
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
Not quite - you need to divide by gl_ProjectionMatrix[0].x since that's the projection matrix and not the inverse projection matrix.
Of course, since this isn't one of the lighting shaders I've fixed previously I can't be sure this is the right pattern, and it also depends on gl_ProjectionMatrix being valid, though that is a reasonable assumption given you said this is a reflection.
However, I have a few doubts:
- Should we first multiply tmpvar_5 with gl_ProjectionMatrix ?
- Then apply the above correction:
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
- "Cast" it back on view by tmpvar_5 * inverse(gl_ProjectionMatrix) ?
If you converted it to projection space coordinates and back again, you would use the normal formula, not the view-space variant with the extra multiply.
But we shouldn't need to, because the multiplication by the inverse projection matrix[0].x (or division by projection matrix[0].x) is mathematically a simplification of that (provided we are working in view-space coordinates and not world-space).
RIGHT!!! That makes perfect sense! Now I remember, why we do that one;)) I was a bit confused for a second there;))
Big thx! Will defo check it out and see if it works;))
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
[quote="helifax"]
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader they filter based on type and right the correct "#define" in the sourceCode of that Shader;)
That is why you just dump the vertex shaders (they are probably generated the first). My wrapper dumps both Vertex + Pixel (with different CRC as the "#define" differs in the code).
Since I dump them as seprate files (Ps, Vs) when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.[/quote]Oh, I see
Looking at the shader you posted above, I see this:
[code]
#version 120
uniform float g_pixelEnabled;
uniform float g_eye;
uniform float g_eye_separation;
uniform float g_convergence;
uniform vec4 g_custom_params;
uniform vec4 g_screeninfo;
#define FRAGMENT
[/code]
I've seen the "#version 120" in the .shader files, and from what you are saying I'll need to insert the "#define FRAGMENT". I'm guessing everything else between those two are lines inserted by your wrapper that I won't need to worry about? There's also a !!GLES3 (or !!GLES, or !!GLSL, or !!GL2x) in the .shader files that I have already stripped out.
Notably the "glcore" and "gles3" variants have "#version 100" (or "#version 300 es" for gles3) after the "#ifdef VERTEX" / "#ifdef FRAGMENT" instead of at the top of the shader, so I guess they would be a little different, but I'm not sure if they are even relevant for PC?
That "#version 120" string seems to match the "opengl" variants I'm seeing, and the "gles" variants use "#version 100" at the top.
helifax said:
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader they filter based on type and right the correct "#define" in the sourceCode of that Shader;)
That is why you just dump the vertex shaders (they are probably generated the first). My wrapper dumps both Vertex + Pixel (with different CRC as the "#define" differs in the code).
Since I dump them as seprate files (Ps, Vs) when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.
Oh, I see
Looking at the shader you posted above, I see this:
I've seen the "#version 120" in the .shader files, and from what you are saying I'll need to insert the "#define FRAGMENT". I'm guessing everything else between those two are lines inserted by your wrapper that I won't need to worry about? There's also a !!GLES3 (or !!GLES, or !!GLSL, or !!GL2x) in the .shader files that I have already stripped out.
Notably the "glcore" and "gles3" variants have "#version 100" (or "#version 300 es" for gles3) after the "#ifdef VERTEX" / "#ifdef FRAGMENT" instead of at the top of the shader, so I guess they would be a little different, but I'm not sure if they are even relevant for PC?
That "#version 120" string seems to match the "opengl" variants I'm seeing, and the "gles" variants use "#version 100" at the top.
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
[quote="DarkStarSword"][quote="helifax"]
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader they filter based on type and right the correct "#define" in the sourceCode of that Shader;)
That is why you just dump the vertex shaders (they are probably generated the first). My wrapper dumps both Vertex + Pixel (with different CRC as the "#define" differs in the code).
Since I dump them as seprate files (Ps, Vs) when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.[/quote]Oh, I see
Looking at the shader you posted above, I see this:
[code]
#version 120
uniform float g_pixelEnabled;
uniform float g_eye;
uniform float g_eye_separation;
uniform float g_convergence;
uniform vec4 g_custom_params;
uniform vec4 g_screeninfo;
#define FRAGMENT
[/code]
I've seen the "#version 120" in the .shader files, and from what you are saying I'll need to insert the "#define FRAGMENT". I'm guessing everything else between those two are lines inserted by your wrapper that I won't need to worry about? There's also a !!GLES3 (or !!GLES, or !!GLSL, or !!GL2x) in the .shader files that I have already stripped out.
Notably the "glcore" and "gles3" variants have "#version 100" (or "#version 300 es" for gles3) after the "#ifdef VERTEX" / "#ifdef FRAGMENT" instead of at the top of the shader, so I guess they would be a little different, but I'm not sure if they are even relevant for PC?
That "#version 120" string seems to match the "opengl" variants I'm seeing, and the "gles" variants use "#version 100" at the top.
[/quote]
Yes, all the uniforms there are inserted by my wrapper. I use them to basically stereorize the shader;) Normally you should see the "Stereo" formula after the last gl_Position found in the file. In this case however, is not there. This is because I stereorize the Projection from the CPU side;)
Yes, gles variants sound like OpenGL ES and are intended for Mobile devices. You can discard those on PC;)
The #version 100 tells what version of GLSL to use. #100 is basically the first and very very old.
Even version 120 is very old now: https://en.wikipedia.org/wiki/OpenGL_Shading_Language
Basically their OpenGL renderer is very very old;) or maybe is best said: made to work with very old hardware;)
helifax said:
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader they filter based on type and right the correct "#define" in the sourceCode of that Shader;)
That is why you just dump the vertex shaders (they are probably generated the first). My wrapper dumps both Vertex + Pixel (with different CRC as the "#define" differs in the code).
Since I dump them as seprate files (Ps, Vs) when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.
Oh, I see
Looking at the shader you posted above, I see this:
I've seen the "#version 120" in the .shader files, and from what you are saying I'll need to insert the "#define FRAGMENT". I'm guessing everything else between those two are lines inserted by your wrapper that I won't need to worry about? There's also a !!GLES3 (or !!GLES, or !!GLSL, or !!GL2x) in the .shader files that I have already stripped out.
Notably the "glcore" and "gles3" variants have "#version 100" (or "#version 300 es" for gles3) after the "#ifdef VERTEX" / "#ifdef FRAGMENT" instead of at the top of the shader, so I guess they would be a little different, but I'm not sure if they are even relevant for PC?
That "#version 120" string seems to match the "opengl" variants I'm seeing, and the "gles" variants use "#version 100" at the top.
Yes, all the uniforms there are inserted by my wrapper. I use them to basically stereorize the shader;) Normally you should see the "Stereo" formula after the last gl_Position found in the file. In this case however, is not there. This is because I stereorize the Projection from the CPU side;)
Yes, gles variants sound like OpenGL ES and are intended for Mobile devices. You can discard those on PC;)
The #version 100 tells what version of GLSL to use. #100 is basically the first and very very old.
Even version 120 is very old now: https://en.wikipedia.org/wiki/OpenGL_Shading_Language
Basically their OpenGL renderer is very very old;) or maybe is best said: made to work with very old hardware;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
1. http://unity3d.com/get-unity/download/archive - Downloaded built in unity shaders.
2. Disable D3D11 support in the player settings and set D3D9 to use exclusive fullscreen.
I guess my first step is to modify these shaders with your changes. See if that fixes the shadows. I have a very basic scene, a few cubes and terrain, that's it. Hopefully I can fix the shaders at this basic level based upon your info you provided.
I do not want to rely on helix or 3dmigoto if possible.
- For Unity 5 games do you still need to hunt the shaders manually? or do you use a script on top of them? I see you have quite some scripts there for handling shaders.
I started taking a look at Unity 5 games but from the OpenGL point of view;)
Every Unity game can be forced to render in OpenGL;)
So far I made some progress on this:
- Had to write more in the wrapper to allow a hybrid rendering mode.
- The OpenGL renderer is pretty oldish style. Is a mix of modern with Pre-OGL 3.0 style. This means that I am now able to inject stereocopy in the game in 2 ways:
- Normal mode through shaders.
- Modifying the Projection Matrix directly that is sent to the shaders. This way the shaders don't need to be modified anymore.
- The OpenGL renderer is pretty slow for some reason... Might be because I am using "Layers of Fear" game for testing.
The problem that I currently encountered and have no idea how to fix currently is:
- Everything looks perfect in 3D (using the Proj_Matrix stereofication method) at low convergence (approx 0.1). The moment I am increasing the convergence to something like 1.5 the shadows break. (They kinda look like they are rendered from the wrong eye perspective). I looked through you shader fixes and noticed this one:
I am wandering why is this necessary or how to tackle this problem;)
Thank you in advance!
PS: I'll post the shaders in a few minutes;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
This may not be necessary any more - recent versions of Unity now support exclusive fullscreen mode in DX11, though they have some issues with Alt+Tab, so DX9 mode is still more reliable.
If you can see the broken shadows it should be enough - IIRC in Unity 4 that needed deferred rendering.
ok, in that case we will need to write a plugin for Unity to inject a texture that holds the stereo parameters from nvapi to the shaders. nvstereo.h has code to do this - it is available *somewhere* on the nvidia site, but can be hard to find (IIRC it was only ever shipped publicly in one of their samples). Probably the easiest place to find it is in 3DMigoto, though keep in mind that it is somewhat modified from the original (to support DX11 and to inject some extra info):
https://github.com/bo3b/3Dmigoto/blob/master/nvstereo.h
Of course that is C++ code - I assume for Unity we would need to translate that to C# (or does Unity have some support for C++ plugins?)
Those were designed to work under some constraints that you won't have to worry about - namely, that I couldn't add any additional inputs from the game / Unity.
This part declares the textures holding the stereo parameters as injected by Helix mod - you will still need these, but may need to adjust the sampler register number depending on what you do in the plugin:
You won't need this part:
And this part (there's two of these) you can simply replace with unity_CameraInvProjection._m00:
And of course these were from Unity 4.5.5, but you can look at the commit history to see what I actually changed and make the equivalent changes to the Unity 5 shaders.
Both Internal-DeferredShading.shader and Internal-PrePassLighting.shader follow the same pattern to fix, while Internal-PrePassCollectShadows.shader only needs the adjustment in the pixel shader and no changes in the vertex shader.
Here's an example of fixing Internal-PrePassCollectShadows.shader, but this is a decompiled variant so it won't look quite the same as the ones you are looking at (and I'm still making changes to the vertex shader to simplify things, but that won't be necessary for you):
https://github.com/DarkStarSword/3d-fixes/commit/7e4a46874ec08b438716acd5706cacf5fe8dce14
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD
Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword
I use a combination. Here's an example of my typical workflow:
For DX9 I'd then follow that up by using shadertool.py to auto-fix halo issues in the vertex shaders, and apply the tedious part of the shadow fix in the pixel shaders (the vertex shader part I just grab from my template). I haven't scripted this for DX11 yet, but DX11 Unity games tend to be Unity 5, where there are fewer broken effects than Unity 4... I'll get to scripting that some day.
By default, extract_unity_shaders.py will only extract d3d9 and d3d11 shaders and will match the hashes used by Helix Mod (under ShaderCRCs, requires shaderasm.exe to have been compiled) and 3DMigoto (under ShaderFNVs). You can make it extract all the variants including OpenGL shaders ("opengl", "glcore", "gles" and "gles3" variants) by passing --deep-dir, but be warned that this can easily exceed the path character limit of Windows, which may prevent the extracted files working if you try to open or delete them from explorer (this is purely an application bug, as cygwin has no problem with these no matter how long the filename ends up).
For any effects I did not fix by script, I'll hunt them in Helix Mod / 3DMigoto and then find the matching hash in the ShaderCRCs or ShaderFNVs directory and copy the extra headers in (the ShaderHeaders.json file that extract_unity_shaders.py produces is used by shadertool.py --lookup-header-json to do this for DX9 shaders, but that's purely for convenience).
I can likely make extract_unity_shaders.py duplicate the hash that you use in your OpenGL wrapper as well - what do you use?
Cool :)
Most shaders in Unity only use a combined MVP matrix - does OpenGL give you the projection matrix directly, or are you using the idea I was thinking about in the Mad Max thread (if you are, I'm really interested to know if it works)?
I guess initially I'd suggest following the same patterns I use to fix the shaders since we know that works. In Unity 5 the directional lighting shader (Hidden/Internal-PrePassCollectShadows.shader) has direct access to the inverse projection matrix so it can be fixed quite easily without needing to copy anything from other shaders, so it's probably worth tackling that first.
The point/spot/physical lighting shaders (Hidden/Internal-DeferredShading.shader and/or Hidden/Internal-PrePassLighting.shader) don't have it and either need it to be copied from the directional lighting shader, or derived like we did in Unity 4 (however, deriving it without copying the matrices from elsewhere will only work for point & spot lights - when it's used for physical lighting the matrices won't be valid).
haha, no you won't need this. This was for a specific effect (DoF blur) in the Viking Village demo, and I have never seen a real game use it where we couldn't simply disable the effect with no loss. If you're interested though, the problem was that the effect was using an append structured buffer to store a list of very bright pixels that would be highlighted by a later shader, but the structured buffer was shared between both eyes, so I needed to come up with a way to tag each entry with which eye they were from.
The way I dump the shaders:
- When the source code is loaded (before compilation) I grab it. I then calculate a CRC32 value over the whole information in the shader and write it in the filename. I do not know how 3DMigoto does it, but I expect it is something like this? (Maybe not a CRC32 but an MD5 sum or equivalent?).
The OpenGL renderer in Unity 5 is a bit weird;) It combines shaders with the fixed pipeline.
The way I modify the projection matrix is like this (based on a traced snippet of GL instructions):
I just hook the "glMatrixMode(GL_PROJECTION)" and the next "glLoadMatrixf" and apply the stereo directly in the matrix that is sent to the shaders. (kinda like MadMax but this is done automatically from the wrapper not the shader side).
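To make that concrete, here is a minimal sketch of the matrix modification being described - expressed in GLSL for readability, though the wrapper actually does this CPU-side in C++; the function name and parameters are placeholders of mine, not the wrapper's real code:
[code]
// Sketch: fold the standard clip-space stereo formula
//   clip.x' = clip.x + eye * separation * (clip.w - convergence)
// directly into the projection matrix. GLSL mat4 is column-major,
// so P[col][row]; row 0 produces clip.x, row 3 produces clip.w.
mat4 stereoize(mat4 P, float eye, float sep, float conv)
{
    mat4 S = P;
    // row0 += eye*sep * row3 (adds eye*sep*clip.w to clip.x)
    S[0][0] += eye * sep * P[0][3];
    S[1][0] += eye * sep * P[1][3];
    S[2][0] += eye * sep * P[2][3];
    S[3][0] += eye * sep * P[3][3];
    // constant -eye*sep*convergence term (multiplies view.w == 1)
    S[3][0] -= eye * sep * conv;
    return S;
}
[/code]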
Now OpenGL has built-in variables;) One of them is the "gl_ProjectionMatrix" variable that should contain the Projection Matrix that is modified above ;) (So this Projection Matrix is already in Stereo)
Calculating the inverse of this matrix can't be done with the inverse() function, since it is not available in GLSL #version 1.20 (but we can manually calculate it). Now I am unsure if I need the inverse of the Stereo-ProjectionMatrix or just the inverse of the Mono-ProjectionMatrix?
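Side note: for the correction formula itself a full inverse may not even be needed - assuming a standard perspective projection (no skew in the first row), the [0][0] element of the inverse is just the reciprocal:
[code]
// inverse(P)[0][0] == 1.0 / P[0][0] for a standard projection matrix,
// so no full inverse() is required for the correction term:
float invP00 = 1.0 / gl_ProjectionMatrix[0].x;
[/code]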
For example, this is a shader that is responsible for some shadows (notice the #ifdef blocks - it contains both the Vertex and Pixel source code).
I tried a couple of things here, but nothing worked. What is interesting is that at convergence 0 (infinity) everything looks perfect;)) But increasing the convergence breaks the rendering:-s
This is another shader responsible for some reflections:
I compared them with your shaders (HLSL) and indeed the only thing that is "written" in the shaders is an MVP. However, like I said above, I also have access to the built-in variable holding the Projection Matrix;)
Any ideas? :)) Thanks in advance
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
ok, I've seen that referenced in a bunch of Unity shaders. Be aware that for shadows that might only be valid for point and spot lights that are drawn in the world, since directional and physical lights draw a full screen quad and so the active projection matrix will just be the identity matrix (I think - I never did confirm it was actually an identity matrix, but it definitely wasn't anything useful). The directional lighting shader will have the real inverse projection matrix in unity_CameraInvProjection instead, but the physical lighting shader won't have access to it anywhere.
You only need to determine the horizontal FOV, so either will work.
This can't be the right shader - that's just outputting 0 from the pixel shader.
I looked at the Unity shaders' Cg source code (it's on their download page) for where unity_LightShadowBias is used like that, and following the trail through a few include files and pre-processor macros, that shader is one of the Unity 5 "Standard" shaders. That particular instance appears to just be a shadow caster, and is not the shader that actually draws the shadows to the screen that you need to fix.
If you find the directional shadow shader, you should see code like this (this is Cg, so the code you see will look a bit different as it has been through a Cg to GLSL compiler):
And you will need to add a view-space uncorrection (that is, subtract the usual formula multiplied by unity_CameraInvProjection._m00) to vpos before it is used to calculate wpos.
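As a concrete sketch of that uncorrection (the names here are assumptions on my part - vpos/wpos follow the Cg naming, the g_* uniforms are helifax's wrapper uniforms, and in Cg/HLSL the unity_CameraInvProjection[0].x element would be written _m00):
[code]
uniform float g_eye, g_eye_separation, g_convergence; // wrapper uniforms
uniform mat4 unity_CameraInvProjection; // inverse projection from Unity
uniform mat4 _CameraToWorld;

vec3 viewToWorldStereo(vec4 vpos) // vpos: reconstructed view-space position
{
    // Subtract the usual stereo formula scaled by invProj[0][0]:
    vpos.x -= g_eye * g_eye_separation * (vpos.z - g_convergence)
              * unity_CameraInvProjection[0].x;
    // ...and only then transform to world space:
    return (_CameraToWorld * vpos).xyz;
}
[/code]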
That's not terribly surprising - when convergence is 0 everything is on a flat plane out at infinity. I found the same thing in the early days when I was trying to fix shadows in World of Diving - later when I looked at Dreamfall I realised it was because I was trying to fix them in the complete wrong spot (you can probably still find this in the early history of my 3d-fixes repository).
Well, I don't think this is a standard Unity shader at all - I can't find any references to "_InsideMirrorLighting" at all in the Unity shaders, so it must be a custom or third party shader. At a quick glance it looks like it might need the same type of correction as the lighting shaders (i.e. a view-space [un]correction to tmpvar_5 before the _CameraToWorld matrix is applied), but I'd need to study it in more detail to be sure.
ok, that should be fairly straightforward since that's calculated before compilation, and that's the form Unity stores the OpenGL shaders in :)
Only thing I'm not sure on is the difference / significance of the "opengl", "glcore", "gles" and "gles3" variants that Unity stores... any idea?
3DMigoto uses an FNV-like hash of the shader bytecode (I say FNV-like because it's not seeded correctly, but whatever - it works). Since Unity stores the DX11 bytecode directly in their compiled .shader files (more or less - they use a simple encoding to turn the upper & lower 4 bits of each byte into the letters 'a' to 'p') I can easily calculate the same hash from that. Helix Mod also calculates the CRC32 of the bytecode, but Unity stores it as assembly so I call out to an external tool to assemble the shaders then I calculate the CRC32.
So taking the second shader as example;) That one has the _CameraToWorld:
So, basically what I need here to do is:
[code]
// Apply the view-space uncorrection
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
// Now we use it to get the world Pos
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]
However, I have a few doubts:
- Should we first multiply tmpvar_5 with gl_ProjectionMatrix ?
- Then apply the above correction:
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) * gl_ProjectionMatrix[0].x;
- "Cast" it back on view by tmpvar_5 * inverse(gl_ProjectionMatrix) ?
Also, I had this kind of issue in SOMA as well;)
The way I fixed it there was: basically I needed to calculate a factor that takes into account both Vertical & Horizontal FOV in regards to the Aspect Ratio. The Projection Matrix has both FOVs, and the Aspect Ratio is easy to calculate as well.
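For reference, a minimal sketch of pulling both FOV factors and the aspect ratio straight out of a standard symmetric projection matrix (an assumption of mine - this is the textbook layout, not SOMA's exact code):
[code]
// For a symmetric perspective projection:
//   P[0].x = 1 / (aspect * tan(fovY/2)),   P[1].y = 1 / tan(fovY/2)
float fovXFactor = gl_ProjectionMatrix[0].x;
float fovYFactor = gl_ProjectionMatrix[1].y;
float aspect = fovYFactor / fovXFactor; // == width / height
[/code]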
I wonder if I don't need to do the same thing here;)
In any case I still think I need to multiply by Projection before applying the correction. Is this correct?
I notice your filenames have a number prefixing the shaders which I haven't duplicated - I'm guessing that's something like a shader index number? Does it matter if it is missing?
I also noticed in the Wolfenstein fix that not all the CRC32s were padded to 8 characters, so I haven't either.
Unity's .shader files store vertex and pixel shaders separately, but it looks like the OpenGL "pixel" shaders are always blank as they are actually combined with the vertex shaders... the result is that all the shaders extracted by this tool are currently marked as vertex shaders. How does this work in OpenGL if both are combined? I see in your fixes you do have both vertex and pixel shaders, so I assume that after compilation they are two separate things (which is the same in HLSL, but we typically don't see that since we are working with shaders that have already been compiled). Are the CRCs for corresponding vertex and pixel shaders going to be the same?
Here's a list of shaders that it extracted from the directional shadow shader of World of Diving, which uses the standard Unity shaders - I'm hoping if I got it right you might see some of these CRCs dumped from the game. You should generally run the tool yourself though, as some games do replace the lighting shaders with custom ones, and these have changed a bit in different Unity versions.
Not quite - you need to divide by gl_ProjectionMatrix[0].x since that's the projection matrix and not the inverse projection matrix.
Of course, since this isn't one of the lighting shaders I've fixed previously I can't be sure this is the right pattern, and it also depends on gl_ProjectionMatrix being valid, though that is a reasonable assumption given you said this is a reflection.
If you converted it to projection space coordinates and back again, you would use the normal formula, not the view-space variant with the extra multiply.
But we shouldn't need to, because the multiplication by the inverse projection matrix[0].x (or division by projection matrix[0].x) is mathematically a simplification of that (provided we are working in view-space coordinates and not world-space).
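To spell out the equivalence (my own derivation, using the uniform and variable names from the shader above):
[code]
// Clip-space form of the stereo correction:
//   clip.x -= g_eye * g_eye_separation * (depth - g_convergence);
// Since clip.x = gl_ProjectionMatrix[0].x * view.x (plus terms that do
// not involve view.x), shifting clip.x by some amount D is the same as
// shifting view.x by D / gl_ProjectionMatrix[0].x. Hence, in view space:
tmpvar_5.x -= g_eye * g_eye_separation * (tmpvar_5.z - g_convergence)
              / gl_ProjectionMatrix[0].x;
[/code]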
The way in which you can handle the shaders in OpenGL can be so different that it will "produce" many variants.
According to the "Best Practice Rules":
- Create Shader Program => OGL gives ProgId 100
- Create Shader Vertex => OGL gives ProgId 101
- Create Shader Pixel => OGL gives ProgId 102
- Get Shader Source Code and load it in the Vertex/Pixel
- Compile Vertex/Pixel Shader
- Link Vertex/Pixel Shaders with Program. In our case 100.
According to the Rules, once the Program is COMPLETE delete the shaders:
- Detach Vertex/Pixel Shaders.
- Delete Vertex/Pixel Shaders. They are now in ASM form and attached to the Program.
- This will release IDs 101 and 102.
Now, different engines respect more or less the rules:
- Rage -> Respects the rules. That's why every time you create a shader you get the same ID for Vertex/Pixel, as it was freed.
- Soma -> Doesn't delete the shaders afterwards, so the ID increases every time. After compiling, it uses the same existing shaders (vertex+pixel) to create new programs with different combinations of Vertex+Pixel IDs. This was a pain in the ass, as I couldn't rely only on the CRC of the current shader (the same pixel shader needed correction in one combination but not in all of them). Instead, I had to make a link between Vertex_CRC + Pixel_CRC.
(If you look in the fix you will see some shaders with a double CRC: 10_Pixel_CRC-Pixel_CRC-Of-attached-Vertex).
- Unity 5:
- They keep the sourceCode in one FILE. But when they attach the source code to the shader, they filter based on type and write the correct "#define" in the sourceCode of that Shader;) (see the sketch below)
That is why you just dump the vertex shaders (they are probably generated first). My wrapper dumps both Vertex + Pixel (with different CRCs, as the "#define" differs in the code).
Since I dump them as separate files (Ps, Vs), when I swap the source code I look for the correct file to read;)) So it shouldn't be a problem.
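A minimal sketch of that single-file layout (the shader bodies here are placeholders of mine - only the #ifdef structure is the point):
[code]
// One GLSL file shared by both stages; the engine prepends
// "#define VERTEX" or "#define FRAGMENT" before compiling each stage.
#ifdef VERTEX
void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }
#endif
#ifdef FRAGMENT
void main() { gl_FragColor = vec4(1.0); }
#endif
[/code]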
In Wolfenstein I had a bug in the wrapper: basically I was calculating the CRC32 over the modified source code (instead of the unaltered source code). That is fixed now;)
It doesn't matter that the CRC is not exactly 8 digits;) I didn't even realise it is not padded to 8 digits:)) It shouldn't matter, since as a number 01 == 1 ;))
Big thx for the crc dump;)) I'll give it a look at home and see if I can find out something or how to fix the shadows;))
RIGHT!!! That makes perfect sense! Now I remember why we do that one;)) I was a bit confused for a second there;))
Big thx! Will defo check it out and see if it works;))
Looking at the shader you posted above, I see this:
[code]
#version 120
uniform float g_pixelEnabled;
uniform float g_eye;
uniform float g_eye_separation;
uniform float g_convergence;
uniform vec4 g_custom_params;
uniform vec4 g_screeninfo;
#define FRAGMENT
[/code]
I've seen the "#version 120" in the .shader files, and from what you are saying I'll need to insert the "#define FRAGMENT". I'm guessing everything else between those two are lines inserted by your wrapper that I won't need to worry about? There's also a !!GLES3 (or !!GLES, or !!GLSL, or !!GL2x) in the .shader files that I have already stripped out.
Notably the "glcore" and "gles3" variants have "#version 100" (or "#version 300 es" for gles3) after the "#ifdef VERTEX" / "#ifdef FRAGMENT" instead of at the top of the shader, so I guess they would be a little different, but I'm not sure if they are even relevant for PC?
That "#version 120" string seems to match the "opengl" variants I'm seeing, and the "gles" variants use "#version 100" at the top.
Yes, all the uniforms there are inserted by my wrapper. I use them to basically stereorize the shader;) Normally you should see the "Stereo" formula after the last gl_Position found in the file. In this case however, it is not there. This is because I stereorize the Projection from the CPU side;)
Yes, the gles variants are OpenGL ES, as the name suggests, and are intended for mobile devices. You can discard those on PC;)
The #version 100 tells which version of GLSL to use. Version 100 is basically the first and very, very old.
Even version 120 is very old now: https://en.wikipedia.org/wiki/OpenGL_Shading_Language
Basically their OpenGL renderer is very, very old;) or maybe better said: made to work with very old hardware;)