ok, last update for the night - I've pushed an update that inserts the "#define VERTEX" and "#define FRAGMENT" in each of the shaders and duplicates the vertex shaders to create pixel shaders.
So, all going well these hashes will hopefully match now - let me know if they do or not:
[code]
ShaderGL/Hidden_Internal-PrePassCollectShadows/glcore:
Pixel_167bdb4a.glsl
Pixel_52a787bc.glsl
Pixel_5691321a.glsl
Pixel_9bacacbf.glsl
Pixel_a0a1c638.glsl
Pixel_b5a1aa13.glsl
Pixel_c760b92b.glsl
Pixel_fd61408b.glsl
Vertex_18f141fd.glsl
Vertex_1f44cd95.glsl
Vertex_20027ff2.glsl
Vertex_2e69d72c.glsl
Vertex_be9882cc.glsl
Vertex_c44e271b.glsl
Vertex_cb137dbd.glsl
Vertex_e16539bf.glsl
ShaderGL/Hidden_Internal-PrePassCollectShadows/opengl:
Pixel_278f4f9b.glsl
Pixel_2b060900.glsl
Pixel_37aaff1f.glsl
Pixel_3ed25f2.glsl
Pixel_63f3561f.glsl
Pixel_72172486.glsl
Pixel_d9875163.glsl
Pixel_e637d653.glsl
Vertex_1295b1a9.glsl
Vertex_3316701f.glsl
Vertex_4879c83.glsl
Vertex_7a124c88.glsl
Vertex_b250ee52.glsl
Vertex_d9af0b0d.glsl
Vertex_e24f04a9.glsl
Vertex_e90a74e2.glsl
[/code]
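For context, the combined Unity GL source guards both stages behind those defines, which is why each extracted file needs one of them prepended. A hand-written sketch of the pattern (not one of the dumped shaders above):
[code]
// Hand-written sketch only - not one of the dumped shaders listed above.
// The extractor now prepends one of these two lines per output file:
#define VERTEX
//#define FRAGMENT

#ifdef VERTEX
void main ()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
#endif

#ifdef FRAGMENT
void main ()
{
    gl_FragData[0] = vec4(1.0);
}
#endif
[/code]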
I dropped the gles and gles3 variants since they aren't relevant for us (you can of course still extract them with --deep-dir, but they won't be hashed).
Still not sure what the difference is between glcore and opengl.
[quote] Basically their OpenGL renderer is very very old;) or maybe is best said: made to work with very old hardware;)[/quote]Or maybe it's for mac support? I've heard from various devs that Apple basically stopped updating OpenGL one version before it transformed into a decent API, so it's apparently a really horrible platform to support.
[quote="DarkStarSword"]ok, last update for the night - I've pushed an update that inserts the "#define VERTEX" and "#define FRAGMENT" in each of the shaders and duplicates the vertex shaders to create pixel shaders.
So, all going well these hashes will hopefully match now - let me know if they do or not:
[code]
ShaderGL/Hidden_Internal-PrePassCollectShadows/glcore:
Pixel_167bdb4a.glsl
Pixel_52a787bc.glsl
Pixel_5691321a.glsl
Pixel_9bacacbf.glsl
Pixel_a0a1c638.glsl
Pixel_b5a1aa13.glsl
Pixel_c760b92b.glsl
Pixel_fd61408b.glsl
Vertex_18f141fd.glsl
Vertex_1f44cd95.glsl
Vertex_20027ff2.glsl
Vertex_2e69d72c.glsl
Vertex_be9882cc.glsl
Vertex_c44e271b.glsl
Vertex_cb137dbd.glsl
Vertex_e16539bf.glsl
ShaderGL/Hidden_Internal-PrePassCollectShadows/opengl:
Pixel_278f4f9b.glsl
Pixel_2b060900.glsl
Pixel_37aaff1f.glsl
Pixel_3ed25f2.glsl
Pixel_63f3561f.glsl
Pixel_72172486.glsl
Pixel_d9875163.glsl
Pixel_e637d653.glsl
Vertex_1295b1a9.glsl
Vertex_3316701f.glsl
Vertex_4879c83.glsl
Vertex_7a124c88.glsl
Vertex_b250ee52.glsl
Vertex_d9af0b0d.glsl
Vertex_e24f04a9.glsl
Vertex_e90a74e2.glsl
[/code]
I dropped the gles and gles3 variants since they aren't relevant for us (you can of course still extract them with --deep-dir, but they won't be hashed).
Still not sure what the difference is between glcore and opengl.
[quote] Basically their OpenGL renderer is very very old;) or maybe is best said: made to work with very old hardware;)[/quote]Or maybe it's for mac support? I've heard from various devs that Apple basically stopped updating OpenGL one version before it transformed into a decent API, so it's apparently a really horrible platform to support.
[/quote]
Awesome!!! I'll have to check it out and see;))
The diff between glcore and opengl:
- GL Core specifies that the shaders MUST adhere to OpenGL Core Profile. (so no funny ARB or EXT variants of the functions that are manufacturer specific).
- Opengl is the core profile + extended profile;)
On the game I tried there aren't any GLCORE variants, only the OpenGL (and the renderer does use the ARB extension ^_^). Opengl is the best one for compatibility across the board and vendors;)
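As a rough illustration (hand-written sketch, not taken from the game), the difference mostly shows up in the version/profile declaration and which built-ins the GLSL is allowed to use:
[code]
// Hand-written sketch, not extracted from a game.
// A "glcore"-style shader must stick to the core profile and pass everything
// in explicitly (made-up names below):
#version 150 core
in vec4 in_Vertex;
uniform mat4 MyModelViewProjection;
void main ()
{
    gl_Position = MyModelViewProjection * in_Vertex;
}
// An "opengl" (compatibility) variant is free to keep using legacy built-ins
// like gl_Vertex and gl_ModelViewProjectionMatrix, plus vendor ARB/EXT extensions.
[/code]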
Hmm..
I was able to extract the resources. However when I run "extract_unity_shaders.py" I get this error:
[code]
D:\extracted\resources>d:\extract_unity_shaders.py *.shader
Parsing BlendModesOverlay.shader...
Traceback (most recent call last):
File "D:\extract_unity_shaders.py", line 916, in <module>
sys.exit(main())
File "D:\extract_unity_shaders.py", line 870, in main
tree = parse_keywords(tree, filename=os.path.basename(filename), args=args)
File "D:\extract_unity_shaders.py", line 262, in parse_keywords
item = keywords[token](token, tokens, parent, args)
File "D:\extract_unity_shaders.py", line 103, in __init__
return self.parse(tokens, parent, args)
File "D:\extract_unity_shaders.py", line 128, in parse
Tree.__init__(self, parse_keywords(next_interesting(tokens), parent=self, args=args))
File "D:\extract_unity_shaders.py", line 262, in parse_keywords
item = keywords[token](token, tokens, parent, args)
File "D:\extract_unity_shaders.py", line 103, in __init__
return self.parse(tokens, parent, args)
File "D:\extract_unity_shaders.py", line 153, in parse
Tree.__init__(self, parse_keywords(next_interesting(tokens), parent=self, args=args))
File "D:\extract_unity_shaders.py", line 262, in parse_keywords
item = keywords[token](token, tokens, parent, args)
File "D:\extract_unity_shaders.py", line 103, in __init__
return self.parse(tokens, parent, args)
File "D:\extract_unity_shaders.py", line 153, in parse
Tree.__init__(self, parse_keywords(next_interesting(tokens), parent=self, args=args))
File "D:\extract_unity_shaders.py", line 262, in parse_keywords
item = keywords[token](token, tokens, parent, args)
File "D:\extract_unity_shaders.py", line 103, in __init__
return self.parse(tokens, parent, args)
File "D:\extract_unity_shaders.py", line 128, in parse
Tree.__init__(self, parse_keywords(next_interesting(tokens), parent=self, args=args))
File "D:\extract_unity_shaders.py", line 262, in parse_keywords
item = keywords[token](token, tokens, parent, args)
File "D:\extract_unity_shaders.py", line 103, in __init__
return self.parse(tokens, parent, args)
File "D:\extract_unity_shaders.py", line 128, in parse
Tree.__init__(self, parse_keywords(next_interesting(tokens), parent=self, args=args))
File "D:\extract_unity_shaders.py", line 255, in parse_keywords
handle_shader_asm(token, parent, strip_quotes(token))
File "D:\extract_unity_shaders.py", line 213, in handle_shader_asm
add_shader_hash(parent)
File "D:\extract_unity_shaders.py", line 641, in add_shader_hash
return add_shader_hash_fnv(sub_program)
File "D:\extract_unity_shaders.py", line 607, in add_shader_hash_fnv
bin = decode_unity_d3d11_shader(sub_program.shader_asm)
File "D:\extract_unity_shaders.py", line 715, in decode_unity_d3d11_shader
print('root12: 0x%08x' % struct.unpack('<I', root12)[0])
struct.error: unpack requires a string argument of length 4
[/code]
Any idea why? This is on Windows + Python 2.7 installed:) No Cygwin;) Maybe that's why??
On a separate note:
This didn't work - basically, what I tried to do here is:
[code]
// Apply the view-space uncorrection
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) / gl_ProjectionMatrix[0].x;
// Now we use it to get the world Pos
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]
It does something but doesn't correct the problem:(
I still basically have this issue:
- At low convergence (up to 0.3) everything is rendered perfectly.
- At higher convergence things start to break:
- If you are far away the shadows render correctly
- If you get close at some point it will basically separate into 2 (like HUGE separation + convergence is applied).
- If you get even closer, it will then SNAP into the correct position again.
I remember seeing the same thing in Dark Souls 2, where the light will look ok far, will break in between and it will look OK close.
Also, the gl_ProjectionMatrix is valid in the shader;)
I made a quick test:
- Multiply the Pos * gl_ProjectionMatrix => everything was broken
- Multiply the Result with inverse(gl_ProjectionMatrix) => everything was back;)
So definitely not an Identity matrix and not zero;)
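In shader terms the quick test was roughly this (sketch only; tmpvar_5 is the view-space position variable from the snippet earlier in this post):
[code]
// Sketch of the quick test described above (inverse() needs GLSL 1.40+):
vec4 projected = gl_ProjectionMatrix * tmpvar_5;            // => everything broken
vec4 restored  = inverse(gl_ProjectionMatrix) * projected;  // => everything back to normal
// So gl_ProjectionMatrix is bound, and is neither identity nor zero.
[/code]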
[quote="helifax"]I was able to extract the resources. However when I run "extract_unity_shaders.py" I get this error:
<snip>
Any idea why? This is on Windows + Python 2.7 installed:) No Cygwin;) Maybe that's why??
[/quote]That script requires Python 3 (tested with 3.4), but should work fine without cygwin
[quote]It does something but doesn't correct the problem:(
I still basically have this issue:
- At low convergence (up to 0.3) everything is rendered perfectly.
- At higher convergence things start to break:
- If you are far away the shadows render correctly
- If you get close at some point it will basically separate into 2 (like HUGe separation + convergence is applied).
- If you get even closer, it will then SNAP into the correct position again.
I remember seeing the same thing in Dark Souls 2, where the light will look ok far, will break in between and it will look OK close.[/quote]I think the snapping happens when the game switches from one resolution shadow map to another if the shadows are still misaligned. Once they are aligned accurately there shouldn't be any snapping and if there is that suggests the adjustment is not 100% accurate. I saw this type of thing happening with some shadows in Montague's Mount early on before I had worked out the accurate adjustment... IIRC the missing part was the adjustment to the ray in the vertex shader for point & spot lights.
[quote]
Also, the gl_ProjectionMatrix is valid in the shader;)
I made a quick test:
- Multiply the Pos * gl_ProjectionMatrix => everything was broken
- Multiply the Result with inverse(gl_ProjectionMatrix) => everything was back;)
So definitely not an Identity matrix and not zero;)
[/quote]Remind me which shader you are looking at - was this the reflection shader, or one of the lighting shaders?
[quote="DarkStarSword"][quote="helifax"]I was able to extract the resources. However when I run "extract_unity_shaders.py" I get this error:
<snip>
Any idea why? This is on Windows + Python 2.7 installed:) No Cygwin;) Maybe that's why??
[/quote]That script requires Python 3 (tested with 3.4), but should work fine without cygwin
[quote]It does something but doesn't correct the problem:(
I still basically have this issue:
- At low convergence (up to 0.3) everything is rendered perfectly.
- At higher convergence things start to break:
- If you are far away the shadows render correctly
- If you get close at some point it will basically separate into 2 (like HUGe separation + convergence is applied).
- If you get even closer, it will then SNAP into the correct position again.
I remember seeing the same thing in Dark Souls 2, where the light will look ok far, will break in between and it will look OK close.[/quote]I think the snapping happens when the game switches from one resolution shadow map to another if the shadows are still misaligned. Once they are aligned accurately there shouldn't be any snapping and if there is that suggests the adjustment is not 100% accurate. I saw this type of thing happening with some shadows in Montague's Mount early on before I had worked out the accurate adjustment... IIRC the missing part was the adjustment to the ray in the vertex shader for point & spot lights.
[quote]
Also, the gl_ProjectionMatrix is valid in the shader;)
I made a quick test:
- Multiply the Pos * gl_ProjectionMatrix => everything was broken
- Multiply the Result with inverse(gl_ProjectionMatrix) => everything was back;)
So definitely not an Identity matrix and not zero;)
[/quote]Remind me which shader you are looking at - was this the reflection shader, or one of the lighting shaders?
[/quote]
Aha;) THX I'll update the Python version;))
Very interesting stuff I came across (all built in variables that can be accessed!):
http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
All the variables can be accessed. In theory if a shader is missing one, you can define it as a uniform _type _variable and then use it;)) need to test this;)
In any case this is the one that I was looking for:
_ProjectionParams | float4 | x is 1.0 (or –1.0 if currently rendering with a flipped projection matrix), y is the camera’s near plane, z is the camera’s far plane and w is 1/FarPlane.
I see it in my reflection shader and I WONDER if this isn't the reason for the SNAPPING;) I wonder if it changes based on the distance or if it is constant per shader....
The shader that I am currently trying to fix is (the lighting one):
[code]
#ifdef FRAGMENT
uniform vec3 _WorldSpaceCameraPos;
uniform vec4 _ProjectionParams;
uniform vec4 _ZBufferParams;
uniform sampler2D _CameraDepthTexture;
uniform vec4 _LightPos;
uniform vec4 _LightColor;
uniform mat4 _CameraToWorld;
uniform sampler2D _LightTextureB0;
uniform vec4 unity_LightGammaCorrectionConsts;
uniform sampler2D _CameraGBufferTexture0;
uniform sampler2D _CameraGBufferTexture1;
uniform sampler2D _CameraGBufferTexture2;
uniform float _GlobalSpecularScale;
uniform float _InsideMirrorLighting;
varying vec4 xlv_TEXCOORD0;
varying vec3 xlv_TEXCOORD1;
void main ()
{
vec4 res_1;
vec3 specColor_2;
vec3 tmpvar_3;
vec2 tmpvar_4;
tmpvar_4 = (xlv_TEXCOORD0.xy / xlv_TEXCOORD0.w);
// View Space Position
vec4 tmpvar_5;
// Why is w == 1 and not the w from below???
tmpvar_5.w = 1.0;
// Funky math to calculate the position.
// _ProjectionParams.z is the FAR plane.
tmpvar_5.xyz = ((xlv_TEXCOORD1 * (_ProjectionParams.z / xlv_TEXCOORD1.z)) * (1.0/((
(_ZBufferParams.x * texture2D (_CameraDepthTexture, tmpvar_4).x)
+ _ZBufferParams.y))));
// World Space Position
// Correction should take place here
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
vec3 tmpvar_7;
tmpvar_7 = (tmpvar_6 - _LightPos.xyz);
vec3 tmpvar_8;
tmpvar_8 = -(normalize(tmpvar_7));
vec4 tmpvar_9;
tmpvar_9 = texture2D (_CameraGBufferTexture0, tmpvar_4);
vec4 tmpvar_10;
tmpvar_10 = texture2D (_CameraGBufferTexture1, tmpvar_4);
tmpvar_3 = (_LightColor.xyz * texture2D (_LightTextureB0, vec2((dot (tmpvar_7, tmpvar_7) * _LightPos.w))).w);
vec3 tmpvar_11;
tmpvar_11 = normalize(((texture2D (_CameraGBufferTexture2, tmpvar_4).xyz * 2.0) - 1.0));
float tmpvar_12;
tmpvar_12 = max (0.0, dot (tmpvar_11, tmpvar_8));
specColor_2 = (tmpvar_10.xyz * (_GlobalSpecularScale * mix (1.0, 0.5, _InsideMirrorLighting)));
vec3 viewDir_13;
viewDir_13 = -(normalize((tmpvar_6 - _WorldSpaceCameraPos)));
float tmpvar_14;
tmpvar_14 = (1.0 - tmpvar_10.w);
vec3 tmpvar_15;
vec3 inVec_16;
inVec_16 = (tmpvar_8 + viewDir_13);
tmpvar_15 = (inVec_16 * inversesqrt(max (0.001,
dot (inVec_16, inVec_16)
)));
float tmpvar_17;
tmpvar_17 = max (0.0, dot (tmpvar_11, viewDir_13));
float tmpvar_18;
tmpvar_18 = max (0.0, dot (tmpvar_8, tmpvar_15));
float tmpvar_19;
tmpvar_19 = ((tmpvar_14 * tmpvar_14) * unity_LightGammaCorrectionConsts.w);
float tmpvar_20;
float tmpvar_21;
tmpvar_21 = (10.0 / log2((
((1.0 - tmpvar_14) * 0.968)
+ 0.03)));
tmpvar_20 = (tmpvar_21 * tmpvar_21);
float x_22;
x_22 = (1.0 - tmpvar_12);
float x_23;
x_23 = (1.0 - tmpvar_17);
float tmpvar_24;
tmpvar_24 = (0.5 + ((2.0 * tmpvar_18) * (tmpvar_18 * tmpvar_14)));
float x_25;
x_25 = (1.0 - tmpvar_18);
vec3 tmpvar_26;
tmpvar_26 = ((tmpvar_9.xyz * (tmpvar_3 *
(((1.0 + (
(tmpvar_24 - 1.0)
*
((x_22 * x_22) * ((x_22 * x_22) * x_22))
)) * (1.0 + (
(tmpvar_24 - 1.0)
*
((x_23 * x_23) * ((x_23 * x_23) * x_23))
))) * tmpvar_12)
)) + ((
max (0.0, (((
(1.0/((((
(tmpvar_12 * (1.0 - tmpvar_19))
+ tmpvar_19) * (
(tmpvar_17 * (1.0 - tmpvar_19))
+ tmpvar_19)) + 0.0001)))
*
(pow (max (0.0, dot (tmpvar_11, tmpvar_15)), tmpvar_20) * ((tmpvar_20 + 1.0) * unity_LightGammaCorrectionConsts.y))
) * tmpvar_12) * unity_LightGammaCorrectionConsts.x))
* tmpvar_3) * (specColor_2 +
((1.0 - specColor_2) * ((x_25 * x_25) * ((x_25 * x_25) * x_25)))
)));
vec4 tmpvar_27;
tmpvar_27.w = 1.0;
tmpvar_27.xyz = tmpvar_26;
res_1 = tmpvar_27;
if ((tmpvar_9.w < 0.9960784)) {
res_1.xyz = (tmpvar_26 + ((
((tmpvar_9.xyz * tmpvar_3) * max (0.0, dot (-(tmpvar_11), tmpvar_8)))
*
(1.0 - tmpvar_9.w)
) * vec3(0.8, 1.0, 0.6)));
};
gl_FragData[0] = res_1;
}
#endif
[/code]
[quote="helifax"]Very interesting stuff I came across (all built in variables that can be accessed!):
http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
All the variables can be accessed. In theory if a shader is missing one, you can define it as a uniform _type _variable and then use it;)) need to test this;)[/quote]I've already been down that path, and it will only work if we replace the appropriate .shader resource in the .asset files (not sure if there are any tools to do that), and won't work if we replace the shader with any of our wrappers. When Unity compiles a shader from Cg into GLSL/DX11 bytecode/DX9 asm it notes which were used and records it in the .shader file and at runtime it will only bind the parameters that are explicitly listed - that was a big motivation for extract_unity_shaders.py to find out what Unity would have bound (if you're lucky something might still be in a register used by the previous shader, but that's unreliable as you can't be sure the draw order will always be consistent - been down that path as well).
[quote]In any case this is the one that I was looking for:
_ProjectionParams | float4 | x is 1.0 (or –1.0 if currently rendering with a flipped projection matrix), y is the camera’s near plane, z is the camera’s far plane and w is 1/FarPlane.[/quote]You can also derive the near & far clipping planes from _ZBufferParams with a bit of algebra (I do this for the auto crosshair depth since I need to copy _ZBufferParams to the UI shaders anyway). It's not documented, but the Unity devs posted details of it here:
http://forum.unity3d.com/threads/_zbufferparams-values.39332/
So, 1/_ZBufferParams.w will be the near clipping plane, and the far clipping plane will be 1/(_ZBufferParams.z + _ZBufferParams.w).
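In shader terms (sketch only, assuming the non-reversed-Z layout described in that thread):
[code]
// Sketch only - recover the clipping planes from _ZBufferParams, assuming the
// layout described in the Unity forum thread linked above (not reversed-Z):
// _ZBufferParams = (1 - far/near, far/near, (1 - far/near)/far, (far/near)/far)
uniform vec4 _ZBufferParams;
float NearPlane () { return 1.0 / _ZBufferParams.w; }
float FarPlane ()  { return 1.0 / (_ZBufferParams.z + _ZBufferParams.w); }
[/code]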
[quote]I see it in my reflection shader and I WONDER if this isn't the reason for the SNAPPING;) I wonder if it changes based on the distance or if it is constant per shader....[/quote]I doubt it - the clipping planes might change for a cutscene or a different level, but I wouldn't expect them to change while just walking around, and they don't come into the usual patterns we use to fix Unity games (one exception - I did use them in The Last Tinker).
[quote]The shader that I am currently trying to fix is (the lighting one)[/quote]I'd be interested to know which shader that matches once we get extract_unity_shaders working, but looking back at the vertex shader it looks like it's closer to the point/spot/physical lighting shaders and may also need an adjustment to the vertex shader. If it's a non-full screen point or spot light it should be something like:
[code]
void main ()
{
vec4 tmpvar_1;
vec3 tmpvar_2;
tmpvar_1 = (gl_ModelViewProjectionMatrix * gl_Vertex);
// Relocated from below:
gl_Position = tmpvar_1;
// Adjust o.uv, which is in clip-space coordinates:
// This should only be done for in-world effects, not full-screen quads
// In DX I check tmpvar_1.w == 1 to indicate it's full screen
// In GL you might be able to use the fact you get separate vertex shaders
tmpvar_1.x += g_eye * g_eye_separation * (tmpvar_1.w - g_convergence);
vec4 o_3;
vec4 tmpvar_4;
tmpvar_4 = (tmpvar_1 * 0.5);
vec2 tmpvar_5;
tmpvar_5.x = tmpvar_4.x;
tmpvar_5.y = (tmpvar_4.y * _ProjectionParams.x);
o_3.xy = (tmpvar_5 + tmpvar_4.w);
o_3.zw = tmpvar_1.zw;
tmpvar_2 = ((gl_ModelViewMatrix * gl_Vertex).xyz * vec3(-1.0, -1.0, 1.0));
// Adjust o.ray, which is in view-space coordinates:
// Again, this must only be applied when the shader is drawing an in-world
// effect, not a full-screen quad.
// Use the same depth as above from tmpvar_1.w, not tmpvar_2.z!
tmpvar_2.x += g_eye * g_eye_separation * (tmpvar_1.w - g_convergence) / gl_ProjectionMatrix[0].x;
vec3 tmpvar_6;
tmpvar_6 = mix (tmpvar_2, gl_Normal, vec3(_LightAsQuad));
tmpvar_2 = tmpvar_6;
//gl_Position = tmpvar_1;
xlv_TEXCOORD0 = o_3;
xlv_TEXCOORD1 = tmpvar_6;
}
[/code]
And the pixel shader should have the adjustment we discussed earlier.
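Roughly along these lines (sketch only, reusing the variable names from the fragment shader you posted; the sign/eye convention depends on how your wrapper defines g_eye):
[code]
// Sketch only - the fragment shader side of the adjustment, applied to the
// view-space position before it goes through _CameraToWorld (variable names
// taken from the shader posted above; the sign may need flipping):
tmpvar_5.x -= g_eye * g_eye_separation * (tmpvar_5.z - g_convergence) / gl_ProjectionMatrix[0].x;
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
[/code]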
[quote="DarkStarSword"][quote="helifax"]Very interesting stuff I came across (all built in variables that can be accessed!):
http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
All the variables can be accessed. In theory if a shader is missing one, you can define it as a uniform _type _variable and then use it;)) need to test this;)[/quote]I've already been down that path, and it will only work if we replace the appropriate .shader resource in the .asset files (not sure if there are any tools to do that), and won't work if we replace the shader with any of our wrappers. When Unity compiles a shader from Cg into GLSL/DX11 bytecode/DX9 asm it notes which were used and records it in the .shader file and at runtime it will only bind the parameters that are explicitly listed - that was a big motivation for extract_unity_shaders.py to find out what Unity would have bound (if you're lucky something might still be in a register used by the previous shader, but that's unreliable as you can't be sure the draw order will always be consistent - been down that path as well).
[quote]In any case this is the one that I was looking for:
_ProjectionParams | float4 | x is 1.0 (or –1.0 if currently rendering with a flipped projection matrix), y is the camera’s near plane, z is the camera’s far plane and w is 1/FarPlane.[/quote]You can also derive the near & far clipping planes from _ZBufferParams with a bit of algebra (I do this for the auto crosshair depth since I need to copy _ZBufferParams to the UI shaders anyway). It's not documented, but the Unity devs posted details of it here:
http://forum.unity3d.com/threads/_zbufferparams-values.39332/
So, 1/_ZBufferParams.w will be the near clipping plane, and the far clipping plane will be 1/(_ZBufferParams.z + _ZBufferParams.w).
[quote]I see it in my reflection shader and I WONDER if this isn't the reason for the SNAPPING;) I wonder if it changes based on the distance or if it is constant per shader....[/quote]I doubt it - the clipping planes might change for a cutscene or a different level, but I wouldn't expect them to change while just walking around, and they don't come into the usual patterns we use to fix Unity games (one exception - I did use them in The Last Tinker).
[quote]The shader that I am currently trying to fix is (the lighting one)[/quote]I'd be interested to know which shader that matches once we get extract_unity_shaders working, but looking back at the vertex shader it looks like it's closer to the point/spot/physical lighting shaders and may also need an adjustment to the vertex shader. If it's a non-full screen point or spot light it should be something like:
[code]
void main ()
{
vec4 tmpvar_1;
vec3 tmpvar_2;
tmpvar_1 = (gl_ModelViewProjectionMatrix * gl_Vertex);
// Relocated from below:
gl_Position = tmpvar_1;
// Adjust o.uv, which is in clip-space coordinates:
// This should only be done for in-world effects, not full-screen quads
// In DX I check tmpvar_1.w == 1 to indicate it's full screen
// In GL you might be able to use the fact you get separate vertex shaders
tmpvar_1.x += g_eye * g_eye_separation * (tmpvar_1.w - g_convergence);
vec4 o_3;
vec4 tmpvar_4;
tmpvar_4 = (tmpvar_1 * 0.5);
vec2 tmpvar_5;
tmpvar_5.x = tmpvar_4.x;
tmpvar_5.y = (tmpvar_4.y * _ProjectionParams.x);
o_3.xy = (tmpvar_5 + tmpvar_4.w);
o_3.zw = tmpvar_1.zw;
tmpvar_2 = ((gl_ModelViewMatrix * gl_Vertex).xyz * vec3(-1.0, -1.0, 1.0));
// Adjust o.ray, which is in view-space coordinates:
// Again, this must only be applied when the shader is drawing an in-world
// effect, not a full-screen quad.
// Use the same depth as above from tmpvar_1.w, not tmpvar_2.z!
tmpvar_2.x += g_eye * g_eye_separation * (tmpvar_1.w - g_convergence) / gl_ProjectionMatrix[0].x;
vec3 tmpvar_6;
tmpvar_6 = mix (tmpvar_2, gl_Normal, vec3(_LightAsQuad));
tmpvar_2 = tmpvar_6;
//gl_Position = tmpvar_1;
xlv_TEXCOORD0 = o_3;
xlv_TEXCOORD1 = tmpvar_6;
}
[/code]
And the pixel shader should have the adjustment we discussed earlier.
[/quote]
BIG BIG THANK YOU!!! (Sorry for the long silence;)) )
IT WORKED;)) although a bit differently, but you were onto it right there;))
So, basically the way to correct the UNITY 5 SHADERS is like this:
VERTEX:
[code]
void main ()
{
vec4 tmpvar_1;
vec3 tmpvar_2;
tmpvar_1 = (gl_ModelViewProjectionMatrix * gl_Vertex);
// NO NEED TO make "gl_Position = tmpvar_1;" here;) Still need to test this a bit more;)) But it looks like there is no need.
vec4 o_3;
vec4 tmpvar_4;
tmpvar_4 = (tmpvar_1 * 0.5);
vec2 tmpvar_5;
tmpvar_5.x = tmpvar_4.x;
tmpvar_5.y = (tmpvar_4.y * _ProjectionParams.x);
o_3.xy = (tmpvar_5 + tmpvar_4.w);
o_3.zw = tmpvar_1.zw;
tmpvar_2 = ((gl_ModelViewMatrix * gl_Vertex).xyz * vec3(-1.0, -1.0, 1.0));
// Use the same depth as above from tmpvar_1.w, not tmpvar_2.z!
// We need to REMOVE the Stereo Correction here.
tmpvar_2.x -= g_eye * g_eye_separation * (tmpvar_1.w - g_convergence) / gl_ProjectionMatrix[0].x;
vec3 tmpvar_6;
tmpvar_6 = mix (tmpvar_2, gl_Normal, vec3(_LightAsQuad));
tmpvar_2 = tmpvar_6;
gl_Position = tmpvar_1;
xlv_TEXCOORD0 = o_3;
xlv_TEXCOORD1 = tmpvar_6;
}
[/code]
PIXEL:
[code]
void main ()
{
vec4 res_1;
vec3 specColor_2;
vec3 tmpvar_3;
vec2 tmpvar_4;
tmpvar_4 = (xlv_TEXCOORD0.xy / xlv_TEXCOORD0.w);
vec4 tmpvar_5;
tmpvar_5.w = 1.0;
tmpvar_5.xyz = ((xlv_TEXCOORD1 * (_ProjectionParams.z / xlv_TEXCOORD1.z)) * (1.0/((
(_ZBufferParams.x * texture2D (_CameraDepthTexture, tmpvar_4).x)
+ _ZBufferParams.y))));
// LIGHTING
// Need to REMOVE the additional Stereo Correction here!
tmpvar_5.x -= g_eye * g_eye_separation *(tmpvar_5.z - g_convergence) / gl_ProjectionMatrix[0].x;
vec3 tmpvar_6;
tmpvar_6 = (_CameraToWorld * tmpvar_5).xyz;
vec3 tmpvar_7;
tmpvar_7 = (tmpvar_6 - _LightPos.xyz);
vec3 tmpvar_8;
tmpvar_8 = -(normalize(tmpvar_7));
....
....
[/code]
Note: Not all shaders have the "tmpvar_5" variable that needs correcting. It can vary from tmpvar_1 to tmpvar_7.
If you want to check it out and how it looks you can see the screenshots that I posted here:
https://forums.geforce.com/default/topic/891735/3d-vision/layers-of-fear-3d-vision-support-thread-unity-5-opengl-/
But most importantly, we can now also correct Unity 5 games running on the OPENGL renderer;)) (I had to update my wrapper to support this engine as well. Definitely NOT the standard engine you might expect, as I explained above;)) )
Again, BIG BIG BIG THANK YOU!!!
[code]
// SWAP WITH CORRECT OUTPUT
xlv_TEXCOORD2 = PositionCOORD;
/////////////////////////////////////
// ORIGINAL CODE IS HERE:
//xlv_TEXCOORD2 = ((gl_ModelViewMatrix * tmpvar_7).xyz * vec3(-1.0, -1.0, 1.0));
[/code]
Until now, I never thought about using this trick to correct the Texture Coords ;)) Lesson learnt!
[quote="helifax"]But most importantly we can also correct the Unity 5 from the OPENGL renderer now;)) (I had to update my wrapper to support this engine as well. Definitely NOT the standard engine you might believe, as I explained above;)) )[/quote]
So how are you implementing the rendering switch to OpenGL? Are you doing it via your wrapper or in the game's config file/console?
The reason I ask is because I recall you saying that your wrapper would load in The Evil Within. So perhaps it might be fixable now? I recently bought the full version (only had the demo before) and looked through the cvarlist but couldn't find very many useful options, everything useful is locked out in the retail version just like Wolfenstein. Anyhow, my guess would be that Doom 4 will also use dx11 on its id engine.
[quote="D-Man11"][quote="helifax"]But most importantly we can also correct the Unity 5 from the OPENGL renderer now;)) (I had to update my wrapper to support this engine as well. Definitely NOT the standard engine you might believe, as I explained above;)) )[/quote]
So how are you implementing the rendering switch to OpenGL? Are you doing it via your wrapper or in the game's config file/console?
The reason I ask is because I recall you saying that your wrapper would load in The Evil Within. So perhaps it might be fixable now? I recently bought the full version (only had the demo before) and looked through the cvarlist but couldn't find very many useful options, everything useful is locked out in the retail version just like Wolfenstein. Anyhow, my guess would be that Doom 4 will also use dx11 on its id engine.[/quote]
The Unity 5 Engine has all three variants of rendering inside basically:
- DX11
- DX9
- OpenGL.
Unfortunately, I don't know of a way to force an App to select one rendering engine over the other:( For Unity 5 I use a command line argument: "-force-opengl". This makes the renderer work in OpenGL mode;))
The game that I was using and wanted to fix is: Layers of Fear (which by default starts in DX11). I could have gone with 3DMigoto and DX11 version for the fix;) but I always wanted to get a crack at Unity5 in OpenGL mode;)) that's why I went down this route;))
Regarding the Evil Within: If you look on Steam Page the game is only available for Windows Platform (no SteamOS, meaning no Linux and no OpenGL renderer). Layers of Fears is available on all platforms (including SteamOS) and that's how I knew it has the OpenGL renderer inside (and not stripped out at build time) ;)
I'll have to give that game another go and see, maybe I find something new this time around?:))
Thanks for the reply
I wish that MSI Afterburner would display the OpenGL version being used, like they do with Direct X.
Your mentioning the command -force-opengl reminded me that World of Warcraft had an OpenGL version that could be enabled via SET gxApi "opengl" in the config folder. OpenGL in WoW is used for internal testing only and not for gameplay. So I tried it, just to try it. On Ultra, the graphics looked great (minus the dx11/Nvidia effects); on low they were very washed out. Framerates took about a 40% hit, and I doubt they've updated or optimized it very much, since it's just a test API.
Anyhow, I was curious if the in-game stereo settings would still be accessible; they weren't.
[quote="D-Man11"]
I wish that MSI Afterburner would display the OpenGL version being used, like they do with Direct X.
[/quote]
MSI Afterburner does display the renderer next to the framerate:
D3D9 : FPS
D3D11 : FPS
OGL : FPS
It tells you exactly what renderer is being used;))
Exactly, it tells you which dx version is being used.(provided that it's dx8 or newer)
But it does not tell you if it's OpenGL 1.1, 1.2, 1.3, 1.4, 1.5, 2.0, 2.1, 3.0, 3.1, 3.2, 3.3, 4.0, 4.1, 4.2, 4.3, 4.4 or Mantle or Vulkan.
Mantle was only used in a few games and was a joint venture between AMD and DICE. AFAIK, it was only used on "consoles" in Dragon Age Inquisition, Battlefield Hardline and Plants vs Zombies: Garden Warfare.
Vulkan isn't being used yet AFAIK, unless maybe DOTA 2, since Valve's Source 2 Engine was the driving force behind the new API.
If I understand correctly, your wrapper is basically two flavors.
Legacy, which is for when a "fixed pipeline" is used, which I think is anything prior to OpenGL 3.2?
Then the newer version that is for anything with a programmable pipeline.
So how do you tell if it's using a programmable pipeline in OpenGL?
Oh and congratulations on your Unity 5 proof of concept. Very Nice!!
EDIT: OpenGL 3.2 is the version where fixed pipeline support was completely dropped, not where a programmable pipeline was introduced.
https://www.opengl.org/wiki/Fixed_Function_Pipeline
[quote="D-Man11"]Exactly, it tells you which dx version is being used.(provided that it's dx8 or newer)
But it does not tell you if it's OpenGl 1.1, 1.2, 1.3, 1.4, 1.5, 2.0, 2.1, 3.0, 3.1, 3.2, 3.3, 4.0, 4.1, 4.2 4.3, 4.4 or Mantle or Vulkan.
Mantle was only used in a few games and was a joint venture between AMD and DICE. AFAIK, it was only used on "consoles" in Dragon Age Inquisition, Battlefield Hardline and Plants vs Zombies: Garden Warfare.
Vulkan isn't being used yet AFAIK, unless maybe DOTA 2, since Valve's Source 2 Engine was the driving force behind the new API.
If I understand correctly, your wrapper is basically two flavors.
Legacy, which is for when a "fixed pipeline" is used, which I think is anything prior to OpenGL 3.2?
Then the newer version that is for anything with a programmable pipeline.
So how do you tell if it's using a programmable pipeline in OpenGL?
Oh and congratulations on your Unity 5 proof of concept. Very Nice!!
EDIT: OpenGL 3.2 is the version where fixed pipeline support was completely dropped, not where a programmable pipeline was introduced.
https://www.opengl.org/wiki/Fixed_Function_Pipeline[/quote]
Exactly;))
OpenGL is a mixed bag;)) and is way more "ahead of its time" than DirectX. Don't get me wrong, DirectX is awesome from MANY points of view (being an object oriented API first of all, rather than a "state machine" -> C vs C++). But OpenGL (historically) was the first one to get new features. Tessellation? OGL had it years before it was brought to DirectX (same for a lot of other things... You don't really think Valve is CRAZY with all that talk that OpenGL is better than DirectX ^_^).
In any case! The fixed Pipeline didn't rely on shaders!
The best way to see if an OpenGL game is based on the "OLD" or "NEW" API is to look for the "glBegin()" and "glEnd()" functions being called.
This is the FIXED pipeline. Then they added shaders (before DirectX even knew the concept of shaders) in the ARB (aka assembly) format... Sounds familiar? ^_^. Initially they were just an extension in ARB format. Later, they became mainstream. So, if "glBegin()" and "glEnd()" are not present and "glUseProgram" (or the EXT or ARB variants) is present, they are using the programmable pipeline (using shaders, that is).
Now, Unity 5 is a MIX: they use shaders but also use parts of the OLD pipeline:
- The old pipeline used to send matrices from the CPU to the GPU using special functions like "glMatrixMode(MODE)" and "glLoadMatrixf()" (or variants). The programmable pipeline specifies that these functions should not be used; instead the matrices should be sent as "uniform" variables to the shaders directly.
- Unity5 sends the matrices via "glMatrixMode()" and also uses shaders;))
A little more background:
- When shaders first arrived, they relied on pre-defined variables inside;) Thus, special functions were used from the CPU side to set those variables. (Like glMatrixMode()).
- When the programmable pipeline was completed, they said not to rely on the old functions, but rather to send the data directly to the shader via uniform variables.
In Short:
- Calling "glMtrixMode(GL_PROJECTION)" + "glMatrix4f()" will send a mat4 to the shaders and is stored in the "gl_Projection" variable (predefined). This is part of the "Legacy" way;))
- Using the programmable pipeline you can have a variable called "mat4 MyProjectionMatrix" and using the "send uniform method" you can set it up;))
In the end you achieve the same thing;) BUT! When you try to reverse-engineer this it makes a big difference;)
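To make the difference concrete, here is roughly what the two styles look like in GLSL (just an illustration, not code from the game - the "My*" names are made up):
[code]
// "Legacy" style vertex shader: the matrices arrive via glMatrixMode() + glLoadMatrixf()
// on the CPU side and are read from the predefined built-ins (compatibility profile only).
#version 120
void main()
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
}
[/code]
[code]
// "Programmable pipeline" style: the application uploads the matrices itself
// (e.g. with glUniformMatrix4fv) into explicitly declared uniforms.
#version 150
uniform mat4 MyProjectionMatrix;   // made-up names
uniform mat4 MyModelViewMatrix;
in vec4 in_Position;
void main()
{
    gl_Position = MyProjectionMatrix * MyModelViewMatrix * in_Position;
}
[/code]
Same result on screen, but for a wrapper it matters a lot: in the first case you have to intercept the glMatrixMode()/glLoadMatrixf() calls to know what the projection matrix is, while in the second you can just watch the uniform uploads.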
According to Nvidia documents on Stereo3D: There are 2 ways of making a Stereo Projection:
1. Modify the Position on X coord when in World Space (using the formula)
2. Modify the Projection Matrix so that it is in Stereo (based on the viewing eye).
1) works for the programmable pipeline. But what if there are no shaders;))?? Then you need to do it according to point 2) (legacy stuff, AKA fixed pipeline ^_^).
Hope this explained a bit more about the difference;))
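For reference, the X-offset correction from the Nvidia docs (point 1) usually boils down to something like this at the end of a vertex shader - just a sketch, with made-up uniform names for the values the wrapper would feed in:
[code]
#version 120
uniform float g_separation;   // hypothetical: per-eye separation (sign flips for left/right eye)
uniform float g_convergence;  // hypothetical: convergence / screen depth

void main()
{
    vec4 pos = gl_ModelViewProjectionMatrix * gl_Vertex;
    // shift X by the separation, scaled by how far the vertex is from the convergence plane
    pos.x += g_separation * (pos.w - g_convergence);
    gl_Position = pos;
}
[/code]
For point 2) (the fixed pipeline) there is no shader to patch, so the equivalent offset has to be baked into the projection matrix itself before it is handed to the driver.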
Thanks for the reply.
Yah, it seems so confusing, because when I look at games that were released somewhat more recently (let's say after Doom 3) I still see a lot of deprecated functions that I thought were no longer used. So it seems this is just backward compatibility in the API and game engines for older hardware.
So, all going well these hashes will hopefully match now - let me know if they do or not:
I dropped the gles and gles3 variants since they aren't relevant for us (you can of course still extract them with --deep-dir, but they won't be hashed).
Still not sure what the difference is between glcore and opengl.
Or maybe it's for mac support? I've heard from various devs that Apple basically stopped updating OpenGL one version before it transformed into a decent API, so it's apparently a really horrible platform to support.
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD
Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword
Awesome!!! I'll have to check it out and see;))
The diff between glcore and opengl:
- GL Core specifies that the shaders MUST adhere to the OpenGL Core Profile (so no funny ARB or EXT variants of the functions that are manufacturer specific).
- Opengl is the core profile + the compatibility/extension stuff;)
On the game I tried there aren't any GLCORE variants, only the OpenGL ones (and the renderer does use ARB extensions ^_^). Opengl is the best one for compatibility across the board and across vendors;)
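You can actually see the split in the #version line of the extracted GLSL - illustrative snippets only, not taken from the game:
[code]
// glcore-style shader: core profile, no legacy built-ins allowed
#version 150 core
uniform mat4 u_mvp;        // made-up uniform name
in vec4 in_Position;
void main() { gl_Position = u_mvp * in_Position; }
[/code]
[code]
// "opengl"-style shader: compatibility profile, the old built-ins still work
#version 150 compatibility
void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }
[/code]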
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
I was able to extract the resources. However, when I run "extract_unity_shaders.py" I get this error:
Any idea why? This is on Windows + Python 2.7 installed:) No Cygwin;) Maybe that's why??
On a separate note:
This didn't work:
So, basically what I need here to do is:
It does something but doesn't correct the problem:(
I still basically have this issue:
- At low convergence (up to 0.3) everything is rendered perfectly.
- At higher convergence things start to break:
- If you are far away the shadows render correctly
- If you get close, at some point it will basically separate into 2 (like HUGE separation + convergence is applied).
- If you get even closer, it will then SNAP into the correct position again.
I remember seeing the same thing in Dark Souls 2, where the light looks OK far away, breaks in between, and looks OK again up close.
Also, the gl_ProjectionMatrix is valid in the shader;)
I made a quick test:
- Multiply the Pos * gl_ProjectionMatrix => everything was broken
- Multiply the Result with inverse(gl_ProjectionMatrix) => everything was back;)
So definitely not an Identity matrix and not zero;)
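(For reference, the quick test was literally just something like this at the end of the vertex shader - sketch only:)
[code]
#version 150 compatibility
void main()
{
    vec4 pos = gl_ModelViewProjectionMatrix * gl_Vertex;
    // multiplying by gl_ProjectionMatrix on top broke the rendering,
    // so the matrix is clearly neither identity nor zero...
    vec4 tmp = gl_ProjectionMatrix * pos;
    // ...and multiplying back by its inverse restored the image
    gl_Position = inverse(gl_ProjectionMatrix) * tmp;
}
[/code]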
I think the snapping happens when the game switches from one resolution shadow map to another if the shadows are still misaligned. Once they are aligned accurately there shouldn't be any snapping and if there is that suggests the adjustment is not 100% accurate. I saw this type of thing happening with some shadows in Montague's Mount early on before I had worked out the accurate adjustment... IIRC the missing part was the adjustment to the ray in the vertex shader for point & spot lights.
Remind me which shader you are looking at - was this the reflection shader, or one of the lighting shaders?
Aha;) THX I'll update the Python version;))
Very interesting stuff I came across (all built in variables that can be accessed!):
http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
All the variables can be accessed. In theory, if a shader is missing one, you can declare it as "uniform <type> <name>" and then use it;)) need to test this;)
In any case this is the one that I was looking for:
_ProjectionParams (float4): x is 1.0 (or -1.0 if currently rendering with a flipped projection matrix), y is the camera's near plane, z is the camera's far plane and w is 1/FarPlane.
I see it in my reflection shader and I WONDER if this isn't the reason for the SNAPPING;) I wonder if it changes based on the distance or if it is constant per shader....
The shader that I'm currently trying to fix is the lighting one.
You can also derive the near & far clipping planes from _ZBufferParams with a bit of algebra (I do this for the auto crosshair depth since I need to copy _ZBufferParams to the UI shaders anyway). It's not documented, but the Unity devs posted details of it here:
http://forum.unity3d.com/threads/_zbufferparams-values.39332/
So, 1/_ZBufferParams.w will be the near clipping plane, and the far clipping plane will be 1/(_ZBufferParams.z + _ZBufferParams.w).
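In shader terms that is just (a sketch, using the _ZBufferParams uniform Unity already provides):
[code]
uniform vec4 _ZBufferParams;   // provided by Unity

float near_plane() { return 1.0 / _ZBufferParams.w; }
float far_plane()  { return 1.0 / (_ZBufferParams.z + _ZBufferParams.w); }
[/code]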
I doubt it - the clipping planes might change for a cutscene or a different level, but I wouldn't expect them to change while just walking around, and they don't come into the usual patterns we use to fix Unity games (one exception - I did use them in The Last Tinker).
I'd be interested to know which shader that matches once we get extract_unity_shaders working, but looking back at the vertex shader it looks like it's closer to the point/spot/physical lighting shaders and may also need an adjustment to the vertex shader. If it's a non-full screen point or spot light it should be something like:
And the pixel shader should have the adjustment we discussed earlier.
BIG BIG THANK YOU!!! (Sorry for the long silence;)) )
IT WORKED;)) It ended up a bit different, but you were on to it right there;))
So, basically the way to correct the UNITY 5 SHADERS is like this:
VERTEX:
PIXEL:
Note: Not all shaders have the "tmpvar_5" variable that needs correcting. It can vary from tmpvar_1 to tmpvar_7.
If you want to check it out and how it looks you can see the screenshots that I posted here:
https://forums.geforce.com/default/topic/891735/3d-vision/layers-of-fear-3d-vision-support-thread-unity-5-opengl-/
But most importantly, we can also correct Unity 5 games through the OpenGL renderer now;)) (I had to update my wrapper to support this engine as well. Definitely NOT the standard engine you might expect, as I explained above;)) )
Again, BIG BIG BIG THANK YOU!!!
Until now, I never thought about using this trick to correct the Texture Coords ;)) Lesson learnt!
So how are you implementing the rendering switch to OpenGL? Are you doing it via your wrapper or in the game's config file/console?
The reason I ask is because I recall you saying that your wrapper would load in The Evil Within. So perhaps it might be fixable now? I recently bought the full version (I only had the demo before) and looked through the cvarlist but couldn't find very many useful options; everything useful is locked out in the retail version, just like Wolfenstein. Anyhow, my guess would be that Doom 4 will also use dx11 on its id engine.
The Unity 5 Engine has all three variants of rendering inside basically:
- DX11
- DX9
- OpenGL.
Unfortunately, I don't know of a general way to force an app to select one rendering engine over the other:( For Unity 5 there is a command line argument: "-force-opengl". This makes the renderer work in OpenGL mode;))
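(For anyone who wants to try it: the argument just goes on the game's command line, e.g. in the Steam launch options or on a shortcut - the exe name below is only an example:)
[code]
LayersOfFear.exe -force-opengl
[/code]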
The game that I was using and wanted to fix is Layers of Fear (which by default starts in DX11). I could have gone with 3DMigoto and the DX11 version for the fix;) but I always wanted to get a crack at Unity5 in OpenGL mode;)) and that's why I went down this route;))
Regarding The Evil Within: if you look on the Steam page the game is only available for Windows (no SteamOS, meaning no Linux and no OpenGL renderer). Layers of Fear is available on all platforms (including SteamOS), and that's how I knew it has the OpenGL renderer inside (and not stripped out at build time) ;)
I'll have to give that game another go and see - maybe I'll find something new this time around?:))