Hmm... I'm getting an error in the domain shader:
X3000: unrecognized identifier 'dcl_input_control_point_count'
edit1: Silly me, I forgot to change the filename by deleting the _replace string. But the asm shader is still not loading; am I missing something obvious?
edit2: Yeah... this is what happens if you mess with the 3DM code without proper testing. It was my directory crawler loading only HLSL shaders. Fixed now.
I need a little help understanding the asm code for the tessellation in the domain shader. viewPosition is working fine, but I'm having problems with the normals. Do I also have to multiply the normals by vDomain? When I do this I get a checkerboard pattern.
[code]// viewPosition
mul r1.xyzw, vicp[0][0].xyzw, vDomain.zzzz
mad r1.xyzw, vicp[1][0].xyzw, vDomain.xxxx, r1.xyzw
mad r1.xyzw, vicp[2][0].xyzw, vDomain.yyyy, r1.xyzw
mov r0.xyz, r1.xyzw
mul r2.xyzw, cb0[39].xyzw, r0.xxxx
mad r2.xyzw, cb0[40].xyzw, r0.yyyy, r2.xyzw
mad r2.xyzw, cb0[41].xyzw, r0.zzzz, r2.xyzw
mad r2.xyzw, cb0[42].xyzw, r1.wwww, r2.xyzw
mov o10.xyzw, r2.xyzw
// normalVS
mul r0.xyz, cb0[39].xyzx, vicp[0][1].xxxx
mad r0.xyz, cb0[40].xyzx, vicp[1][1].yyyy, r0.xyzx
mad o11.xyz, cb0[41].xyzx, vicp[2][1].zzzz, r0.xyzx[/code]
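In HLSL terms, the viewPosition path above interpolates the three control points with the barycentric weights in vDomain and only then transforms the result by cb0[39..42], while the normalVS path skips the interpolation step entirely, which is the kind of thing that tends to produce checkerboard-style artifacts. Below is a minimal sketch of how both would normally be handled; the struct names, cbuffer layout and output semantics are assumptions made for illustration, not the game's actual code:
[code]// Hypothetical domain shader sketch - names and layout are assumptions
cbuffer PerFrame : register(b0)
{
    float4x4 viewMatrix;               // stands in for cb0[39..42]
};

struct ControlPoint
{
    float4 position : POSITION;        // vicp[n][0]
    float3 normal   : NORMAL;          // vicp[n][1]
};

struct PatchConstants
{
    float edges[3] : SV_TessFactor;
    float inside   : SV_InsideTessFactor;
};

struct DomainOut
{
    // The real shader writes more outputs (including the clip-space position);
    // only the two registers from the snippet above are shown here.
    float4 viewPosition : TEXCOORD10;  // o10
    float3 normalVS     : TEXCOORD11;  // o11
};

[domain("tri")]
DomainOut main(PatchConstants pc,
               float3 bary : SV_DomainLocation,              // vDomain
               const OutputPatch<ControlPoint, 3> patch)
{
    DomainOut o;

    // Interpolate position and normal with the same barycentric weights
    // (the asm pairs vDomain.z/.x/.y with control points 0/1/2)
    float4 pos = patch[0].position * bary.z
               + patch[1].position * bary.x
               + patch[2].position * bary.y;
    float3 nrm = patch[0].normal * bary.z
               + patch[1].normal * bary.x
               + patch[2].normal * bary.y;

    // Position gets the full 4x4 transform; the normal only the 3x3
    // rotation part, followed by a renormalize.
    o.viewPosition = mul(pos, viewMatrix);
    o.normalVS     = normalize(mul(nrm, (float3x3)viewMatrix));
    return o;
}[/code]
If the interpolated, renormalized normal still looks wrong, the next thing to check is whether cb0[39..41] really is a pure view/rotation matrix, since a projection or non-uniform scale in there would also distort the normals.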
If the shader does not react to changing the o0.w register, it means that the blending state of the output merger is not alpha blending. You would need to copy it as a custom shader and modify its OM blending state. There is an article about how to do it on the 3DM wiki page.
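Something like this in the d3dx.ini is the general idea (the hash is a placeholder and the blend line is my assumption - the wiki article has the exact options your 3DMigoto version supports):
[code][ShaderOverrideMyEffect]
; Placeholder hash - use the hash of the pixel shader you want to change
Hash = 0123456789abcdef
; Skip the original draw call and re-issue it with modified OM state
handling = skip
run = CustomShaderMyEffect

[CustomShaderMyEffect]
; Force standard alpha blending so that o0.w actually has an effect
blend = ADD SRC_ALPHA INV_SRC_ALPHA
draw = from_caller[/code]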
[quote="Oomek"]If the shader does not react to changing the o0.w register, it means that the blending state of the output merger is not alpha blending. You would need to copy it as a custom shader and modify its OM blending state. There is an article about how to do it on the 3DM wiki page.[/quote]
Thanks!!! I will try it tomorrow.
Sorry for getting to this late, bo3b. I have tried 3Dmigoto-1.0.1 with PCSX2. It doesn't hook in any way, and this time it doesn't even generate a log file, even when loading it through Kaldaien's SpecialK. debug=1 and unbuffered=1 didn't help.
Could anyone here make 3Dmigoto work with Media Player Classic? I'm using MadVR and I selected the DX11 renderer for fullscreen. 3D Vision triggers and the glasses work, but it's the same situation as with PCSX2: it doesn't seem to hook. No logs are generated either.
I want to make it work to convert SBS and TAB videos to 3D Vision, so I won't depend on those other stereoscopic players that I don't like.
How can I use 3Dmigoto Upscaling + Reshade + SuperDepth?
I can't understand how to run 3Dmigoto Line Interlaced with upscaling at the moment. I changed the couple of lines that handle upscaling and added my resolution. I chose custom3D 6 (Line Interlaced), but trying to switch the 3D mode with F11 doesn't work: it says Stereo Enabled but there is no blur. I'm using an Acer EDID with an LG 55C6 TV.
Trying BladeStorm Nightmare.
[code]// 2x downscale: take the min of each 2x2 block of the source buffer.
// The t100 declaration is added here so the snippet is self-contained -
// adjust the register to wherever the source buffer is actually bound.
Texture2D<float> t100 : register(t100);

void main(float4 pos : SV_Position0, out float result : SV_Target0)
{
    int x = pos.x * 2;
    int y = pos.y * 2;
    result = t100.Load(int3(x + 0, y + 0, 0));
    result = min(result, t100.Load(int3(x + 1, y + 0, 0)));
    result = min(result, t100.Load(int3(x + 0, y + 1, 0)));
    result = min(result, t100.Load(int3(x + 1, y + 1, 0)));
}[/code]
I'm looking at the fix for The Witcher 3, but I cannot understand where the [ResourceWBuffer] resource is getting the depth buffer from. It's not initialised anywhere.
I think the problem here may be that X24X8 formats are not compatible with being used as a render target? The original resource's bind flags indicate it was only used as a shader resource, whereas your new resource wants to be used as both a shader resource and a render target but is otherwise unchanged. DirectX has all these undocumented quirks that we only find out about through trial and error, and this smells like yet another one of those.
I think your best bet is to try using the resource as a depth target instead of a render target, which should set bind flags that DirectX is happy with - you can still use oDepth to write to it from a pixel shader.
I doubt it will work, but you might try making the resource fully typed just to see if that satisfies DirectX - try adding format=R24_UNORM_X8_TYPELESS to your [CustomResource] section. I'd be quite surprised if that works, but you never know - DirectX is weird.
Or, maybe it will work if you use separate resources for the render target vs shader resource and copy between them as needed - I'm not positive that will work (I kind of doubt it will); it depends on exactly what DirectX isn't happy about.
A bit more explanation of what 3DMigoto is doing under the hood:
The type 3DMigoto selects automatically depends on how the resource is used: assign it as a depth target and 3DMigoto will use D24S8 to access both the depth and stencil buffers; assign it as a render target, shader resource or UAV and 3DMigoto will use R24X8 to access the depth buffer (there is currently no way to have 3DMigoto use X24G8 to access the stencil buffer in that scenario... send a feature request if you ever need that).
To enable a single resource to be used as both a depth/stencil buffer and a shader resource/render target/UAV (at different times), 3DMigoto tries to use fully TYPELESS formats (R24G8_TYPELESS in this case) for the underlying resources and only assigns a real type when the resource is bound to the pipeline, depending on where it was bound. If 3DMigoto used a real type for the underlying resource it would be restricted to assigning it only to slots that support that specific format, making it impossible to assign depth buffers into texture slots without first copying them into another resource.
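A rough d3dx.ini illustration of that dual-use behaviour (the section names, hashes and the t100 slot here are placeholders I made up, not anything from an actual fix):
[code][ResourceDepthCopy]
; No format specified - 3DMigoto keeps the underlying copy TYPELESS
; (R24G8_TYPELESS here) and picks a view format based on where it is bound

[ShaderOverrideSomePass]
; Placeholder hash - a pass where the game's depth buffer is bound
Hash = 0123456789abcdef
; Take a copy of the game's depth buffer (inherits the TYPELESS format)
ResourceDepthCopy = copy oD
post run = CustomShaderUseDepth

[CustomShaderUseDepth]
; Bound as a depth target -> 3DMigoto views it as D24S8
; (a pixel shader run here could still write to it via oDepth)
oD = ResourceDepthCopy
; ...or bound as a shader resource -> 3DMigoto views it as R24X8
;ps-t100 = ResourceDepthCopy[/code]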
[quote="Oomek"]I'm looking at the fix for The Witcher 3, but I cannot understand where the [ResourceWBuffer] resource is getting the depth buffer from. It's not initialised anywhere.[/quote]
[code]
[CustomShaderDownscaleWBuffer]
...
ResourceWBuffer = ref o0
...
[ShaderOverrideHBAODepthPass]
; First post-processing shader used. Good place to turn on extra dumping
; options as most broken effects will be after this point.
Hash = 170486ed36efcc9e
; This can be expensive in SLI at high resolutions:
;post ResourceWBufferStereo2Mono = stereo2mono o0
; So we use this instead to downscale the WBuffer first:
post run = CustomShaderDownscaleWBuffer
[/code]
That's what's in the git repo at the moment - I haven't checked if Helifax has made any changes since then.
Worth noting though that the "W Buffer" is *NOT* a depth buffer. It is one in the logical sense that it holds depth values, but not in the DirectX sense of something using a D*S* format, assigned to a depth/stencil target, used for depth and/or stencil tests, and holding non-linear depth values. It is just a generic render target that the game (or more precisely, nvidia's ambient occlusion shader) has written linear depth values to.
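For anyone comparing the two: values sampled from a real D3D depth buffer are non-linear and need converting back to view-space distance before they look anything like the linear values in this W buffer. A minimal sketch of that conversion for a standard perspective projection (nearZ/farZ are assumptions here - the game's actual projection parameters live in its constant buffers):
[code]// Convert a [0..1] hardware depth-buffer sample to linear view-space Z.
// Assumes a standard D3D perspective projection with the given near/far planes.
float LinearizeDepth(float d, float nearZ, float farZ)
{
    return (nearZ * farZ) / (farZ - d * (farZ - nearZ));
}

// The W buffer discussed above needs no such conversion - it already holds
// linear depth written by the ambient occlusion pass.[/code]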
https://github.com/bo3b/3Dmigoto/issues/53#issuecomment-263806573
It does not seem to work properly. It downscales, but the red channel of a colour buffer rather than a depth buffer.
Here's the error I'm getting on each frame: