3Dmigoto now open-source...
D-Man11 said: Hmmm, thanks for the reply.

But what about the eye sync issues using your OpenGL wrapper, isn't it some kind of frame stutter? Would this be noticeable using a VR headset?


Unless we test I can't say. But even now, if you keep 60 fps constant, you don't see it :) So I imagine 90 fps will be liquid smooth. But more tests are required of course ;)

That comes down to how I do the stereoscopy, and is part of the Stereorize stage. It is a limitation of how OpenGL works as an API (a state machine) rather than an object-oriented environment.
That makes it very hard to reverse engineer and automate.
In DirectX this is simpler and much more straightforward, since it is an OO environment.

3D Vision Automatic does this by duplicating render targets & draw calls (the content creation part)
and then displays it (the display conversion part).

What we are talking about above is to somehow fool the driver into doing the content creation but not the display conversion ;) so we can still use the 3DMigoto fixes.

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 01/08/2016 04:16 PM   
bo3b said:

At a minimum, I think we could make a virtual 3D screen, like a projector size screen to play. Doing the math on the pixels and making some approximations, we'd wind up with a similar pixel ratio as a 720p projector in a virtual screen. That's certainly a good experience today, and would probably work here as well. That would involve faking out 3D TV Play or some sort of EDID override or something to allow the data to be displayed.


Didn't NVIDIA already add 3D Vision support for the DK2? With a virtual screen (and the sides cut off).
"Optimized content: Few applications support VR headsets. So we’re bringing VR support to games that already work with NVIDIA 3D Vision."

I could have sworn I read about it, but it wasn't talked about much because not many people really used the DK2 past the first few days.
https://www.reddit.com/r/oculus/comments/2pmhmz/nvidia_34709_first_driver_with_3d_vision_support/

Co-founder/Web host of helixmod.blog.com

Donations for web hosting @ paypal -eqzitara@yahoo.com
or
https://www.patreon.com/user?u=791918

Posted 01/08/2016 10:09 PM   
Oh yeah, forgot about that.

That's VR Direct, which was launched with the GTX 9XX series and is Maxwell exclusive due to the architecture. http://blogs.nvidia.com/blog/2014/09/18/maxwell-virtual-reality/

You probably read about it on the Oculus forums.
https://forums.oculus.com/viewtopic.php?f=26&t=18232

Someone could probably PM Cybereality and ask him if he already has some hacks going for other headsets.

Posted 01/08/2016 10:34 PM   
D-Man11 said: Oh yah, forgot about that.

That's VR Direct, which was launched with the GTX 9XX series and is Maxwell exclusive due to the architecture. http://blogs.nvidia.com/blog/2014/09/18/maxwell-virtual-reality/


You probably read about it on the Oculus forums.

https://forums.oculus.com/viewtopic.php?f=26&t=18232


Someone could probably PM Cybereality and ask him if he already some hacks going for other headsets.



NOW THAT LAST LINE:
"Optimized content: Few applications support VR headsets. So we’re bringing VR support to games that already work with NVIDIA 3D Vision."
- See more at: http://blogs.nvidia.com/blog/2014/09/18/maxwell-virtual-reality/#sthash.kaemHxU8.dpuf

If that line proves to be TRUE, we are GOLDEN! I will buy the Oculus Rift the next day somebody confirms this (even from eBay at the ridiculous price).
If that is still true... (going in the corner and praying...)

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 01/08/2016 10:45 PM   
Well, it's a projected image on a screen. Like a big screen, but with the Rift that's a lot of wasted space. If I were to guess, maybe half the pixels used?

I can't imagine it was that great; I seriously only heard about it for one day.

Co-founder/Web host of helixmod.blog.com

Donations for web hosting @ paypal -eqzitara@yahoo.com
or
https://www.patreon.com/user?u=791918

Posted 01/08/2016 11:10 PM   
Oh yes, I'd forgotten about that announcement. Haven't seen anything since regarding that one-liner about game support, but if it's still true, that would save a lot of effort, and make all HelixMod fixes instantly available with a bright and zero ghosting display.

But... NVidia. I have to assume they lost focus somewhere along the way. Thanks for the reminder though; I'll see if I can find anything relating to the VR-ready driver they shipped at Christmas.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 01/09/2016 01:15 AM   
What is the easiest way to pass my own calculated xy coordinates from vertex to pixel shader?
I know I can create a new rendertarget and stick the coords into a texture, but I think it's a big waste for just 2 values for the entire frame.

Is there any other method I could use?

EVGA GeForce GTX 980 SC
Core i5 2500K
MSI Z77A-G45
8GB DDR3
Windows 10 x64

Posted 01/09/2016 09:13 PM   
You can just add a new texcoord output from the vertex shader and an input to the pixel shader.

- Currently this only works in HLSL shaders
- fxc has a bug that will ignore the texcoord index you specify, so make sure you add it after other texcoords.


e.g.
https://github.com/DarkStarSword/3d-fixes/blob/master/Unity5/DX11/ShaderFixes/ca5cfc8e4d8b1ce5-vs_replace.txt
https://github.com/DarkStarSword/3d-fixes/blob/master/Unity5/DX11/ShaderFixes/ecec7a4935ae0d1e-ps_replace.txt


Edit: This is just for passing from VS to PS used in the same draw call. If you need to pass between draw calls the only option is passing a buffer or render target around.
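To illustrate the idea (this is a generic sketch, not the code from the linked fixes — the names and the specific values are made up), you add a matching TEXCOORD to both shaders, declared after the game's existing texcoords to sidestep the fxc index bug:

```hlsl
// Vertex shader: declare the new output *after* the existing texcoords,
// since fxc ignores the index you specify.
void main(
        float4 pos   : POSITION0,
        float2 uv    : TEXCOORD0,
    out float4 o_pos : SV_Position0,
    out float2 o_uv  : TEXCOORD0,
    out float2 o_xy  : TEXCOORD1)    // new: carries our calculated coords
{
    o_pos = pos;                     // ...whatever the original shader did...
    o_uv  = uv;
    o_xy  = float2(0.5, 0.5);        // your calculated xy goes here
}

// Pixel shader: mirror the same input. Values are interpolated across the
// primitive unless marked nointerpolation, which suits a per-draw constant:
float4 main(
    float4 pos : SV_Position0,
    float2 uv  : TEXCOORD0,
    nointerpolation float2 xy : TEXCOORD1) : SV_Target0
{
    return float4(xy, 0, 1);
}
```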

2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

Posted 01/10/2016 03:21 AM   
I must confess that it seemed too simple to me, so I hadn't even tried :) Thanks once again. Anyway, it would have failed, as I wasn't aware of that compiler bug.

EVGA GeForce GTX 980 SC
Core i5 2500K
MSI Z77A-G45
8GB DDR3
Windows 10 x64

Posted 01/10/2016 04:12 AM   
DarkStarSword said: You can just add a new texcoord output from the vertex shader and an input to the pixel shader.

- Currently this only works in HLSL shaders
- fxc has a bug that will ignore the texcoord index you specify, so make sure you add it after other texcoords.


e.g.
https://github.com/DarkStarSword/3d-fixes/blob/master/Unity5/DX11/ShaderFixes/ca5cfc8e4d8b1ce5-vs_replace.txt
https://github.com/DarkStarSword/3d-fixes/blob/master/Unity5/DX11/ShaderFixes/ecec7a4935ae0d1e-ps_replace.txt


Edit: This is just for passing from VS to PS used in the same draw call. If you need to pass between draw calls the only option is passing a buffer or render target around.


By "buffer", did you mean a read-only constant buffer? I can't write to it, or can I?

Edit: let me explain in a bit more detail what I need it for.

I need to pass the screen-space coordinates of the brightest pixel and its brightness from one pixel shader to another. It's for a godrays shader which does not align properly with the rendered sun when I calculate the coordinates using screen-space transformations from the sun's location vector. (It seems the visible sun's circular gradient and the actual light source do not align, or I screwed something up.)

Anyway, would you tell me the most efficient way in 3DMigoto to pass 3 floats between 2 pixel shaders sitting in separate draw calls?

EVGA GeForce GTX 980 SC
Core i5 2500K
MSI Z77A-G45
8GB DDR3
Windows 10 x64

Posted 01/10/2016 05:12 PM   
Oomek said: By saying buffer you meant read only constant buffer? I can't write to it, or can I?
Constant buffers can't be written from the shader, but they are really just regular buffers and buffers can be bound somewhere else in the pipeline that can be written from a pixel or compute shader (either a render target if all other render targets are also fully typed buffers, or as a UAV).

Buffers can be either typed (array of a single type), structured (array of a structure possibly containing multiple types) or raw (byte access), and they can also have a counter associated with them, and structured buffers can also be used as a FILO append/consume buffer.
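For reference, the HLSL declarations for those buffer flavours look roughly like this (a generic sketch; MyType is a made-up struct for illustration):

```hlsl
struct MyType { float4 data; };      // made-up element struct

Buffer<float4>             typed_buf;    // typed: array of a single type
StructuredBuffer<MyType>   struct_buf;   // structured: array of a struct
ByteAddressBuffer          raw_buf;      // raw: byte-level Load()/Load4()

// Writable (UAV) variants, usable from pixel or compute shaders:
RWBuffer<float4>           rw_typed;
RWStructuredBuffer<MyType> rw_struct;    // may carry the hidden counter
RWByteAddressBuffer        rw_raw;

// Append/consume (FILO) pair built on structured buffers:
AppendStructuredBuffer<MyType>  append_buf;   // .Append(value)
ConsumeStructuredBuffer<MyType> consume_buf;  // .Consume()
```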

At the moment there is no good way to create a custom buffer other than copying a buffer from the game, but this support is coming (when I have time). Assigning them as a UAV is more untested code and there is a high chance that it will do something undesirable, like unbind all the render targets (the documentation is not particularly clear on the call to assign UAVs to pixel shaders, so I need to test it and see what it actually does in practice).

Oomek said: I need to pass a screenspace coordinates of the brightest pixel and it's brightness from one pixelshader to another. It's for godrays shader which does not align properly with the rendered sun when I calculate the coordinates using screenspace transformations from the sun's location vector. (seems like the visible sun's circular gradient and the actual light source do not align, or I screwed something up).
I haven't asked, but I assume you are just targeting this at 2D (3D would of course change the coordinates)?

Oomek said: Anyway would you tell me what would be the most efficient way in 3DM to pass 3 floats between 2 pixel shaders sitting in separate drawcalls?
If they are separate draw calls you will have to bind another render target or UAV and use it to pass them, and as you already know this support is still quite alpha. I'm working through a bug at the moment in Batman where passing a UAV from a custom compute shader (support added in the upcoming version) to the game is not working as I expect, so you might want to wait until the next version.
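As a rough sketch of what the shader side of that could look like once the binding support is solid (the slot numbers and the written values here are arbitrary assumptions, not 3DMigoto specifics): the producer pixel shader writes the three floats into a tiny UAV, and a later draw call reads them back through an SRV view of the same buffer:

```hlsl
// --- Producer pixel shader (draw call 1) ---
// u1 is an arbitrary slot choice; it must not clash with the game's
// own render targets / UAVs.
RWBuffer<float> coords : register(u1);

float4 main(float4 pos : SV_Position) : SV_Target0
{
    // Write from a single known pixel only, to avoid write races:
    if (all(uint2(pos.xy) == uint2(0, 0))) {
        coords[0] = 0.25;   // brightest pixel x (placeholder value)
        coords[1] = 0.75;   // brightest pixel y
        coords[2] = 1.0;    // its brightness
    }
    return 0;
}

// --- Consumer pixel shader (draw call 2, separate file) ---
// The same buffer rebound read-only; t100 is again an arbitrary slot.
Buffer<float> coords_in : register(t100);

float4 main_consumer(float4 pos : SV_Position) : SV_Target0
{
    float2 sun_xy     = float2(coords_in[0], coords_in[1]);
    float  brightness = coords_in[2];
    // ...use sun_xy / brightness in the godrays calculation...
    return float4(sun_xy, brightness, 1);
}
```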

2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

Posted 01/11/2016 04:04 AM   
3DMigoto 1.2.22 is out:

This release adds support for injecting custom shaders (of all types), before or after another shader, or just before the present call.

e.g. In Batman I am using this to inject a new compute shader before the main lighting compute shader (for performance reasons - this is about 10fps better than doing the job of this shader inside the main lighting shader due to effects of cache ping pong):

[ResourceLightLists]
max_copies_per_frame = 1

; Custom shader to merge adjacent light lists together:
[CustomShaderMergeLightLists]
cs = ShaderFixes/merge_tiles.hlsl
; Bit of a hack for now - copy the original list so the buffer type, size, etc
; is right (later when we can create custom resources we should switch to
; that):
ResourceLightLists = copy cs-t10
; Bind our copy to a UAV slot so that the shader can write to it:
cs-u2 = ref ResourceLightLists
; Run the compute shader with a single thread group in all dimensions:
Dispatch = 1, 1, 1
; Unbind the copy of the buffer to prevent it being an input + output simultaneously:
post cs-u2 = null
; Replace the original list with the merged one:
post cs-t10 = ref ResourceLightLists

[ShaderOverrideFTileShadingComputeShaderTSCS_Wetness00NVOPS_KeplerOptimised]
Hash = eb8c3e5e00a6c476
run = CustomShaderMergeLightLists


Here, Dispatch is the call I'm using to actually run the compute shader. If using vertex+pixel shaders you should use one of the Draw family of calls (search MSDN for usage - all Draw & Dispatch calls except indirect are supported).
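For reference, a skeleton of a shader run this way (this is not the actual merge_tiles.hlsl, just a guess at its shape): with Dispatch = 1, 1, 1 there is a single thread group, so the [numthreads] declaration is the total parallelism available:

```hlsl
// Skeleton only - the real merge_tiles.hlsl is not shown in this post.
// The u2 register matches the "cs-u2 = ref ResourceLightLists" line above.
RWStructuredBuffer<uint> light_lists : register(u2);

[numthreads(64, 1, 1)]   // with Dispatch = 1, 1, 1 this is the total thread count
void main(uint3 tid : SV_DispatchThreadID)
{
    // Each thread would merge its share of adjacent tile light lists, e.g.:
    // light_lists[tid.x] = merged_value;
}
```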

This support does not (yet) unbind unrelated shaders from the rendering pipeline, so keep in mind that if you inject a shader just before a tessellation shader it will still have hull and domain shaders bound (likewise for any effect that uses a geometry shader). I'll probably fix this in the next release, but I had other reasons to get this release out ASAP.

The ability to create custom resources is still missing (other than loading textures from file). In practical terms this means you have to copy an appropriate resource from the game (like I did above) to create new ones. If you need custom vertex data consider creating them in the vertex shader based on the SV_VertexID until this support is ready.


This release also fixes issues with copying structured buffers by reference using an intermediate resource (makes the two lines above with 'ref ResourceLightLists' work).

This release also fixes a bug in the disassembler that could sometimes create a doubled negative sign (it was supposed to be only a single negative, in case you need to fix any manually), or could insert an extra 0 before the number.

Logging is also improved if resource copying fails due to an issue creating a resource view (please report these).

2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

Posted 01/11/2016 11:47 AM   
DarkStarSword said: I haven't asked, but I assume you are just targetting this at 2D (3D would of course change the coordinates)?

It's a post-process screen-space effect, but assuming those shaders are launched for each eye, it should in theory look fine in stereo, if that's what you were asking about.
The problem with that discrepancy is that I have a variable in the constant buffer called sunDirectionAndTime which I'm using. But the sun itself is being drawn from the float3 v0 : POSITION0 fed to the vertex shader in the form of 3 gradients in .xyz, not as coordinates, and I have no way to peek at how it's being calculated.

DarkStarSword said: If they are separate draw calls you will have to bind another render target or UAV and use it to pass them, and as you already know this support is still quite alpha. I'm working through a bug at the moment in Batman where passing a UAV from a custom compute shader (support added in the upcoming version) to the game is not working as I expect, so you might want to wait until the next version.

I'm waiting patiently for it.

Edit: if RWTexture support comes eventually, how will I be able to write at a specified coordinate to avoid interpolation? Shall I just check whether SV_Position is (0,0), (0,1) or (0,2), or is there another way?

Nevermind, stupid question, I just specify the index as an argument.
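Right: a UAV write is addressed by an explicit integer texel index, so no sampling or interpolation is involved. A minimal sketch (the register slot and values are assumptions):

```hlsl
RWTexture2D<float4> out_tex : register(u1);  // slot is an arbitrary assumption

[numthreads(1, 1, 1)]
void main()
{
    // Writes go to exact texel coordinates - no filtering applies:
    out_tex[uint2(0, 0)] = float4(0.25, 0.75, 1.0, 0);
}
```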

Edit2: is this new feature functional enough to inject a shader, use it to premultiply two referenced textures, and then reference the output as a texture in the following shader? If so, I would appreciate some example code.

EVGA GeForce GTX 980 SC
Core i5 2500K
MSI Z77A-G45
8GB DDR3
Windows 10 x64

Posted 01/11/2016 08:44 PM   
[img]http://s.quickmeme.com/img/79/79949b127bf2d60e227d25b13ae1588da4fe4562de9cda41cee29d121da96c8f.jpg[/img]

EVGA GeForce GTX 980 SC
Core i5 2500K
MSI Z77A-G45
8GB DDR3
Windows 10 x64

Posted 01/14/2016 10:28 PM   
[img]https://cdn-images.xda-developers.com/direct/3/3/4/7/0/9/0/lol.jpg[/img] :D

EVGA GeForce GTX 980 SC
Core i5 2500K
MSI Z77A-G45
8GB DDR3
Windows 10 x64

Posted 01/14/2016 10:30 PM   