Native 3D Vision support in Unity Games via Injector
[quote="bo3b"]
OK, sounds good. I assume the 'script' is C# code for Unity to use to draw the images correctly? Of note, I wrote a Unity test app for VR, so I'm pretty familiar with their structure and design.
I'm happy to help here if it makes sense.
It might be worth looking at the VRgin code to see if he solved the shadow glitches from off-center projection. In VR, this would be a deal-breaker, so he had to have some solution for that.
[/quote]
I took a look at his scripts for Yooka-Laylee. The game is a lot simpler in terms of effects than INSIDE. Yooka-Laylee only uses a few basic post effects like bloom and AO, and the game is rendered with a forward renderer, so you don't run into any of the issues I ran into with g-buffers and lighting. He's also disabling a bunch of effects in the script, which makes sense for VR. My goal, though, is to not have to disable anything or mess around too much with custom-tailored solutions for every game (since we already do that with 3Dmigoto).
INSIDE, on the other hand, uses a custom deferred renderer, and there are a total of 24 components attached to the main camera. About 19 of those are post effects or other things that handle volumetric lighting, velocity-vector rendering, additional depth maps and g-buffers, motion blur, water reflections, temporal antialiasing, etc.
My initial approach was to create two new stereo cameras that would be updated from the main camera, have their matrices offset, and render to separate render targets. While at first this seemed like it would work, I ran into issues because I also had to copy all the post effects from the main camera. Most of the effects were broken or caused the game to crash when enabled. The game also seemed to be reading the matrices from the original camera instead of the modified ones, so lighting, shadows, and all the effects were off. My next approach was to offset the main camera and render its contents to the left eye, then create a duplicate for the right camera, but in this case the left eye would render correctly and the right would be wrong. I'm guessing that the extra buffers the game uses would also have to be duplicated, because otherwise their contents would be overwritten by the subsequent eye pass. So I changed my approach: instead of creating new cameras, I just rendered the main camera twice during the same update, but this caused the game to crash or buffers to be overwritten.
What finally ended up working perfectly was to have the game render each frame sequentially: alternate between left and right eyes every other frame, then blit the backbuffer to the left and right render targets. This way each eye goes through the whole rendering pipeline and has all the correct information without having to worry about buffers being overwritten. This is similar to the approach Helifax used in his OpenGL wrapper. Of course this caused the left and right eyes to be temporally out of sync, but the solution was pretty simple. When rendering the right eye I would essentially pause Unity's time by setting its timescale to 0. Doing this alone would make the game run at half speed, so I also set the alternating (left) frame's timescale to 2, disabled vsync, and changed the update frequency to 120 so that it would effectively run at 60fps. It worked beautifully: everything is still in sync, running at full speed, and all the post effects work. The only thing that caused issues was the D3D11 temporal AA that the game uses. Since TAA uses the previous frame's contents and velocity vectors for its calculations, the final image was smeared horizontally because of the differences between the left and right eyes. I simply disabled TAA to fix it. I think this approach should work with every game without having to custom-tailor each one, with the exception of having to disable TAA or temporal effects like motion blur, which I already do anyway; I'm not a fan of smearing Vaseline on my eyes.
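To make the timing trick concrete, here's a minimal C# sketch of the idea, assuming a component attached to the main camera (the class name, field names, and separation value are illustrative, not the actual injector code):
[code]
using UnityEngine;

// Sketch of the alternating-eye approach described above (hypothetical names).
public class AlternatingStereo : MonoBehaviour
{
    public RenderTexture leftEye, rightEye; // one full-resolution target per eye
    public float eyeSeparation = 0.03f;     // illustrative value

    bool isLeftFrame = true;

    void Awake()
    {
        QualitySettings.vSyncCount = 0;    // disable vsync
        Application.targetFrameRate = 120; // 120 updates/s = 60 fps per eye
    }

    void Update()
    {
        // Right-eye frames freeze game time; left-eye frames advance it at
        // double speed, so the simulation averages out to normal speed.
        Time.timeScale = isLeftFrame ? 2f : 0f;

        // Offset the single main camera for the current eye. A full version
        // would also shift the projection matrix for convergence.
        float x = (isLeftFrame ? -0.5f : 0.5f) * eyeSeparation;
        transform.localPosition = new Vector3(x, 0f, 0f);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // The whole pipeline (g-buffers, post effects) has already run for
        // this eye, so just copy the finished frame into the per-eye target.
        Graphics.Blit(src, isLeftFrame ? leftEye : rightEye);
        Graphics.Blit(src, dst);
        isLeftFrame = !isLeftFrame;
    }
}
[/code]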
[quote="bo3b"]
[quote="sgsrules"]I was trying to use 3dmigoto since it worked perfectly on a Unity project i had made in the past. The key difference was that this project didn't use the standard rendering path or rendered any polygons except for a fullscreen quad. Rendering was done via a raymarcher and implicit surfaces. I did this to create trippy demoscene type stuff, 3d fractals, etc, which looked brilliant in stereo 3D. But now since I'm trying to use it with games that i don't have complete control of the rendering path, 3d vision automatic mode interferes with things, which is why i was hoping i could simply disable that behavior and keep everything else.[/quote]
I'm not clear on the need for 3Dmigoto here. It does allow full shader access, but that's not what you need here.
It's also worth noting that by itself 3Dmigoto doesn't alter any shaders. Only hand-fixed stuff in ShaderFixes will be loaded live to alter a scene. Nothing is done automatically by 3Dmigoto.
It should be possible to have your setup run with 3D disabled altogether, which would remove the problem of Automatic interfering.
[/quote]
I know that 3Dmigoto does not alter shaders by itself, and I'm not trying to use it to alter any shaders. The only reason I'm trying to use 3Dmigoto is to create the stereo rendertarget and enable 3D Vision. As I mentioned before, this is what I did when I was using custom shaders in my Unity project; I didn't touch any of the shader-injection part of 3Dmigoto. I just told it to force Unity to run in fullscreen and enable 3D. Then in my shader I would read the stereo params property to render the appropriate eye. Unfortunately this approach does not work now, because 3D Vision Automatic is trying to inject its automatic stereo calculation and messes things up. If it's possible to enable 3D Vision but turn off Automatic mode in 3Dmigoto, this would work perfectly.
[quote="bo3b"]
If I'm understanding this, you will generate an SBS output that is fully 3D, that has no connection to 3D Vision itself. The only drawback here would be that for 3D Vision output setups, there would be no frame-sequential version, and hence only 3D TV Play and SBS-supported outputs would work. This can be solved by using 3D Vision Direct, but is unnecessary to start, I think.
[/quote]
I believe that is correct. As I mentioned above, I used frame-sequential rendering in Unity: I would alternate eye rendering based on the stereo params, but apart from that and the creation of the stereo render target there was no other connection with 3D Vision.
If it makes things easier, I can easily output to SBS (or even checkerboard). The screens I posted for INSIDE were done in SBS mode so that I could test things, since 3D Vision is not working yet.
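For reference, once the two per-eye targets exist, packing them into a 2x-wide SBS texture is just two region copies; a sketch (the class and method names are made up):
[code]
using UnityEngine;

// Sketch: pack two per-eye textures into one double-wide SBS target.
// Assumes all three textures use the same per-eye dimensions.
static class SbsPacker
{
    public static void Pack(RenderTexture left, RenderTexture right, RenderTexture sbs)
    {
        int w = left.width, h = left.height;
        // Copy each finished eye into its half of the double-wide target.
        Graphics.CopyTexture(left,  0, 0, 0, 0, w, h, sbs, 0, 0, 0, 0);
        Graphics.CopyTexture(right, 0, 0, 0, 0, w, h, sbs, 0, 0, w, 0);
    }
}
[/code]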
[quote="bo3b"]
[quote="sgsrules"]I'm not really familiar with 3D Vision Direct. I tried googling it for some info and came up mostly empty, mainly old posts about vr or oculus.
*Edit* I found the API from Nvidias site as well as documentation, which i'll review when i have some free time.
Unity provides support for native code via a plugin interface, so the 3d Vision direct portion could be written entirely in C++. The way i see it working is that the stereo 2x wide rendertarget would be created in C++ and packacged as a dll. it would expose a few basic things to Unity using Unity's built in native plugin interface, things like setting the resolution and providing a pointer to the rendertarget so that Unity can reference it. I'm fluent in C# but I'm a bit rusty when it comes to C++. So this is the portion that i'll need help on. If there's a sample implementation somewhere that shows how the rendertarget, window creation, sterero tags etc are done i could use that as a reference and just add in the necessary code so that Unity can interface into it. Any help would be greatly appreciated.[/quote]
I'm fairly fluent in C++ at this point, and I'm happy to help where I can. Especially because this is a very intriguing approach. I've also looked at some 3D Vision Direct samples, and their API extensively, so I'm sure we can figure this out if we have to.
As I note above, though, it seems like this is unnecessary for an SBS output render. The only reason to add this would be to support frame-sequential output devices.
[/quote]
If I'm not mistaken, 3D Vision only works in frame-sequential mode (or am I missing something?). Since I'm personally using a 3D Vision 2 kit and an EDID-modded projector, this is the main thing I'm trying to get working.
[quote="bo3b"]
This is maybe the part you want to use 3Dmigoto for, because we now have DarkStarSword's SBS output shaders. I don't think we presently have a way to disable 3D Vision Automatic for that sort of use. We have a force on mode, but no force off. If necessary, I can add that pretty easily.
[/quote]
Exactly! If you can have 3D Vision be on but force 3D Vision Automatic off in 3Dmigoto, then this is all we need to get it working. So yes, please add this feature!
[quote="bo3b"]
As far as samples go, Helifax's code will be the most interesting. I recall seeing samples somewhere before, but can't pull up anything right now.
Let me know if you think I'm confused here about what you need.[/quote]
Yes, I remember seeing some code on the MTBS forums as well: http://www.mtbs3d.com/phpbb/viewtopic.php?f=105&t=16310
I could never get it to work properly though.
[quote="helifax"]3D Vision Direct, means, the Nvidia driver just gives you access to the hardware.
You need to call 2/3 functions (except the normal ones for separation and convergence) to say where to render: Left of Right. The rest is UP to you (what you render).
I actually switched my wrapper to use 3D Vision Direct as I didn't need Automatic mode (which is DirectX exclusive and does zero to OpenGL except adding overhead to the renderer and lower framerates).
I know this thing is not very well documented (it took me quite a bit to figure it out -_-) If you are willing to wait a bit, I will give you access to my own Githut repository where the latest wrapper lives and you can see what I've done there to get 3D Vision Direct working. Haven't forgot! Just focusing on RL job & stuff and ME:A lately! ;)[/quote]
I really appreciate the help, and I'm not in a hurry; there are still a few things I need to iron out before I dive into getting 3D Vision Direct working. I've also been messing around with this stuff a lot over the past week, so RL issues are starting to pile up: work, GF and dog being neglected, house and personal hygiene, lol.
[quote="helifax"]
Funny that you mention that;)
I was thinking the same thing with my OpenGL wrapper;)
The only thing it would miss if Reshade and the CM shader though:(
But is definitely doable;) Had 3D Vision running before in Linux;) from a program. Just following the same rules that I use in my wrapper, we could kick 3D Vision under any game in Linux;)
[/quote]
Like I mentioned before, you shouldn't need ReShade to get the CM shader to work. I've written the basic 3D stereo shader so that it works in a single pass, and I could easily port the code to GLSL. You would simply need to run the shader before outputting the image to the Direct3D surface in your wrapper. All you need is access to the depth and color buffers, which should be simple enough to bind.
FYI, the Zelda BOTW shader is a bit more complex, since it requires extra buffers and additional passes to properly linearize the dynamic depth buffer.
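For context, linearizing a conventional projective depth buffer is a one-liner; the BOTW case needs the extra passes because its near/far planes change dynamically. A sketch of the standard formula (standalone illustration, not the actual BOTW shader code):
[code]
// Standard linearization for a conventional [0,1] depth buffer (sketch).
// BOTW is harder because near/far are not fixed from frame to frame.
static float LinearizeDepth(float z, float near, float far)
{
    return near * far / (far - z * (far - near)); // z=0 -> near, z=1 -> far
}
[/code]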
[quote="sgsrules"]I took a look at his scripts for Yooka-Laylee. The game is a lot simpler in terms of effects than INSIDE. Yooka-Laylee only uses a few basic post effects like bloom and AO, the game is also being rendered using a forward renderer, so you don't run into any of the issues i ran into with gbuffers and lighting. He's also disabling a bunch of effects in the script which makes sense for VR. My goal though is to not have to disable anything, or mess around to much with custom tailored solutions for every game (since we already do that with 3dMigoto). [/quote]Ah, good to know. I've looked at doing stereo injection for VR as well, and I thought it was a bit magical that it would work for all effects. The forward render makes things dramatically easier for that scenario. Disabling effects is of course always our last-ditch fix too.
These sorts of things always get breathless reporting in the VR world about how it's always perfect, with no compromises. It always surprises me that people literally don't notice big things missing, like even shadows, just because it's VR. Damn hard to get a straight answer out of the internet.
[quote="sgsrules"]INSIDE on the other hand uses a custom deferred renderer and there are a total of 24 components attached to the main camera, about 19 of those are post effects or other things that handle volumetric lighting, rendering velocity vectors, additional depth maps and gbuffers, motionblur, water reflections, temporal antialiasing etc.
My initial approach was to create two new stereo cameras that would be updated from the main camera, have there matrices offset, and render to separate render targets. While at first this seemed like it would work i ran into issues because i had to also copy all the post effects from the main camera. Most of the effects were broken or caused the game to crash when enabled. The game also seemed to be reading the matrices from the original camera instead of the modified ones so lighting shadows and all the effects were off. My next approach was to offset the main camera and render its contents to the left eye and then create a duplicate for the right camera, but in this case the left eye would render correctly and the right would be wrong. I'm guessing that the extra buffers that the game uses would also have to be duplicated because otherwise their contents would be overwritten by the subsequent eye pass. So i changed my approach by not creating new cameras but instead just rendering the main camera twice during the same update, this caused the game to crash, or caused buffers to be overwritten.
What finally ended up working perfectly was to have the game render each frame sequentially, alternate between left and right eyes every other frame, and then blit the backbuffer to the left and right render targets. This way each eye would go through the whole rendering pipeline and have all the correct information for it without having to worry about buffers being overwritten. This is a similar approach to what Helifax used in his OpenGL wrapper. Of course this caused the left and right eyes to be temporally out of sync but the solution was pretty simple. When rendering the right eye i would essentially pause Unity's time by setting it's timescale to 0. Doing this alone would make the game run at half speed so i also set the alternating (left) frame's timescale to 2, disabled vsync and changed the update frequency to 120 so that it would essentially run at 60fps. It worked beautifully, everything is still in sync, running at fullspeed and all the post effects work. The only thing that caused issues was the D3D11 temporal AA that the game uses. Since TAA uses the previous frames contents and velocity vectors to calculate things, the final image was smeared horizontally because of the differences the between the left and right eyes. I simply disabled TAA to fix it. I think this approach should work with every game without having to custom tailor each one. With the exception of having to disable TAA or temporal related effects like motion blur, which i already do anyways, i'm not a fan of smearing Vaseline on my eyes.[/quote]Great info, thanks for that. When doing my VR experiments with Unity, I ran into all sorts of bizarre problems about how they handle their cameras, even using the VR camera rig that OpenVR supplies.
Your solution of setting time to zero to keep the eyes in sync is brilliant.
For later on, you can probably also support TAA and possibly motion blur. In the Alias Isolation mod, he does TAA there as well, in order to fix the 'crawling' lines of pixels. I wrote a variant of his open-source code that doubled the buffers, so 4x buffers, two for each eye. It turned out not to be necessary, because when using Automatic it would use the SBS backbuffer, and thus have the prior frame for AA. In Direct Mode it should be possible to do something similar, but it seems like that comes later.
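As a rough illustration of the doubled-buffer idea in Unity terms, it amounts to keeping a separate TAA history per eye so the temporal pass never blends left- and right-eye frames (hypothetical names, not the Alias Isolation code):
[code]
using UnityEngine;

// Sketch: one TAA history buffer per eye. Feeding the matching eye's
// history to the TAA resolve avoids the horizontal smearing described above.
public class PerEyeTaaHistory
{
    readonly RenderTexture[] history = new RenderTexture[2];

    public RenderTexture GetHistory(bool leftEye, int width, int height)
    {
        int i = leftEye ? 0 : 1;
        if (history[i] == null)
            history[i] = new RenderTexture(width, height, 0, RenderTextureFormat.ARGBHalf);
        return history[i];
    }
}
[/code]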
[quote="sgsrules"]I know that 3dmigoto does not alter shaders by itself. I'm not trying to use 3dmigoto to alter any shaders. The only reason I'm trying to use 3dmigoto is to create the stereo rendertarget and enable 3dvision. As i mentioned before this was what i did when i was using custom shaders in my unity project, i didn't touch any of the shader injection part of 3dmigoto. I just told it to force unity to run in fullscreen and enable 3d. Then in my shader i would read the stereoparams property to render the appropriate eye. Unfortuntely though this approach does not work now because 3dvision automatic is trying to inject it's automatic stereo calculation and messes things up. If it's possible to enable 3dvision but turn off Automatic mode in 3dmigito this would work perfectly.[/quote]
OK, I think I understand. At present 3Dmigoto doesn't have anything outside of Automatic Mode support. The forcing fullscreen for Unity was another of those 'must have' features to get it to start. It doesn't really make sense here with 3Dmigoto, because the tool purpose is mildly at odds with a Direct Mode usage, but let me think about this.
[quote="sgsrules"][quote="bo3b"]If I'm understanding this, you will generate a SBS output that is fully 3D, that has no connection on 3D Vision itself. The only drawback here would be that for 3D Vision output setups, there would be no frame-sequential version, and hence only 3D TV Play and SBS supported outputs would work. This can be solved by using 3D Vision Direct, but is unnecessary to start I think.[/quote]I believe that is correct. As i mentioned above I used frame sequential rendering in Unity, i would alternate eye rendering based off the stereo params, but apart from that and the creation for the stereo render target there was no other connection with 3d vision. [/quote]
This is the essence of the Automatic Mode, where it creates a stereo rendertarget, and the stereo handle and stereo params to be injected.
This is all done mainly in the stereo.h file: [url]https://github.com/bo3b/3Dmigoto/blob/master/nvstereo.h[/url]
Roughly comparable to the Stereo Bunny Sample, from the old NVidia DX11 SDK.
I expect the setup code to be roughly comparable for Direct Mode, but haven't studied it thoroughly. Let me take a look at what all it takes. Your link will be helpful, there is some great stuff buried in MTBS.
[quote="sgsrules"]if it makes things easier, I can easily output to SBS (or even checkboard) The screens i posted for INSIDE were done in SBS mode so that i could test things since 3D vision is not working yet.[/quote]Not sure, but I think that SBS is probably the best. I'm not clear on exactly what 3D Vision Direct needs, but it's logical to assume it's SBS just like Automatic. I'm not sure who flips the eyes.
[quote="sgsrules"][quote="bo3b"]As I note above though, it seems like this is unnecessary for a SBS output render though. The only reason to add this would be to support frame-sequential output devices.[/quote]If i'm not mistaken 3d vision only work in frame sequential mode (or am i missing something). Since I'm personally using a 3dVision 2 kit and a Edid modded projector this is the main thing I'm trying to get working.[/quote]OK, that sounds right. I only have frame-sequential hardware too, so I'm also keenly interested. 3D Vision hardware also supports checkerboard in some old DLP setups, but it's pretty close to say it's strictly frame-sequential. 3D TV Play is pretty much the only thing they push nowadays.
[quote="sgsrules"][quote="bo3b"]This is maybe the part you want to use 3Dmigoto for, because we now have DarkStarSword's SBS output shaders. I don't think we presently have a way to disable 3D Vision Automatic for that sort of use. We have a force on mode, but no force off. If necessary, I can add that pretty easily.[/quote]Exactly! if you can have 3d Vision be on but force 3d Vision Automatic off in 3dMigoto then this is all we need to get it working. So yes please add this feature![/quote]OK, the only real question then is where we put the Direct Mode code.
There are arguments to be made both for putting it in 3Dmigoto and for keeping it all standalone with your tool. Adding a C++ support file/class to your tool would be easy enough, so we can get access to nvapi without any trouble.
I'm not certain how well this meshes with the 3Dmigoto goals, and we try not to get too carried away with off-topic features. But you will also need the force-full-screen function for older Unity games that don't support exclusive full-screen. Pretty sure the 4.x stuff is broken because they didn't figure it out until 5.x.
That forcing of full screen requires patching OS functions, which is a bit beyond the scope of your tool. Or at least, it'd be nice to just be able to call APIs and not need to hook calls.
Seems like we need 3D Direct regardless, so let me study up on what that takes. If it seems like I have to add too much weirdness to 3Dmigoto, I might push to defer this to your code.
If it seems like a lot of the setup code is the same, then we can leverage the OS patching functionality in 3Dmigoto.
I lean toward having it be the two tools as a combined set.
[quote="sgsrules"]Yes, I remember seeing some code on the MTBS forums as well: http://www.mtbs3d.com/phpbb/viewtopic.php?f=105&t=16310
I could never get it to work properly though.[/quote]Thanks for the link, I'll start there.
[quote="sgsrules"]
Exactly! if you can have 3d Vision be on but force 3d Vision Automatic off in 3dMigoto then this is all we need to get it working. So yes please add this feature![/quote]
Would setting StereoTextureEnable to 0x00000000 in the Nvidia game profile work for your purposes? It makes games flat and they don't use more GPU than the game in 2D.
If you have the game already correctly displaying in side by side and glasses are working, you can do what I did for PCSX2: https://s3.amazonaws.com/masterotaku/PCSX2/PCSX2_3D_Vision_2017-03-17.7z
Check the changes I made to the "3dvision2sbsps.hlsl" file. There are lots of extra features I made for the top and bottom mode, but I kept the side by side mode pretty simple:
[code]
else if (mode == 2 || mode == 3) { // Side by side
    //x *= 2;
    if (stereo.z == 1)
    {
        x *= 0.5;
    }
    else
    {
        x *= 0.5;
        x = x + width * 0.75;
    }
    x1 = 1;
    if (mode == 3) { // Swap eyes
        x += width / 2 * (x >= width / 2 ? -1 : 1);
    }
}
[/code]
I do the opposite of converting 3D Vision to side by side (the original purpose of that shader): I modify the position and stretching depending on which eye you are showing. It's a bit hacky, and it will look pixelated unless you use 4x DSR (it's stretching final pixels).
Well, I managed to find a solution, and everything is working the way I want it to by using 3Dmigoto. The solution was actually quite simple; I'm surprised I didn't think of it before.
The problem I was having was that 3D Vision Automatic was kicking in and injecting its automatic stereo calculations into the shaders, which is totally unnecessary since I'm adjusting the matrices and drawing the scene twice already. So my solution?
I set separation to zero. lmao!
Since my fix doesn't use any of the separation or convergence values from 3D Vision, it's not affected. 3D Vision Automatic is still injecting the automatic stereo correction code, but since the separation is set to zero it essentially does nothing to the image. Obviously this is a less-than-ideal solution, since I'm wasting a bit of performance on 3D Vision Automatic doing unnecessary work. Ideally I would want 3D Vision Direct Mode to work, but this works for now.
[quote="sgsrules"]
I set separation to zero. lmao!
[/quote]
Did you try the StereoTextureEnable setting? It should get rid of having to change separation, and GPU usage would be the same as in 2D.
[quote="masterotaku"][quote="sgsrules"]
I set separation to zero. lmao!
[/quote]
Did you try the StereoTextureEnable setting? It should get rid of having to change separation, and GPU usage would be the same as in 2D.[/quote]
StereoTextureEnable was already set to zero in the profile. I tried setting it again but it made no difference.
I've no idea if this makes any kind of difference, but this was posted today:
VRWorks Support for Unity Now Available
Continuing our joint effort to deliver amazing VR experiences, game engine developer Unity today released 2017.1.0 Beta 2 which enables support for our VRWorks technology.
To speed efforts like these, earlier this year we promised to make adding VRWorks technologies to titles created with Unity Engine a snap by integrating support for VRWorks into the engine. Now — with Unity’s 2017.1.0 Beta 2 release — that promise has come to fruition.
https://developer.nvidia.com/nvidia-vrworks-support-unity-engine-now-available
[quote="D-Man11"]I've no idea if this makes any kind of difference, but this posted today
VRWorks Support for Unity Now Available
Continuing our joint effort to deliver amazing VR experiences, game engine developer Unity today released 2017.1.0 Beta 2 which enables support for our VRWorks technology.
To speed efforts like these, earlier this year we promised to make adding VRWorks technologies to titles created with Unity Engine a snap by integrating support for VRWorks into the engine. Now — with Unity’s 2017.1.0 Beta 2 release — that promise has come to fruition.
https://developer.nvidia.com/nvidia-vrworks-support-unity-engine-now-available[/quote]
This wouldn't really help here, but it's brilliant news nonetheless.
Sorry for the big delay here. I finally figured out how to make 3D Vision Direct work, and made a sample program based on the Tutorial_7 from the Windows SDK.
http://bo3b.net/DirectModeSample.zip
This doesn't do much, just draws a cube, but... it does it in 3D Vision now, using the Direct Mode driver calls. This does not use that sick hack of a stereo signature; it just uses nvapi calls. It supports normal separation and convergence adjustments.
The reason this is interesting is that it means we/you can add Direct Mode directly to your app, and not need 3D Vision Automatic or 3Dmigoto. This should also allow us to make this work for both the 4.x and 5.x versions of Unity, and likely allow windowed-mode 3D as well.
I didn't want to post anything before, because I wasn't sure this was possible. The documentation is nonexistent and/or misleading.
I'm going to clean up the code a bit (lots and lots of broken experiment code), but I will post this sample to GitHub for the world to use. I searched high and low and found no trace of sample code, so dangit, I made one.
The code for doing the right-eye drawing, which also sets up the projection matrix properly from the mono g_Projection:
[code]
...
status = NvAPI_Stereo_GetConvergence(g_StereoHandle, &pConvergence);
status = NvAPI_Stereo_GetSeparation(g_StereoHandle, &pSeparationPercentage);
status = NvAPI_Stereo_GetEyeSeparation(g_StereoHandle, &pEyeSeparation);

float separation = pEyeSeparation * pSeparationPercentage / 100;
float convergence = pEyeSeparation * pSeparationPercentage / 100 * pConvergence;
...
status = NvAPI_Stereo_SetActiveEye(g_StereoHandle, NVAPI_STEREO_EYE_RIGHT);
if (SUCCEEDED(status))
{
    cbChangesOnResize.mProjection = g_Projection;
    cbChangesOnResize.mProjection._31 += separation;
    cbChangesOnResize.mProjection._41 = -convergence;
    cbChangesOnResize.mProjection = XMMatrixTranspose(cbChangesOnResize.mProjection);
    g_pImmediateContext->UpdateSubresource(g_pCBChangeOnResize, 0, nullptr, &cbChangesOnResize, 0, 0);

    Render();
}
[/code]
More soon.
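For the Unity side, a native plugin wrapping those nvapi calls could be driven from C# via Unity's render-event mechanism. A sketch, where the plugin name, exports, and event IDs are all hypothetical (only GL.IssuePluginEvent and the render-event pattern are standard Unity API):
[code]
using System;
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch: C# bridge to a hypothetical native DLL that wraps the
// NvAPI_Stereo_* Direct Mode calls from the sample above.
public class DirectModeBridge : MonoBehaviour
{
    // Exported by the hypothetical native plugin.
    [DllImport("DirectModePlugin")] static extern IntPtr GetRenderEventFunc();

    const int SetLeftEye = 1;  // native side calls NvAPI_Stereo_SetActiveEye(LEFT)
    const int SetRightEye = 2; // native side calls NvAPI_Stereo_SetActiveEye(RIGHT)

    bool isLeftFrame = true;

    void OnPreRender()
    {
        // Runs the plugin callback on Unity's render thread, where the D3D11
        // device is current, before this eye's frame is drawn.
        GL.IssuePluginEvent(GetRenderEventFunc(), isLeftFrame ? SetLeftEye : SetRightEye);
    }

    void OnPostRender()
    {
        isLeftFrame = !isLeftFrame;
    }
}
[/code]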
Brilliant! And perfect timing too; I'm almost done with my BOTW fix, so I'll be jumping back onto the Unity fix when I have some free time. I played all the way through INSIDE using my fix and it was near perfect; I just need to fix the reflected camera matrix that's used for the water reflections. The game looks stunning, btw.
On another note... I'm wondering if a similar method could be used for Unreal Engine games. Full source code for the engine is available, so it might be possible to create an injector or wrapper and use the same approach.
On another note... I'm wondering if a similar method could be used for Unreal Engine games. Full source code for the engine is available. It might be possible to create an injector or wrapper and use the same approach.
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
I know that 3Dmigoto does not alter shaders by itself, and I'm not trying to use it to alter any shaders. The only reason I'm using 3Dmigoto is to create the stereo rendertarget and enable 3D Vision. As I mentioned before, this is what I did when I was using custom shaders in my Unity project: I didn't touch the shader injection part of 3Dmigoto at all, I just told it to force Unity to run in fullscreen and enable 3D. Then in my shader I would read the StereoParams property to render the appropriate eye. Unfortunately this approach does not work now, because 3D Vision Automatic is trying to inject its automatic stereo calculation and messes things up. If it's possible to enable 3D Vision but turn off Automatic mode in 3Dmigoto, this would work perfectly.
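From poking around the public nvapi headers, it looks like the driver might have a Direct mode that does exactly this: 3D Vision stays on, but nothing gets injected into the shaders. A minimal sketch of what I mean (untested on my end, and the call apparently has to happen before the D3D11 device is created):
[code]
#include "nvapi.h"

// Sketch: keep 3D Vision enabled but skip Automatic's shader injection.
// Must run BEFORE the game creates its D3D11 device/swapchain,
// otherwise the driver stays in Automatic mode.
bool EnableDirectMode()
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;

    // In Direct mode the driver provides stereo backbuffers, but the
    // app is responsible for rendering each eye itself.
    return NvAPI_Stereo_SetDriverMode(NVAPI_STEREO_DRIVER_MODE_DIRECT) == NVAPI_OK;
}
[/code]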
I believe that is correct. As I mentioned above, I used frame-sequential rendering in Unity and would alternate eye rendering based on the stereo params, but apart from that and the creation of the stereo render target there was no other connection with 3D Vision.
If it makes things easier, I can easily output to SBS (or even checkerboard). The screenshots I posted for INSIDE were done in SBS mode so that I could test things, since 3D Vision is not working yet.
If I'm not mistaken, 3D Vision only works in frame-sequential mode (or am I missing something?). Since I'm personally using a 3D Vision 2 kit and an EDID-modded projector, this is the main thing I'm trying to get working.
Exactly! If you can have 3D Vision on but force 3D Vision Automatic off in 3Dmigoto, that's all we need to get it working. So yes, please add this feature!
Yes, I remember seeing some code on the MTBS forums as well: http://www.mtbs3d.com/phpbb/viewtopic.php?f=105&t=16310
I could never get it to work properly though.
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
I really appreciate the help, and I'm not in a hurry; there are still a few things I need to iron out before I dive into getting 3D Vision Direct working. I've also been messing around with this stuff a lot over the past week, so RL issues are starting to pile up: work, the GF and dog being neglected, the house, and personal hygiene lol.
Like I mentioned before, you shouldn't need ReShade to get the CM shader to work. I've written the basic 3D stereo shader so that it works in a single pass, and I could easily port the code to GLSL. You would simply need to run the shader before outputting the image to the Direct3D surface in your wrapper. All you need is access to the depth and color buffers, which should be simple enough to bind.
FYI, the Zelda BOTW shader is a bit more complex, since it requires extra buffers and additional passes to properly linearize the dynamic depth buffer.
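For reference, the textbook linearization for a conventional (non-reversed) projection is just this; BOTW's dynamic near/far planes are what force the extra buffers and passes, since those values have to be recovered first (a sketch, with hypothetical parameter names):
[code]
// Standard linearization of a raw [0,1] D3D-style depth value.
// Assumes a conventional projection with known near/far planes;
// in BOTW those change dynamically, hence the extra passes.
float LinearizeDepth(float d, float nearPlane, float farPlane)
{
    return (nearPlane * farPlane) / (farPlane - d * (farPlane - nearPlane));
}
[/code]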
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
These sorts of things always get breathless reporting in the VR world, about how it's always perfect, with no compromises. It always surprises me that people literally don't notice big things missing, like even shadows, just because it's VR. Damn hard to get a straight answer out of the internet.
Great info, thanks for that. When doing my VR experiments with Unity, I ran into all sorts of bizarre problems about how they handle their cameras, even using the VR camera rig that OpenVR supplies.
Your solution for setting time to zero to keep eye sync is brilliant.
For later on, you can probably also support TAA and possibly motion blur. In the Alias Isolation mod, he does TAA there as well, in order to fix the 'crawling' lines of pixels. I wrote a variant of his open-source code that doubled the buffers, so 4x buffers, two for each eye. It turned out not to be necessary, because under Automatic it would use the SBS backbuffer, and thus have the prior frame for AA. In Direct Mode it should be possible to do something similar, but it seems like that comes later.
OK, I think I understand. At present 3Dmigoto doesn't have anything outside of Automatic Mode support. The fullscreen forcing for Unity was another of those 'must have' features to get it to start. It doesn't really make sense here with 3Dmigoto, because the tool's purpose is mildly at odds with Direct Mode usage, but let me think about this.
This is the essence of Automatic Mode: it creates a stereo rendertarget, plus the stereo handle and stereo params that get injected.
This is all done mainly in the stereo.h file: https://github.com/bo3b/3Dmigoto/blob/master/nvstereo.h
Roughly comparable to the Stereo Bunny Sample, from the old NVidia DX11 SDK.
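Boiled down, the relevant pieces look roughly like this (a simplified sketch, not the actual file; gStereoHandle is my naming, and NvAPI_Initialize is assumed to have been called already):
[code]
#include <d3d11.h>
#include "nvapi.h"

StereoHandle gStereoHandle = nullptr;

// Sketch: create a stereo handle on the game's D3D11 device, then read
// back the user's current settings. These are the values that get packed
// into the little stereo params texture the shaders sample.
bool InitStereoParams(ID3D11Device *device, float *sep, float *conv)
{
    if (NvAPI_Stereo_CreateHandleFromIUnknown(device, &gStereoHandle) != NVAPI_OK)
        return false;

    // Separation is a percentage of the user's maximum; convergence is
    // in world units. Both can change every frame via the hotkeys.
    return NvAPI_Stereo_GetSeparation(gStereoHandle, sep) == NVAPI_OK &&
           NvAPI_Stereo_GetConvergence(gStereoHandle, conv) == NVAPI_OK;
}
[/code]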
I expect the setup code to be roughly comparable for Direct Mode, but haven't studied it thoroughly. Let me take a look at what all it takes. Your link will be helpful, there is some great stuff buried in MTBS.
Not sure, but I think that SBS is probably the best. I'm not clear on exactly what 3D Vision Direct needs, but it's logical to assume it's SBS just like Automatic. I'm not sure who flips the eyes.
OK, that sounds right. I only have frame-sequential hardware too, so I'm also keenly interested. 3D Vision hardware also supports checkerboard in some old DLP setups, but it's pretty close to say it's strictly frame-sequential. 3D TV Play is pretty much the only thing they push nowadays.
OK, the only real question then is where we put the Direct Mode code.
There are arguments to be made both for 3Dmigoto and for keeping it all standalone with your tool. Adding a C++ support file/class to your tool would be easy enough, so we can get access to nvapi without any trouble.
I'm not certain how well this meshes with the 3Dmigoto goals, and we try to not get too carried away with off-topic features. But, you will also need the force-full-screen function for older Unity games that don't support exclusive full-screen. Pretty sure 4.x stuff is broken because they didn't figure it out until 5.x.
That full-screen forcing requires patching OS functions, which is a bit beyond the scope of your tool. Or at least, it'd be nice to just be able to call APIs, and not need to hook calls.
Seems like we need 3D Direct regardless, so let me study up on what that takes. If it seems like I have to add too much weirdness to 3Dmigoto, I might push to defer this to your code.
If it seems like a lot of the setup code is the same, then we can leverage the OS patching functionality in 3Dmigoto.
I lean toward having it be the two tools as a combined set.
Thanks for the link, I'll start there.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Would setting StereoTextureEnable to 0x00000000 in the Nvidia game profile work for your purposes? It makes games render flat, and they don't use any more GPU than the game in 2D.
If you have the game already correctly displaying in side by side and glasses are working, you can do what I did for PCSX2: https://s3.amazonaws.com/masterotaku/PCSX2/PCSX2_3D_Vision_2017-03-17.7z
Check the changes I made to the "3dvision2sbsps.hlsl" file. There are lots of extra features I made for the top-and-bottom mode, but I kept the side by side mode pretty simple.
I do the opposite of converting 3D Vision to side by side (the original purpose of that shader): I modify the position and stretching depending on which eye is being shown. It's a bit hacky and it will look pixelated unless you use 4x DSR (it's stretching final pixels).
CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com
The problem I was having was that 3D Vision Automatic was kicking in and injecting its automatic stereo calculations into the shaders, which is totally unnecessary since I'm already adjusting the matrices and drawing the scene twice. So my solution?
I set separation to zero. lmao!
Since my fix doesn't use any of the separation or convergence values from 3D Vision, it's not affected. 3D Vision Automatic is still injecting the automatic stereo correction code, but since the separation is set to zero it essentially does nothing to the image. Obviously this is a less than ideal solution, since 3D Vision Automatic is doing unnecessary work and wasting a bit of performance. Ideally I'd want 3D Vision Direct Mode to work, but this works for now.
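If anyone wants to do the same from code instead of the hotkeys/control panel, it's a one-liner through nvapi (a sketch; it assumes a stereo handle was already created from the device with NvAPI_Stereo_CreateHandleFromIUnknown):
[code]
#include "nvapi.h"

// Sketch: force separation to 0 so Automatic's injected correction
// becomes a no-op. The extra driver shader code still runs, so a bit
// of performance is wasted, but the image is untouched.
void NeutralizeAutomatic(StereoHandle stereoHandle)
{
    NvAPI_Stereo_SetSeparation(stereoHandle, 0.0f);
}
[/code]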
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
Did you try the StereoTextureEnable setting? It should get rid of having to change separation, and GPU usage would be the same as in 2D.
CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com
StereoTextureEnable was already set to zero in the profile. I tried setting it again but it made no difference.
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
VRWorks Support for Unity Now Available
Continuing our joint effort to deliver amazing VR experiences, game engine developer Unity today released 2017.1.0 Beta 2 which enables support for our VRWorks technology.
To speed efforts like these, earlier this year we promised to make adding VRWorks technologies to titles created with Unity Engine a snap by integrating support for VRWorks into the engine. Now — with Unity’s 2017.1.0 Beta 2 release — that promise has come to fruition.
https://developer.nvidia.com/nvidia-vrworks-support-unity-engine-now-available
This wouldn't really help here, but it's brilliant news nonetheless.
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
http://bo3b.net/DirectModeSample.zip
This doesn't do much, just draws a cube, but... it does it in 3D Vision now, using the Direct Mode driver calls. This does not use that sick hack of a stereo signature, it just uses nvapi calls. It supports normal separation and convergence adjustments.
The reason this is interesting is that it means we/you can add Direct Mode directly to your app, without needing 3D Vision Automatic or 3Dmigoto. This should also allow us to make this work for both the 4.x and 5.x versions of Unity, and likely allow windowed-mode 3D as well.
I didn't want to post anything before, because I wasn't sure this was possible. The documentation is nonexistent and/or misleading.
I'm going to clean up the code a bit (lots and lots of broken experiment code), but I will post this sample to GitHub for the world to use. I searched high and low and found no trace of sample code, so dangit, I made one.
The code for doing right-eye drawing, which also sets up the projection matrix properly from the mono g_projection:
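In rough outline it looks like this (a simplified sketch rather than the sample verbatim; it assumes a row-major DirectXMath projection, and DrawScene is a stand-in for the actual draw calls):
[code]
#include <DirectXMath.h>
#include "nvapi.h"

void DrawScene(DirectX::CXMMATRIX proj); // stand-in, provided elsewhere

// Right-eye pass in Direct Mode. gStereoHandle comes from
// NvAPI_Stereo_CreateHandleFromIUnknown on the D3D11 device.
void DrawRightEye(StereoHandle gStereoHandle, DirectX::CXMMATRIX g_projection)
{
    // Route all subsequent draws to the right backbuffer.
    NvAPI_Stereo_SetActiveEye(gStereoHandle, NVAPI_STEREO_EYE_RIGHT);

    // Fetch the user's live settings from the driver.
    float sepPct = 0, eyeSep = 0, convergence = 0;
    NvAPI_Stereo_GetSeparation(gStereoHandle, &sepPct);      // percentage
    NvAPI_Stereo_GetEyeSeparation(gStereoHandle, &eyeSep);
    NvAPI_Stereo_GetConvergence(gStereoHandle, &convergence);
    float separation = eyeSep * sepPct * 0.01f;

    // Apply the standard 3D Vision offset, x' = x + sep * (w - conv),
    // to the mono projection (row-vector convention, clip = v * P).
    DirectX::XMFLOAT4X4 p;
    DirectX::XMStoreFloat4x4(&p, g_projection);
    p._31 += separation;                // the sep * w term (w == view-space z)
    p._41 -= separation * convergence;  // the constant -sep * conv term

    DrawScene(DirectX::XMLoadFloat4x4(&p)); // left eye: same with signs flipped
}
[/code]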
More soon.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Win7 64bit Pro
CPU: 4790K 4.8 GHZ
GPU: Aurus 1080 TI 2.08 GHZ - 100% Watercooled !
Monitor: Asus PG278QR
And lots of ram and HD's ;)
Ty bo3b
This is really great...THANKS!!!
This is a very interesting approach with the injector... nice find sgsrules!
MY WEB
Helix Mod - Making 3D Better
My 3D Screenshot Gallery
Like my fixes? you can donate to Paypal: dhr.donation@gmail.com
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z