Native 3D Vision support in Unity Games via Injector
I recently came across VRGIN by Eusth. It's a VR injection framework for Unity; here's a link to the repo: https://github.com/Eusth/VRGIN
It was used to add VR support for Yooka Laylee: https://www.reddit.com/r/Vive/comments/65xo4x/yookalaylee_vr_mod_openvr/
I think something similar could be done to add support for 3D Vision. It injects a SteamVR camera as well as other VR-related code into Unity. I imagine it shouldn't be too difficult to strip out the SteamVR code and replace it with a simple stereoscopic renderer. The injector provides access to the camera, so a new camera with adjusted matrices could render the scene twice, once for each eye, to a 2x-wide render target that 3Dmigoto could use. I imagine this would be a lot easier than hacking a bunch of shaders to try to get 3D Vision Automatic mode to work with a game. Thoughts?
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
This is an extremely cool idea. Very promising.
I think you are right that the VRGIN code could be adapted to provide 3D Vision output just as well.
The VRGIN code is already providing a stereoscopic render, because that is also needed for VR. It already handles the necessary eye-to-eye swapping, and draws into a 2x-wide buffer to be presented. All of those things are necessary for 3D Vision as well.
In my estimation, I think you would not use 3Dmigoto for this job, because it is tangential to the goal, and should be mostly unnecessary.
Same goes for 3D Vision Automatic. For this, since the eyes are properly created in stereo, you would want to use 3D Vision Direct instead. This will give you full control over the output timing and source, and take out all the Automatic weirdness. The only hard part would be creating an interface to the C++ nvapi calls you need, but there are only 2 or 3 of them, and you can hand-build those to give you access from C#. You'd still link against the nvapi.lib static library.
Using this approach would also alleviate the need for forcing exclusive fullscreen in Unity games. While using 3D Vision Direct, you don't need a fullscreen window, and can do 3D in a window, because you are in charge of the timing.
I commented here rather than in the 3Dmigoto thread, because this seems like a better place for discussion. I'll follow this thread.
This is super cool. I won't have time to do coding myself, but if I can help out with suggestions or review, please don't hesitate to ask.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
I would think that this would allow the use of Single Pass Stereo and Simultaneous Multi Projection? If so, 3D Vision Surround users would be very happy.
[quote="bo3b"]This is an extremely cool idea. Very promising.
I think you are right that the vrgin code could be adapted to provide 3D Vision output just as well.
The vrgin code is going to be providing a stereoscopic render already, because that is also needed for VR. And already handles the eye to eye swapping necessary. And drawing into a 2x wide buffer to be presented. All of those things are necessary for 3D Vision as well.
[/quote]
I ended up just using the injector part of the code; it's under his repo as IPA. None of the VRGIN stuff is necessary for our intended purpose.
The script I wrote simply finds the main camera in use, copies its settings, and generates two stereo cameras. The cameras are offset along their local X axis, and an off-center projection matrix is generated for each to get proper stereo pairs. The cameras render to two separate buffers, which are then applied to the final render target.
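For anyone curious, the core of it boils down to something like the sketch below. This is a simplified illustration with placeholder names and values (StereoRig, the separation and convergence numbers), not the actual mod code:
[code]
using System.Collections;
using UnityEngine;

// Simplified sketch of the idea described above (placeholder names/values, not the actual mod code):
// clone the main camera into two eye cameras, offset them on their local X axis, give each an
// off-center projection, render each eye to its own buffer, then copy both into a 2x-wide target.
public class StereoRig : MonoBehaviour
{
    public float eyeSeparation = 0.065f;  // world-space distance between the eyes (placeholder value)
    public float convergence = 1.0f;      // distance to the zero-parallax plane (placeholder value)

    Camera leftEye, rightEye;
    RenderTexture leftBuffer, rightBuffer, wideTarget;

    IEnumerator Start()
    {
        Camera source = Camera.main;
        int w = Screen.width, h = Screen.height;

        leftBuffer  = new RenderTexture(w, h, 24);
        rightBuffer = new RenderTexture(w, h, 24);
        wideTarget  = new RenderTexture(w * 2, h, 0);   // final 2x-wide render target
        wideTarget.Create();                            // force the GPU resource to exist now

        leftEye  = CreateEye(source, -0.5f * eyeSeparation, leftBuffer);
        rightEye = CreateEye(source, +0.5f * eyeSeparation, rightBuffer);

        while (true)
        {
            yield return new WaitForEndOfFrame();       // both eyes have rendered by now
            Graphics.CopyTexture(leftBuffer,  0, 0, 0, 0, w, h, wideTarget, 0, 0, 0, 0);  // left half
            Graphics.CopyTexture(rightBuffer, 0, 0, 0, 0, w, h, wideTarget, 0, 0, w, 0);  // right half
        }
    }

    Camera CreateEye(Camera source, float offset, RenderTexture target)
    {
        var go = new GameObject(offset < 0 ? "LeftEye" : "RightEye");
        go.transform.SetParent(source.transform, false);
        go.transform.localPosition = new Vector3(offset, 0f, 0f);   // offset along the local X axis

        var eye = go.AddComponent<Camera>();
        eye.CopyFrom(source);                // inherit FOV, clip planes, culling mask, etc.
        eye.targetTexture = target;
        eye.projectionMatrix = OffCenterProjection(source, offset);
        return eye;
    }

    // Asymmetric frustum so both eyes converge on the same plane (parallel-axis stereo).
    Matrix4x4 OffCenterProjection(Camera source, float offset)
    {
        float n = source.nearClipPlane, f = source.farClipPlane;
        float top = n * Mathf.Tan(0.5f * source.fieldOfView * Mathf.Deg2Rad);
        float right = top * source.aspect;
        float shift = offset * n / convergence;   // horizontal frustum shift for this eye
        return PerspectiveOffCenter(-right - shift, right - shift, -top, top, n, f);
    }

    static Matrix4x4 PerspectiveOffCenter(float l, float r, float b, float t, float n, float f)
    {
        var m = Matrix4x4.zero;
        m[0, 0] = 2f * n / (r - l);   m[0, 2] = (r + l) / (r - l);
        m[1, 1] = 2f * n / (t - b);   m[1, 2] = (t + b) / (t - b);
        m[2, 2] = -(f + n) / (f - n); m[2, 3] = -2f * f * n / (f - n);
        m[3, 2] = -1f;
        return m;
    }
}
[/code]
The frustum shift in OffCenterProjection is what keeps both eyes converged on the same plane instead of toeing the cameras in, which is what avoids vertical parallax between the two views.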
[quote="bo3b"]
In my estimation, I think you would not use 3Dmigoto for this job, because it is tangential to the goal, and should be mostly unnecessary.
Same goes with 3D Vision Automatic.
[/quote]
I was trying to use 3Dmigoto since it worked perfectly on a Unity project I had made in the past. The key difference was that this project didn't use the standard rendering path or render any polygons except for a fullscreen quad. Rendering was done via a raymarcher and implicit surfaces. I did this to create trippy demoscene-type stuff, 3D fractals, etc., which looked brilliant in stereo 3D. But now, since I'm trying to use it with games where I don't have complete control of the rendering path, 3D Vision Automatic mode interferes with things, which is why I was hoping I could simply disable that behavior and keep everything else.
[quote="bo3b"]
For this, since the eyes are properly created in stereo, you would want to use 3D Vision Direct instead. This will give you full control over the output timing and source, and take out all the Automatic weirdness. The only hard part would be creating an interface to the C++ nvapi necessary to call, but you only need 2 or 3 calls, and can hand build those to give you access from C#. You'd still link against the nvapi.lib static library.
Using this approach would also alleviate the need for forcing exclusive fullscreen in Unity games. While using 3D Vision Direct, you don't need a fullscreen window, and can do 3D in a window, because you are in charge of the timing.
[/quote]
I'm not really familiar with 3D Vision Direct. I tried googling it for some info and came up mostly empty, mainly old posts about VR or Oculus.
*Edit* I found the API on Nvidia's site as well as documentation, which I'll review when I have some free time.
Unity provides support for native code via a plugin interface, so the 3D Vision Direct portion could be written entirely in C++. The way I see it working is that the stereo 2x-wide render target would be created in C++ and packaged as a DLL. It would expose a few basic things to Unity using Unity's built-in native plugin interface, such as setting the resolution and providing a pointer to the render target so that Unity can reference it. I'm fluent in C#, but I'm a bit rusty when it comes to C++, so this is the portion I'll need help with. If there's a sample implementation somewhere that shows how the render target, window creation, stereo tags, etc. are done, I could use that as a reference and just add the code necessary for Unity to interface with it. Any help would be greatly appreciated.
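To make the handshake concrete, here's roughly what I'd expect the C# side to look like. The plugin name and exports (StereoOutput, SetStereoSource, PresentStereo) are made up, and for simplicity this sketch goes the other way around from what I described: Unity owns the 2x-wide render texture and hands its native pointer down to the plugin, which would then present it via 3D Vision Direct:
[code]
using System;
using System.Collections;
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch of the C#-to-native handshake. The plugin name and exports below (StereoOutput,
// SetStereoSource, PresentStereo) are made-up placeholders; the native side would be the C++
// DLL that drives 3D Vision Direct. Unlike the description above, Unity owns the 2x-wide
// render texture here and hands its D3D11 pointer down to the plugin.
public class StereoOutputBridge : MonoBehaviour
{
    [DllImport("StereoOutput")] static extern void SetStereoSource(IntPtr d3d11Texture, int width, int height);
    [DllImport("StereoOutput")] static extern void PresentStereo();

    public RenderTexture stereoTarget;   // the 2x-wide target both eye cameras render into

    IEnumerator Start()
    {
        stereoTarget = new RenderTexture(Screen.width * 2, Screen.height, 24);
        stereoTarget.Create();                               // force the GPU resource to exist now

        // On the D3D11 path this is an ID3D11Texture2D* the native plugin can use directly.
        SetStereoSource(stereoTarget.GetNativeTexturePtr(), stereoTarget.width, stereoTarget.height);

        while (true)
        {
            yield return new WaitForEndOfFrame();            // both eye cameras are done by now
            PresentStereo();                                  // native side splits the halves and presents
        }
    }
}
[/code]
In a real implementation the present call would probably go through GL.IssuePluginEvent so it executes on Unity's render thread instead of the main thread.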
[quote="D-Man11"]
I would think that this would allow the use of Single Pass Stereo and Simultaneous Multi Projection? If so, 3D Vision Surround users would be very happy.
[/quote]
Possibly, but as far as I know these features are only usable in Unity with VR support enabled, which causes all sorts of other things to break. Shaders also need to be specifically authored for Single Pass Stereo; most of the standard Unity post effects break when it's enabled, and the same would happen with any other shaders the devs created. On top of that, Single Pass Stereo wasn't supported until version 5.4, and the vast majority of current games use older versions. Maybe in the future. I'm not sure why SMP would be beneficial for 3D Vision Surround; it's simpler to just create one extra-wide render target. SMP is mainly aimed at VR so that you can use different sized render targets for lens-matched shading.
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z
3D Vision Direct means the Nvidia driver just gives you access to the hardware.
You need to call 2/3 functions (besides the normal ones for separation and convergence) to say where to render: left or right. The rest is up to you (what you render).
I actually switched my wrapper to use 3D Vision Direct as I didn't need Automatic mode (which is DirectX-exclusive and does nothing for OpenGL except add overhead to the renderer and lower framerates).
I know this thing is not very well documented (it took me quite a bit to figure it out -_-). If you are willing to wait a bit, I will give you access to my own GitHub repository where the latest wrapper lives, and you can see what I've done there to get 3D Vision Direct working. Haven't forgotten! Just focusing on RL job & stuff and ME:A lately! ;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
Search for "NVIDIA Special Event: Simultaneous Multi-Projection" on YouTube, about 16 minutes long.
You can see that it corrects the scenery so that it aligns properly on multiple displays that angle in around the user. The way it was presented there, it seemed like it was going to be implemented for non-VR hardware as well.
Single Pass Stereo has been around since at least 2010 in varying forms, and while originally targeted at VR, it was also available for monitors.
I'll see if I can dig up some of the links.
edit: here's one of the early papers where some researchers were first exploring the idea of Single Pass Stereo in 2008 via OpenGL.
http://www-igm.univ-mlv.fr/~fdesorbi/missbiblio/FNB08b/FNB08b.pdf
@D-Man11: I read through that paper, and it's an interesting idea. It's actually another possibility that we could use to make our own 3D Vision Automatic, and not be held to the whims of NVidia.
The basic idea is to double up the vertices using geometry shaders, and then run the PS across all the doubled vertices. Not a full 2x speedup, but helps, and also importantly solves the deferred shading problems that occur with shadows and reflections.
Idle speculation, but I was toying with the idea of making a Linux version of this, so we get to rule our own destiny. It seems very clear to me that Microsoft wants to herd everyone into a walled garden, and I like freedom better.
I don't think this will really help @sgrules here though, it's too far afield from his approach.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
That was an early and somewhat rudimentary paper. I spent hours trying to find the others that I read. I'm starting to think it was some of the OpenGL ES stuff that I read, prior to all traces of it vanishing. I do recall that a lot of the work was being done in OpenGL using new extensions that were getting updated and renamed as time went by. Some of it was about reducing the overhead, and might have been in some of the Mantle and Vulkan stuff I read. But they were also working on some DirectX stuff. Sadly, I can't seem to find any of it. I also know that, at the time, I was reading Nvidia's article about wrapping Unreal Engine DX11 to OpenGL.
[url=https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_Bringing%20Unreal%20Engine%204%20to%20OpenGL.pdf]UnReal to OpenGL pdf[/url]
Edit: I found one of the later OpenGL papers that I was thinking of; there were earlier versions of this transition from a monitor to a VR headset. This is where it's now called "OVR_multiview".
https://www.khronos.org/registry/OpenGL/extensions/OVR/OVR_multiview.txt
Here's the OpenGL ES link that's now dead and redirects, it previously linked to using 3D Vision Automatic with mobile devices via OpenGL ES, but I can no longer find it in the SDK.
http://docs.nvidia.com/tegra/data/Use_NVIDIA_3D_Vision_Automatic.html
I wish I could find the DirectX stuff
[quote="sgsrules"]I ended up just using the injector part of the code, it's under his repo as IPA. All the VRGin stuff is not necessary for our intended purpose.
The script i wrote simply finds the the main camera in use, copies it's settings and generates two stereo cameras. The cameras are offset along their local x axis and a off center projection matrix is generated for each to get proper stereo pairs. The cameras render to two separate buffers which are then applied to the final render target.[/quote]
OK, sounds good. I assume the 'script' is C# code for Unity to use to draw the images correctly? Of note, I wrote a Unity test app for VR, so I'm pretty familiar with their structure and design.
I'm happy to help here if it makes sense.
It might be worth looking at the VRgin code to see if he solved the shadow glitches from off-center projection. In VR, this would be a deal-breaker, so he had to have some solution for that.
[quote="sgsrules"]I was trying to use 3dmigoto since it worked perfectly on a Unity project i had made in the past. The key difference was that this project didn't use the standard rendering path or rendered any polygons except for a fullscreen quad. Rendering was done via a raymarcher and implicit surfaces. I did this to create trippy demoscene type stuff, 3d fractals, etc, which looked brilliant in stereo 3D. But now since I'm trying to use it with games that i don't have complete control of the rendering path, 3d vision automatic mode interferes with things, which is why i was hoping i could simply disable that behavior and keep everything else.[/quote]
I'm not clear on the need for 3Dmigoto here. It does allow full shader access, but that's not what you need here.
It's also worth noting that by itself 3Dmigoto doesn't alter any shaders. Only hand-fixed stuff in ShaderFixes will be loaded live to alter a scene. Nothing is done automatically by 3Dmigoto.
It should be possible to have your setup run with 3D disabled altogether, which would remove the problem of Automatic interfering.
If I'm understanding this, you will generate an SBS output that is fully 3D and has no connection to 3D Vision itself. The only drawback here would be that for 3D Vision output setups there would be no frame-sequential version, and hence only 3D TV Play and SBS-supported outputs would work. This can be solved by using 3D Vision Direct, but that's unnecessary to start, I think.
[quote="sgsrules"]I'm not really familiar with 3D Vision Direct. I tried googling it for some info and came up mostly empty, mainly old posts about vr or oculus.
*Edit* I found the API from Nvidias site as well as documentation, which i'll review when i have some free time.
Unity provides support for native code via a plugin interface, so the 3d Vision direct portion could be written entirely in C++. The way i see it working is that the stereo 2x wide rendertarget would be created in C++ and packacged as a dll. it would expose a few basic things to Unity using Unity's built in native plugin interface, things like setting the resolution and providing a pointer to the rendertarget so that Unity can reference it. I'm fluent in C# but I'm a bit rusty when it comes to C++. So this is the portion that i'll need help on. If there's a sample implementation somewhere that shows how the rendertarget, window creation, sterero tags etc are done i could use that as a reference and just add in the necessary code so that Unity can interface into it. Any help would be greatly appreciated.[/quote]
I'm fairly fluent in C++ at this point, and I'm happy to help where I can. Especially because this is a very intriguing approach. I've also looked at some 3D Vision Direct samples, and their API extensively, so I'm sure we can figure this out if we have to.
As I note above, though, it seems like this is unnecessary for an SBS output render. The only reason to add this would be to support frame-sequential output devices.
This is maybe the part you want to use 3Dmigoto for, because we now have DarkStarSword's SBS output shaders. I don't think we presently have a way to disable 3D Vision Automatic for that sort of use. We have a force on mode, but no force off. If necessary, I can add that pretty easily.
As far as samples go, Helifax's code will be the most interesting. I recall seeing samples somewhere before, but can't pull up anything right now.
Let me know if you think I'm confused here about what you need.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Isn't direct mode activated by the NVSTEREO_IMAGE_SIGNATURE?
Like this, from 2004 when 3D Vision did not exist?
[url]http://www.stereo3d.com/discus/messages/24/2265.html?1095551140#POST14302[/url]
[quote="D-Man11"]Isn't direct mode activated by the NVSTEREO_IMAGE_SIGNATURE
Like this, from 2004 when 3D Vision did not exist?
[url]http://www.stereo3d.com/discus/messages/24/2265.html?1095551140#POST14302[/url][/quote]
That was sort of Gen1 of the software, where you had to add that freaky signature to textures to make them 3D. It still works, but is an obsolete way to enable 3D.
The current way is to use the NVAPI directly, and create a Stereo Handle: NvAPI_Stereo_CreateHandleFromIUnknown
Which is then used to activate 3D: NvAPI_Stereo_Activate
(We actually use that old hacky way in 3Dmigoto: [url]https://github.com/bo3b/3Dmigoto/blob/master/nvstereo.h[/url], but only because we are at a bizarre spot in the runtime, in the guts of a game, where we don't have much control.)
To set Direct Mode, we'd use: NvAPI_Stereo_SetDriverMode(NVAPI_STEREO_DRIVER_MODE_DIRECT=2) before creating the DX11Device, so that Automatic mode would not kick in and take over.
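If we hand-build the C# face of it, I'd expect it to be no bigger than something like this. The "NvStereoBridge" DLL and its export names are invented for illustration; it would just be a thin C++ wrapper linking nvapi.lib that forwards to the nvapi calls above (you can't P/Invoke nvapi directly, since its functions are resolved through NvAPI_QueryInterface rather than exported by name):
[code]
using System;
using System.Runtime.InteropServices;

// Hypothetical managed face of the "2 or 3 calls" we need. "NvStereoBridge" would be a small
// hand-built C++ DLL linking nvapi.lib; the export names here are invented for illustration.
static class NvStereoBridge
{
    // Wraps NvAPI_Initialize + NvAPI_Stereo_SetDriverMode(NVAPI_STEREO_DRIVER_MODE_DIRECT).
    // Must run before the game creates its D3D11 device, or Automatic kicks in instead.
    [DllImport("NvStereoBridge")] public static extern int EnableDirectMode();

    // Wraps NvAPI_Stereo_CreateHandleFromIUnknown on the game's device, then NvAPI_Stereo_Activate.
    [DllImport("NvStereoBridge")] public static extern int ActivateStereo(IntPtr d3d11Device);

    // Wraps NvAPI_Stereo_SetActiveEye so the following work targets the left or right eye.
    // (0 = left, 1 = right is just this wrapper's own encoding, not nvapi's.)
    [DllImport("NvStereoBridge")] public static extern int SetActiveEye(int eye);
}
[/code]
The device pointer itself is available to a native rendering plugin through Unity's graphics plugin interface, so the C# side might not even need to pass it in.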
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
[quote="bo3b"]Idle speculation, but I was toying with the idea of making a Linux version of this, so we get to rule our own destiny. It seems very clear to me that Microsoft wants to herd everyone into a walled garden, and I like freedom better.[/quote]
"Give me Linux, or give me death!". I'm quite sure a person of great historical significance said that many years ago. Should this ever become anything more than idle speculation, perhaps this could merit it's own separate thread for the sake of discussion. I'd never really entertained the idea of using Linux until fairly recently, as I don't like the direction that computing in general is heading in, as good as Win 10 may be in many departments.
To the uninitiated this sounds like a massive undertaking, and there's certainly no prior expectation on my part, but if this ever materializes 3D gaming could in theory be returning to the multi-boot era, never mind a dual-boot one.
[quote="bo3b"]Idle speculation, but I was toying with the idea of making a Linux version of this, so we get to rule our own destiny. It seems very clear to me that Microsoft wants to herd everyone into a walled garden, and I like freedom better.[/quote]
Funny that you mention that;)
I was thinking the same thing with my OpenGL wrapper;)
The only thing it would miss is ReShade and the CM shader though :(
But it is definitely doable ;) I had 3D Vision running in Linux before ;) from a program. Just following the same rules that I use in my wrapper, we could kick 3D Vision under any game in Linux ;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
Backing up the whole point of this thread, I bought a Steam copy of P.A.M.E.L.A. on early access a couple of weeks ago. It uses Unity engine version 5.5.2. Looks really good, works well enough, just not in stereo 3D. Tried forcing stereo 3D via the launch options, but got no response. Not exactly a massive surprise to those more familiar with Unity games and 3D Vision. Oh well, thought it was worth a try.
@sgsrules & bo3b
Best of luck and thank you both for taking a look at this possible stereoscopic workaround for Unity games.
@helifax
That sounds intriguing! This is obviously great if it's doable, although I'd be curious to know what your thoughts and experiences are using Linux itself, regarding its benefits and its shortcomings at present, along with the practicality of getting stereo 3D working in Linux-based games. I'll assume that Linux itself has improved over the years, but is it improving fast enough? If either yourself or bo3b ever decide to take this idea of Linux and stereo 3D any further and look to discuss it in a separate dedicated thread, I'd be happy to chime in on that. I know that Linux has been around forever, but given what the situation is right now, maybe its time as a workable alternative OS might finally be arriving.
[quote="D-Man11"]Search NVIDIA Special Event: Simultaneous Multi-Projection on youtube, about 16 minute runtime.
You can see that it corrects the scenery so that it aligns properly on multiple displays that angle in around the user. The way it was presented here, it seemed like it was going to be implemented in non VR hardware as well.
[/quote]
Neat stuff!
[quote="D-Man11"]
Single Pass Stereo has been around since at least 2010 in varying forms, and while originally targeted at VR, it was also available for monitors.
I'll see if I can dig up some of the links.
edit: here's one of the early papers where some researchers were first exploring the idea of Single Pass Stereo in 2008 via OpenGL.
http://www-igm.univ-mlv.fr/~fdesorbi/missbiblio/FNB08b/FNB08b.pdf[/quote]
Yes, I know that Single Pass Stereo has been around for a while. I'm just saying that Unity didn't support it until recently (5.4, I believe), so any games made before that won't be able to use it. You also need to modify all your shaders so that anything that reads from the framebuffers offsets its coordinates to read from the appropriate side of the buffer. It's a trivial fix, but you need to do it for all your post effects.
I implemented single pass stereo in my OpenGL-based engine when I first got my Oculus DK1. It was a fairly simple process. First, attach a geometry shader to your shader program. Instead of cycling through the vertices once to output a polygon or primitive, you do it twice, once for each eye, and apply the view and projection matrices there instead of in the vertex shader. Then you can either output them to separate render targets in the shader if using a layered texture array, or offset the vertices to the appropriate side of the screen if using a single 2x-wide buffer. If doing the latter, I would apply an "eye" attribute to each vertex in the geometry shader output so that the pixel shader knows which fragments to discard on which side of the screen. If anyone's interested I could dig up some old code. I also mentioned this approach to helifax a couple of years ago for him to use in his OpenGL wrapper: https://forums.geforce.com/default/topic/682130/-opengl-3d-vision-wrapper-enabling-3d-vision-in-opengl-apps/?offset=1017#4721350
But I never followed up on it.
Like my work? You can send a donation via Paypal to sgs.rules@gmail.com
Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z