[quote="DarkStarSword"]Done in 3DMigoto 1.2.20 - you can now change stereo_params=125 and ini_params=120 in the d3dx.ini[/quote]
Thanks, it works fine and solves my problems.
Regarding my other request: I suppressed my unwanted text by using the v0 coordinates in the pixel shader to define which area of the screen has to be cleared, but then I discovered that the option "rasterizer_disable_scissor=1" was causing the problem... so I set it to 0. I wonder what purpose this option has...
I have another request. I did not understand most of the latest features of 3DMigoto, but I wonder if they could be used to hide labels when they should be behind the cockpit frame (see picture). There is 1 shader for the labels and 2 shaders for the cockpit frame.
Labels are drawn at screen depth; I modified the shader to put them in 3D.
Labels not being hidden by the cockpit frame is never a good thing, but in 3D Vision it is really annoying.
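The v0-based masking described above could be sketched roughly like this (the rectangle bounds here are hypothetical placeholders, not values from the actual fix): the pixel shader reads its SV_Position input (v0) and discards fragments inside the screen region that needs to be cleared.

```hlsl
// Hypothetical sketch: suppress HUD text inside a screen-space rectangle.
// v0 is the SV_Position input (pixel coordinates); the bounds below are
// placeholder values, not the ones used in any real fix.
void main(float4 v0 : SV_Position, out float4 o0 : SV_Target0)
{
    // Example region to clear: adjust to the area covered by the unwanted text.
    if (v0.x > 100 && v0.x < 400 && v0.y > 50 && v0.y < 120)
        discard;                 // skip these pixels entirely

    o0 = float4(1, 1, 1, 1);     // the shader's normal output would go here
}
```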
@3DMigoto Developers - i.e. Bo3b and Darkstarsword.
Given the current fever over VR, is there a tweak that could be added to 3DMigoto to generate SBS output (LR and RL, just for completeness)? If there is, then with some other third-party tool to handle the lens warping, we could have a simple way of playing all current 3D Vision games on a VR headset (minus all the cool stuff like head tracking etc., but still a big win). The Nvidia API had this before it became "3D Vision", and as you know, applications that try to render 3D movies without the plugin show in SBS anyway, so we kind of just need to bypass the last part...
An even cooler addition would be to add in the lens warping algorithm, OR hook into the new Nvidia VR API and use that. We would then have a ready-made one-stop shop for using all existing games on a headset.
[quote="mike_ar69"]@3DMigoto Developers - i.e. Bo3b and Darkstarsword.
Given the current fever over VR, is there a tweak that can be added in 3DMigoto to generate SBS (LR and RL, just for completeness) output? Because if there is, then with the application of some other third party tool to create lens warping, we could have a simple way of playing all current 3DVision games on a VR headset (minus all the cool stuff like headtracking etc, but still a big win). The Nvidia API used to have this before it became "3DVision" and as you know applications that try and render 3D movies without the plugin show in SBS anyway, so we kinda just need to bypass the last part...
An even cooler addition would be add in the lens warping algorithm, OR hook into the new Nvidia VR API and use that. We then have a ready made one-stop shop for all existing games to be used on a headset[/quote]
I believe that in the case of 3DMigoto, which relies on 3D Vision Automatic to actually stereoize things, this will be a no-go if you don't have 3D Vision, as the driver will fail to stereoize anything.
In that case I think we need a full wrapper to do what Nvidia Automatic does... (or use it on a PC that is already 3D Vision Ready, but instead of displaying on the screen, do additional processing and send it to the VR helmet).
I think it is a bit too early to tell, since we don't have a VR set and don't know if the Nvidia driver even allows this...
Another approach is to use 3D Discover mode, and intercept before the red and blue filters are applied to each eye, then present those frames. This way we don't need a 3D Vision Ready PC (monitor + emitter).
An additional approach is to do what I do in my wrapper, which works the same without 3D Vision since I handle everything inside. At 90 FPS I bet there will be no eye lag, so that approach might work. This is something I intend to do with my wrapper in the future anyway.
But again, without the hardware and the API knowledge I can't say for sure. Maybe bo3b or DarkStarSword know more on this than me, but this is how I see it ;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
[quote="helifax"]
Another approach is to use 3D Discover Mode! And intercept before the Red and Blue filters are applied to each eye and present those frames. This way we don't need a 3D Vision Ready PC (monitor + emitter).
[/quote]
Yeah - this is what I was talking about. Nvidia Automatic generates the two images, THEN it packages them up into an appropriate output format: 3D Vision, anaglyph, SBS, top-bottom or whatever. So we intercept just after we get the 2 stereo images and make SBS the output, or do nothing, in which case SBS is probably "what you get". That way we get the full benefit of 3D Vision Automatic, plus our own fixes, in generating the two stereo images. For VR, we just need to make sure the images are LR, and not cross-eyed, that's all. The lens warping will be a necessary extra step for VR, hence my other suggestions about where that might happen. TrinusVR just needs you to provide an LR SBS output, then it takes over - though of course it also provides an interface to the motion sensors and so on. But as a first pass, to play 3D Vision games in a VR headset (so still looking around using a mouse etc.) this could work. Put it this way: I will try it tonight with Minecraft and let you know :-)
PS - there is a version of Minecraft called "Mine'CRIFT", which has been VR-ified. I plan on trying that as well.
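The interception step described above could be sketched as a full-screen pixel shader run at present time. This is only an illustration of the idea, not 3DMigoto's actual mechanism (the texture register slots here are assumptions): each half of the output samples the corresponding eye's image, squeezed to half width.

```hlsl
// Hypothetical full-screen pass packing two eye textures into LR SBS.
// The t100/t101 register choices are illustrative only.
Texture2D<float4> LeftEye  : register(t100);
Texture2D<float4> RightEye : register(t101);
SamplerState Linear        : register(s0);

float4 main(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target0
{
    // Left half of the screen shows the left eye, right half the right eye.
    if (uv.x < 0.5)
        return LeftEye.Sample(Linear, float2(uv.x * 2, uv.y));
    else
        return RightEye.Sample(Linear, float2(uv.x * 2 - 1, uv.y));
}
```

Swapping the two branches would give the RL (cross-eyed) layout instead.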
[quote="mike_ar69"]@3DMigoto Developers - i.e. Bo3b and Darkstarsword.
Given the current fever over VR, is there a tweak that can be added in 3DMigoto to generate SBS (LR and RL, just for completeness) output? Because if there is, then with the application of some other third party tool to create lens warping, we could have a simple way of playing all current 3DVision games on a VR headset (minus all the cool stuff like headtracking etc, but still a big win). The Nvidia API used to have this before it became "3DVision" and as you know applications that try and render 3D movies without the plugin show in SBS anyway, so we kinda just need to bypass the last part...[/quote]
+1
:)
[quote="mike_ar69"][quote="helifax"]
Another approach is to use 3D Discover Mode! And intercept before the Red and Blue filters are applied to each eye and present those frames. This way we don't need a 3D Vision Ready PC (monitor + emitter).
[/quote]
Yeah - this is what I was talking about. Nvidia automatic generates the two images THEN it packages them up into an appropriate output format: 3DVision, Anaglyph, SBS, Top-Bottom or whatever. So we intercept just after we get the 2 stereo images and make SBS the output, or do nothing in which case SBS is probably "what you get". That way we are getting the full benefit of 3D automatic, and our own fixes, in generating the two stereo images. For VR, we just need to make sure the image are LR, and not cross-eyed, that's all. The lens warping will be a necessary extra step for VR, hence my other suggestions about where that might happen. TrinusVR just needs to you provide an LR SBS output, then it takes over - though of course it also provides an interface to the motion sensors and so on. But as a first pass, to play 3DVision games in a VR headset (so looking around still using a mouse etc) this could still work. Put it this way, I will try it tonight with Minecraft and let you know :-)
PS - there is version of Minecraft call "Mine'CRIFT", which has been VR-ified. I plan on trying that as well.[/quote]
Since I can't test it (as I don't have a VR headset :) ), I am actually interested in trying my OpenGL wrapper on any of the available games. Start it normally in 3D Vision, then Ctrl+T to disable 3D Vision, which will give you SBS. My wrapper requires 3D Vision to be present ONLY as a check! If I don't find the hardware I switch to mono, but if I disable the check it will always be in SBS ;)) I can do this in basically one line of code :)) Also, the surface I am presenting to uses DirectX 9 (it might be necessary to change it to DX11 at some point).
But I think it is a quick test, and I am also interested to see if the wrapper can be used like this ;)
I still think the Vive will be awesome and the way to go ;) as I have much more faith in it ;) (voices in my head). :))
[quote="helifax"][quote="mike_ar69"][quote="helifax"]
Another approach is to use 3D Discover Mode! And intercept before the Red and Blue filters are applied to each eye and present those frames. This way we don't need a 3D Vision Ready PC (monitor + emitter).
[/quote]
Yeah - this is what I was talking about. Nvidia automatic generates the two images THEN it packages them up into an appropriate output format: 3DVision, Anaglyph, SBS, Top-Bottom or whatever. So we intercept just after we get the 2 stereo images and make SBS the output, or do nothing in which case SBS is probably "what you get". That way we are getting the full benefit of 3D automatic, and our own fixes, in generating the two stereo images. For VR, we just need to make sure the image are LR, and not cross-eyed, that's all. The lens warping will be a necessary extra step for VR, hence my other suggestions about where that might happen. TrinusVR just needs to you provide an LR SBS output, then it takes over - though of course it also provides an interface to the motion sensors and so on. But as a first pass, to play 3DVision games in a VR headset (so looking around still using a mouse etc) this could still work. Put it this way, I will try it tonight with Minecraft and let you know :-)
PS - there is version of Minecraft call "Mine'CRIFT", which has been VR-ified. I plan on trying that as well.[/quote]
Since I can't test it.. as I don't have a VR headset:) I am actually interested in trying my OpenGL wrapper on any of the available games. Start it normally on 3D Vision then CTRL+T to disable 3D Vision which will give you SBS. My wrapper requires 3D Vision to be present ONLY as a check! If I don't find the hardware I switch to mono, but if I disable the check it will always be in SBS ;)) I can do this in basically one line of code:)) Also the surface I am presenting to is using DirectX9 (might be required to change it to DX11 at some point).
But I think is a quick test and I am also interested to see if the wrapper can be used like this;)
I still think the Vive will be awesome and the way to go;) as I have much more faith in it;) (voices in my head). :))[/quote]
Yeah, I will try your wrapper. Since I updated to Win10 recently, I will need to re-install all that stuff from scratch, which might not happen tonight, so I'll let you know when I get it done :-)
It should be fairly easy to grab the left and right views out of 3D Vision on the Present call, and we could then change them to output side-by-side, but we would probably also need to stop 3D Vision from trying to output stereo/anaglyph anyway, and I'm not entirely sure how we could do that (I'm more concerned about anaglyph here, as it might change the colours even if we make sure it doesn't draw everything doubled).
We might also want to consider optionally outputting to a virtual screen instead of directly to the VR display (it could easily be a very large virtual screen) - the projection would likely look better and it would probably be more comfortable for the viewer in a lot of games.
Edit: Actually, the virtual screen may not even be optional due to differences in the stereo projections - something at infinity is drawn separation pixels apart in 3D Vision, but in VR it should (I think) be in the same position on both displays, since the displays are already almost (but not quite) lined up with each eye. It might work if we put separation to minimum and use only convergence for the 3D effect, but I'm not sure that would quite work either, because I'm not sure it would account for all the differences between the Nvidia stereo correction and a VR-style stereo projection.
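For reference, the driver stereo correction this discussion hinges on is usually written as the formula below; the HLSL fragment is just that standard formula expressed as code, not a snippet from any particular fix.

```hlsl
// Standard 3D Vision Automatic correction applied to a clip-space position.
// separation and convergence come from the driver (e.g. via StereoParams in
// 3DMigoto); the sign of separation differs between the two eyes.
float4 StereoCorrect(float4 clipPos, float separation, float convergence)
{
    // Vertices at depth w == convergence get zero offset (screen depth);
    // the offset grows with distance beyond the convergence plane.
    clipPos.x += separation * (clipPos.w - convergence);
    return clipPos;
}
```

This is why something at infinity ends up "separation pixels apart": the offset never stops growing with w, unlike a VR projection where infinity converges to the same position on each display.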
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
Heh! We are all thinking along the same lines. I was thinking of something similar to this idea as a way to get all of our fixes available in the VR headsets to come, essentially treat them as new 3D display devices.
There are several things that already try to do this, with varying degrees of success. I haven't used any in the last year, so I don't know how they stack up today. VorpX does the automatic fixing, and it's very likely we could add a 3DMigoto/Helix fix to its pipeline. Vireio does from scratch what 3D Vision does, but is platform agnostic and also does SBS output. Last I checked, Vireio was a bit fragile, but it's open source so it's fixable. There is also a virtual cinema already.
My thought was to use the Oculus SDK in combination with Nvapi to create a different output. 3D Vision would be enabled, but off by default, to avoid having them try to capture the output. We'd then hook the game in question, apply our fix, and include setting driver mode to automatic. At Present(), we'd connect to Oculus SDK for warping and proper side-by-side output.
At a minimum, I think we could make a virtual 3D screen, like a projector-sized screen to play on. Doing the math on the pixels and making some approximations, we'd wind up with a pixel ratio similar to a 720p projector on a virtual screen. That's certainly a good experience today, and would probably work here as well. That would involve faking out 3D TV Play, or some sort of EDID override, or something along those lines to allow the data to be displayed.
Pretty sure this can be done, just not sure about quality. The main reason I didn't use DK2 is the quality wasn't there, I'll be very curious to see how CV1 compares.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 Latest 3Dmigoto Release Bo3b's School for ShaderHackers
[quote="bo3b"]......3D Vision would be enabled.....
.....faking out 3D TV Play...... [/quote]
Accessing Nvidia's stereoscopic drivers via 3D Vision, 3DTV Play, Optimized for GeForce or 3D Vision Discover could be a problem.
3D Vision can work via any connection (excluding USB) but "requires" that an emitter and a certified display are present. It will only output Frame Packed, Frame Sequential or Checkerboard.
3DTV Play requires a $40 product key or an Nvidia emitter to unlock. It only works via an "HDMI" connection, and the display "must" be HDMI 1.4 compliant. Nvidia limits this to certain screen sizes; small displays such as the Sony HMZ-T1 must be added by Nvidia themselves - unless they have removed the screen size requirement that they put in place to prevent access by displays such as the Sony PS3 monitor. 3DTV Play only outputs Frame Packed or Checkerboard.
Optimized for GeForce only works via an HDMI connection. It does not require a product key, but does require a certified display. It will "only" output Line Interleaved.
3D Vision Discover used to work with any display for free, without any special requirements. But for quite some time now, support has been hit and miss across different drivers and displays.
The problem lies in where Nvidia buries their stuff in the drivers and how they perform their checks and balances. Any change to refresh rate and/or resolution typically results in a red warning overlay. Any changes to registry settings get reset, if not locked.
The consumer side is pretty stiff and inflexible.
I'm uncertain how flexible the pro side is.
Nvidia also allows HDMI 1.4 access to its stereo drivers for games supporting native 3D or for Blu-ray playback. But this is also limited to the same refresh rates and resolutions supported by 3DTV Play. If they aren't used, you get the red overlay and stereo is disabled.
D-Man11:
- What bo3b is saying is to try and emulate a 3D Vision setup so the nvidia 3D Vision driver kicks in and does the Stereorization for us. The driver then doesn't need to output anywhere but to our buffers which we use to draw.
As a pure schematic:
Plain 2D ----> Emulate 3D Vision ----> 3D driver kicks in and does the Automatic Stereo Conversion ----> Apply 3DMigoto game fix ----> We capture the output and using Oculus SDK we present it in our own Window ----> Stereo 3D.
This is exactly how my wrapper works, except relying on the 3D Vision Automatic driver to do the automatic stereoization (since there isn't anything for OpenGL).
At least this is how I understand it ;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
What I'm saying is that Nvidia doesn't allow you to willy-nilly pick any resolution, refresh rate or format with their consumer stereoscopic drivers.
So a lot will depend on how it's accessed. Like I said, even games that have native stereoscopic support for all formats, resolutions and refresh rates are limited to what Nvidia allows when used with an Nvidia GPU.
I wouldn't be surprised if they came out with another product like "VR Vision" or something and locked its support to certified headsets.
As for your wrapper, the only reason it can be used in side by side is the way it generates and presents the frames. Which, you admit, results in issues due to the overhead incurred from wrapping OpenGL to DirectX. If you can solve your overhead, OpenGL games would be an option. Unless solving your overhead changes the current way that you present your frames, in which case, when you exit 3D via Ctrl+T, you would see a single image, just as you do with DirectX games.
[quote="D-Man11"]What I'm saying is that Nvidia doesn't allow you to willy nilly pick any resolution, refresh rate or format with their consumer stereoscopic drivers.
So a lot will depend on how it's accessed. Like I said, even games that have Native Stereoscopic support for all formats, resolutions and refresh rates are limited to what Nvidia allows when used with a Nvidia GPU.
I wouldn't be surprised if they came out with another product like "VR Vision" or something and lock it's support to certified headsets.
As far as your wrapper, the only reason that it can be used in side by side is because of the way that it generates and presents the frames. Which you admit, results in issues due to the overhead incurred from wrapping OpenGL to DirectX. If you can solved your overhead, OpenGL games would be an option. Unless by solving your overhead, it changes the current way that you present your frames. In which case, when you exit 3D via ctrl+t, you would see a single image, just as you do with DirectX games.[/quote]
There is no overhead. The OpenGL games are still rendered using OpenGL. The additional logic the wrapper does is minimal at best! The huge chunk of code that gets executed is during initialisation. I just capture the result and display it in a DirectX 9 window, since the Nvidia driver requires a DirectX 9-11 context to allow the 3D Vision hardware to kick in.
You still see side-by-side when you press Ctrl+T because I don't stop stereo rendering. Basically I don't do anything, as nothing has happened. I ALWAYS render side-by-side!!! It is the Nvidia driver that takes the left image and displays it for the left eye and the right image for the right eye, and changes the display mode to FRAME SEQUENTIAL. This is done by adding a special signature in the header of the colour textures for each eye. The driver picks up this signature and changes how the display works, not the content.
Thus the 120Hz/2 = 60Hz for each eye. It is basically a display CONVERSION and NOT a CONTENT conversion ;)
It is easy to make the wrapper show MONO on Ctrl+T, but I didn't do this because of the feedback I got: some people like to have the ability to get an SBS image and use it with OTHER hardware than Nvidia 3D Vision ;)
The only overhead is in the OpenGL-DirectX compatibility layer that is part of the Nvidia driver, over which I have no control, and in the display type conversion. What I want to say is that I don't need the Nvidia driver at all to make an OpenGL game display in stereo 3D in SBS!
I can do it on an Intel HD GPU or an AMD one as well. I use Nvapi in order to output to 3D Vision in frame sequential.
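For reference, the per-eye signature described above appears to be the documented "stereo blit" header: a small magic struct written into an extra row of an oversized side-by-side surface, which tells the driver to present each half to the matching eye. A minimal Python sketch of just the packing, assuming the field layout and magic value from NVIDIA's published stereo blit sample (treat both as assumptions here):

```python
import struct

# Assumed layout of NVSTEREOIMAGEHEADER from NVIDIA's stereo blit sample.
NVSTEREO_IMAGE_SIGNATURE = 0x4433564E  # reads as "NV3D" in little-endian bytes
SIH_SWAP_EYES = 0x00000001             # flag: swap left/right halves

def make_stereo_header(width, height, bpp=32, flags=0):
    """Pack the 20-byte header that tags a side-by-side surface as stereo.

    The driver looks for this signature in the extra row of a
    (2*width) x (height+1) staging surface and, if found, presents the
    left half to the left eye and the right half to the right eye --
    a display conversion, not a content conversion.
    """
    return struct.pack("<5I",
                       NVSTEREO_IMAGE_SIGNATURE,
                       2 * width,   # dwWidth: full side-by-side width
                       height,      # dwHeight
                       bpp,         # dwBPP
                       flags)       # dwFlags

hdr = make_stereo_header(1920, 1080)
assert len(hdr) == 20
```

The point of the sketch is only to show how little data the "signature" carries: the content is already side-by-side, and the header merely flips the display into frame-sequential presentation.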
Hmmm, thanks for the reply.
But what about the eye sync issues using your OpenGL wrapper? Isn't it some kind of frame stutter? Would this be noticeable using a VR headset?
An even cooler addition would be to add in the lens warping algorithm, OR hook into the new Nvidia VR API and use that. We would then have a ready-made one-stop shop for all existing games to be used on a headset.
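On the lens warping step: the early Oculus SDK warp shaders applied a radial barrel-distortion polynomial to each eye's texture coordinates, pre-compensating for the pincushion distortion of the lenses. A hedged Python sketch of just the coordinate math; the k1/k2 coefficients are illustrative placeholders, not any real headset's calibration values:

```python
def barrel_distort(u, v, k1=0.22, k2=0.24):
    """Barrel-distort normalized eye coordinates centered at (0, 0).

    r' = r * (1 + k1*r^2 + k2*r^4), the radial polynomial form used by
    early Oculus SDK warp shaders. k1/k2 here are illustrative only;
    a real headset supplies its own calibration coefficients.
    """
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale

# The lens center is untouched; points further out are pushed outward,
# so the warped image cancels the optics' pincushion distortion.
assert barrel_distort(0.0, 0.0) == (0.0, 0.0)
```

In practice this runs per-pixel on the GPU over each half of the SBS image, but the math is exactly this small.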
Rig: Intel i7-8700K @4.7GHz, 16Gb Ram, SSD, GTX 1080Ti, Win10x64, Asus VG278
I believe in the case of 3DMigoto, which relies on 3D Vision Automatic to actually stereoize stuff, this will be a NO-GO if you don't have 3D Vision, as the driver will fail to stereoize anything.
In which case I think we need a full wrapper to do what Nvidia Automatic does... (or use it on a PC that is already 3D Vision Ready, but instead of displaying on the screen you do additional processing and send it to the VR helmet).
I think it is a bit too early to tell, since we don't have a VR set, and to see if the Nvidia driver even allows this thing...
Another approach is to use 3D Discover mode, and intercept before the red and blue filters are applied to each eye and present those frames. This way we don't need a 3D Vision Ready PC (monitor + emitter).
An additional approach is to do what I do in my wrapper, which works the same without 3D Vision, as I handle everything inside. At 90 FPS I bet there will be no eye lag, so that approach might work. This is something I intend to do with my wrapper in the future anyway.
But again, without the hardware and the API knowledge I can't say for sure. Maybe bo3b or DarkStarSword know more on this than me, but this is how I see it ;)
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
Yeah - this is what I was talking about. Nvidia Automatic generates the two images, THEN it packages them up into an appropriate output format: 3D Vision, anaglyph, SBS, top-bottom or whatever. So we intercept just after we get the two stereo images and make SBS the output, or do nothing, in which case SBS is probably what you get. That way we get the full benefit of 3D Automatic, and our own fixes, in generating the two stereo images. For VR, we just need to make sure the images are LR, and not cross-eyed, that's all. The lens warping will be a necessary extra step for VR, hence my other suggestions about where that might happen. TrinusVR just needs you to provide an LR SBS output, then it takes over, though of course it also provides an interface to the motion sensors and so on. But as a first pass, to play 3D Vision games in a VR headset (so still looking around using a mouse etc.) this could still work. Put it this way: I will try it tonight with Minecraft and let you know :-)
PS: there is a version of Minecraft called "Minecrift", which has been VR-ified. I plan on trying that as well.
+1
:)
MY WEB
Helix Mod - Making 3D Better
My 3D Screenshot Gallery
Like my fixes? you can donate to Paypal: dhr.donation@gmail.com
Since I can't test it, as I don't have a VR headset :) I am actually interested in trying my OpenGL wrapper on any of the available games. Start it normally in 3D Vision, then Ctrl+T to disable 3D Vision, which will give you SBS. My wrapper requires 3D Vision to be present ONLY as a check! If I don't find the hardware I switch to mono, but if I disable the check it will always be in SBS ;)) I can do this in basically one line of code :)) Also, the surface I am presenting to uses DirectX 9 (it might be required to change it to DX11 at some point).
But I think it is a quick test, and I am also interested to see if the wrapper can be used like this ;)
I still think the Vive will be awesome and the way to go ;) as I have much more faith in it ;) (voices in my head) :))
Yeah, I will try your wrapper. Since I updated to Win10 recently, I will need to re-install all that stuff from scratch, which might not be tonight, so I'll let you know when I get it done :-)
We might also want to consider optionally outputting to a virtual screen instead of directly to the VR display (it could easily be a very large virtual screen): the projection would likely look better, and it will probably be more comfortable for the viewer in a lot of games.
Edit: Actually, the virtual screen may not even be optional, due to differences in the stereo projections: something at infinity is drawn separation pixels apart in 3D Vision, but in VR it should (I think) be in the same position on both displays, since they are already almost, but not quite, lined up with each eye. It might work if we put separation to minimum and use only convergence for the 3D effect, but I'm not sure that would quite work either, because I'm not sure it would account for all the differences between the Nvidia stereo correction and a VR-style stereo projection.
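For what it's worth, the per-eye offset formula given in NVIDIA's 3D Vision Automatic material backs this up: parallax is zero at the convergence plane and approaches the full separation value at infinity, unlike a VR projection where each half is already aligned per eye. A quick Python check; the exact form of the formula is recalled from NVIDIA's docs and should be treated as an assumption:

```python
def nv_parallax(w, separation, convergence):
    """Screen-space parallax applied by 3D Vision Automatic to a vertex
    at clip-space depth w (assumed formula from NVIDIA's stereoization
    docs: parallax = separation * (w - convergence) / w)."""
    return separation * (w - convergence) / w

sep, conv = 0.05, 1.0  # illustrative settings, normalized units

# At the convergence plane the two eyes agree (zero parallax, screen depth)...
assert nv_parallax(conv, sep, conv) == 0.0

# ...while at infinity the parallax approaches the full separation value,
# which is why distant objects end up 'separation' apart on screen.
assert abs(nv_parallax(1e9, sep, conv) - sep) < 1e-6
```

Objects in front of the convergence plane get negative parallax (pop-out), which is another thing a VR-style projection never produces on its own.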
2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit
Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD
Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword
There are several things that already try to do this, with varying degrees of success. I haven't used any in the last year, so I don't know how they stack up today. VorpX does the automatic fixing, and it's very likely we could add a 3Dmigoto/Helix fix to its pipeline. Vireio does from scratch what 3D Vision does, but is platform agnostic and also does SBS output. Last I checked Vireio was a bit fragile, but it's open source, so it's fixable. There is also a virtual cinema already.
My thought was to use the Oculus SDK in combination with Nvapi to create a different output. 3D Vision would be enabled, but off by default, to avoid having them try to capture the output. We'd then hook the game in question, apply our fix, and include setting the driver mode to automatic. At Present(), we'd connect to the Oculus SDK for warping and proper side-by-side output.
At a minimum, I think we could make a virtual 3D screen, like a projector-sized screen to play on. Doing the math on the pixels and making some approximations, we'd wind up with a similar pixel ratio to a 720p projector on a virtual screen. That's certainly a good experience today, and would probably work here as well. That would involve faking out 3DTV Play, or some sort of EDID override or something, to allow the data to be displayed.
Pretty sure this can be done, just not sure about the quality. The main reason I didn't use the DK2 is that the quality wasn't there; I'll be very curious to see how CV1 compares.
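The pixel-ratio estimate above can be sanity-checked with some rough angular math. A sketch under assumed numbers (a CV1-class panel of roughly 1080 horizontal pixels per eye across roughly 90 degrees of FOV, and a virtual screen filling about 60 degrees, like a large projection screen); all of these figures are illustrative, not measured:

```python
def virtual_screen_pixels(panel_px_wide, panel_fov_deg, screen_fov_deg):
    """Roughly how many panel pixels span a virtual screen that subtends
    screen_fov_deg of the headset's panel_fov_deg horizontal FOV.
    Assumes pixels are spread evenly across the FOV (a simplification:
    real lens distortion concentrates pixels near the center)."""
    return panel_px_wide * screen_fov_deg / panel_fov_deg

# Assumed CV1-class numbers: ~1080 px per eye across ~90 degrees,
# virtual screen filling ~60 degrees like a large projection screen.
px = virtual_screen_pixels(1080, 90.0, 60.0)
assert 700 <= px <= 740  # ~720 panel pixels across the screen's width
```

About 720 panel pixels across the virtual screen's width, which is loosely in the same ballpark as the 720p-projector comparison.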
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Accessing Nvidia's stereoscopic drivers via either 3D Vision, 3DTV Play, Optimized for GeForce or 3D Vision Discover could be a problem.
3D Vision can work via any connection (excluding USB), but requires that an emitter and a certified display are present. It will only output Frame Packed, Frame Sequential or Checkerboard.
I'm uncertain how flexible the pro side is.