Hello all,
I am currently working on a project in OpenGL (core profile 4.0).
I have implemented stereo 3D support and I can run it on one monitor in 3D (fullscreen) under Windows (SLI stereo 3D doesn't work; that seems to be driver related).
I have made some preliminary tests and I will be able to make it work under Linux (Ubuntu) as well, using 3D Vision hardware, both in 3D Vision Surround fullscreen and in windowed mode.
But this is not what I want to ask.
For the last week I have searched the web high and low and read all the nVidia papers about implementing stereo 3D.
I have found a lot of "tutorials" and so on... What I am looking for is how to PROPERLY implement:
- Convergence
- Separation (aka depth).
I have set up two projection matrices the way nVidia describes and I get stereo 3D as intended. However, I cannot manipulate the convergence and separation correctly.
Based on the docs:
- Convergence is the distance to the plane of zero parallax. I have done this, but I notice a few weird things (I am looking down the -Z axis, Y is up):
1. Convergence works fine between 0 and any positive value, but at negative values all the elements are pushed back into the screen incorrectly: the closer elements are pushed back more than the farther ones. Comparing with nVidia's automatic 3D, this doesn't seem to happen for them.
Why is that? Do I need different projection matrices? nVidia says to use just one (the mono projection, just translated on X with the +/- asymmetric frustum offset).
2. Separation: nVidia says that separation is the IOD (inter-ocular distance, or interaxial) divided by the screen width. This is strange, since I haven't seen any implementation actually using that; they all just hard-code a number. The best one I could find was:
IOD = convergence / 30.0f (where convergence is the focal length, i.e. the distance to the plane of zero parallax). Where does the 30.0f come from, and why would they do that?
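Just to make the two conventions concrete, this is how I currently read them (a plain C++ sketch; the function names and example numbers are mine, not from any nVidia sample):
[code]
// NVIDIA docs: separation is a dimensionless ratio, the IOD normalized by the
// (virtual) screen width.
float separationFromIOD(float iod, float screenWidth)
{
    return iod / screenWidth;              // e.g. 0.065 m / 2.0 m = 0.0325
}

// The magic-number heuristic I keep running into in samples: tie the
// interaxial to the convergence distance so the effect scales with the scene.
float iodFromConvergence(float convergence)
{
    return convergence / 30.0f;            // the 30.0f is exactly what I am asking about
}
[/code]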
If anyone has any experience in coding stereo 3D please help me out here a bit;))
Best Regards,
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
You can code a native OpenGL (quad-buffered) application for nVidia 3D Vision. There is, however, no good current wrapper for playing OpenGL games (which generally do not support quad buffering and stereo) in stereo 3D. The legacy S-3D drivers support a number of OpenGL games, but those on the other hand do not support 3D Vision...
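Roughly, the render loop of such a quad-buffered application looks like this (a minimal sketch; it assumes the context was created with a stereo-capable pixel format, e.g. PFD_STEREO on Windows or GLX_STEREO on Linux, and renderScene() plus the two per-eye matrices stand in for the application's own code):
[code]
// One frame of a quad-buffered stereo application.
void drawStereoFrame()
{
    glDrawBuffer(GL_BACK_LEFT);                                   // left eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene(leftViewMatrix, leftProjectionMatrix);

    glDrawBuffer(GL_BACK_RIGHT);                                  // right eye
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene(rightViewMatrix, rightProjectionMatrix);

    // SwapBuffers()/glXSwapBuffers() then presents both back buffers together.
}
[/code]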
I am allowed to tell ofc:)) haha
I am working on my Master's thesis project. So far I have implemented in OpenGL the CDLOD technique for procedurally generating & rendering terrain. The full paper can be found here: http://www.vertexasylum.com/downloads/cdlod/cdlod_latest.pdf
After this I thought: hey, why not stereoscopic rendering :)) since I always wanted to render in stereo :))
I still plan on adding some more stuff to the project besides 3D rendering, but currently I am a bit stuck here. Like I said, the rendering works: I can adjust convergence from "a bit into the screen" to "screen depth" to "a lot of pop-out near the viewer", but controlling depth is a bit messed up for some unknown reason.
Once I finish the project I will release a demo both on Windows and Linux, so people can play with it :))
So now I am hunting for someone who has actually coded stereo 3D before (and not a tutorial, since those are very limited).
No, for some time now 3D Vision (and not PRO) has been available for OpenGL under Windows (starting with driver 310.xx).
Under Linux there is no official support, but there are ways of making it work. The experience is not as flawless, but it works 95% of the time (the glasses look a bit darker and sometimes lose sync for a second), and that is because we need to push the firmware into the pyramid (the IR emitter) and "manually" sync the monitor and glasses every 0.008(3) seconds.
It works for any Linux app as long as the app has quad-buffered rendering as an option. Otherwise the emitter will kick in, but you get mono, since that is what the app is rendering.
Have you tried contacting the guy who wrote CtrlAltStudio? david@ctrlaltstudio.com
He has a Windows and Mac build for the viewer.
"If you intend to use CtrlAltStudio Viewer code in your project then I encourage you to contact me and say “hi”. Perhaps I can provide further assistance or we can collaborate"
"The CtlrAltStudio Viewer is licensed under LGPL. It is based on the Firestorm viewer’s codebase under LGPL, which in turn is based on Linden Lab’s Project Snowstorm codebase under LGPL. Many other viewers also provide their code under LGPL, thus promoting sharing of each other’ work and meaning that code from them has also made its way into the Firestorm and Snowstorm code."
A link to his source code [url]http://ctrlaltstudioviewer.codeplex.com/[/url]
He has separation and depth sliders in CtrlAltStudio.
[url]http://ctrlaltstudio.com/blog/2013/06/20/introducing-the-ctrlaltstudio-viewer-second-life-and-opensim-in-stereoscopic-3d[/url]
Now that is very interesting!!! Big thanks D-Man11 ;))
You're welcome.
But the thanks should go to Dalek and his thread. If I hadn't learned of CtrlAltStudio there, I'd probably never have known of it.
[url]https://forums.geforce.com/default/topic/556297/?comment=3860011[/url]
Dalek wanted to use it with Second Life. There's a wiki page for Second Life third-party viewers that might interest you as well.
[url]http://wiki.secondlife.com/wiki/Third_Party_Viewer_Directory[/url]
I have not programmed 3D myself, but I wanted to at least understand Nvidia's definitions.
Separation (unitless) = Interaxial (unit) / Virtual screen width (unit).
Convergence (unit) = distance to the virtual screen (unit).
The virtual screen is pretty much defined by the convergence together with the left and right frusta.
Changing the convergence changes the virtual screen size, which changes the interaxial, since the separation is held constant.
If I understand it all correctly, it's pretty confusing.
Basically the focus lies on giving the player the desired separation, not on keeping a constant in-game separation between renders.
A very low convergence would mean more than the 6.5 cm human eye separation if the render distance were kept constant.
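A made-up numeric example of how I read those definitions (the 90 degree FOV, 1 m convergence and 0.0325 separation are arbitrary values, just to keep the arithmetic honest):
[code]
#include <cmath>
#include <cstdio>

int main()
{
    float convergence   = 1.0f;                        // distance to the virtual screen, in meters
    float horizontalFov = 90.0f * 3.14159265f / 180.0f;
    float separation    = 0.0325f;                     // dimensionless, per the definition above

    // The virtual screen width follows from the convergence and the frustum.
    float screenWidth = 2.0f * convergence * std::tan(horizontalFov * 0.5f);  // 2.0 m
    // The interaxial follows from the (constant) separation.
    float interaxial  = separation * screenWidth;                             // 0.065 m = 6.5 cm

    std::printf("virtual screen %.2f m, interaxial %.3f m\n", screenWidth, interaxial);
    // Halve the convergence and the virtual screen is only 1 m wide, so the same
    // separation now corresponds to an interaxial of 3.25 cm, which is the
    // confusing part: the "camera distance" is not a fixed quantity.
    return 0;
}
[/code]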
Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?
That is exactly what I want to understand as well... Now, to make things even more interesting: apparently if you set a projection matrix using glFrustum() the projection plane is actually the near plane... so the question is, does controlling the position of the near plane actually control the plane of zero parallax, and thus the convergence?
The more I read, the more confused I get :))
However, if I try to move the near plane around I distort the projection really badly... so there must be another catch.
But every image/schematic shows this:
ftp://ftp.sgi.com/opengl/contrib/kschwarz/GLUT_INTRO/SOURCE/PBOURKE/testing.gif
Weird...
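To make the question concrete, this is the kind of per-eye projection I am experimenting with (Paul Bourke style off-axis frustum; GLM is simply what I use here for the matrix math, and eyeSep/convergence are my own variable names):
[code]
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Off-axis (asymmetric frustum) projection for one eye. The zero-parallax plane
// ends up at 'convergence' because of the frustum shift plus the matching camera
// offset; the near plane itself stays wherever you put it.
glm::mat4 stereoFrustum(float fovY, float aspect, float zNear, float zFar,
                        float convergence, float eyeSep,
                        int eyeSign)                       // -1 = left eye, +1 = right eye
{
    float top    = zNear * std::tan(fovY * 0.5f);          // fovY in radians
    float bottom = -top;
    float shift  = 0.5f * eyeSep * (zNear / convergence);  // frustum shift at the near plane
    float left   = -aspect * top - eyeSign * shift;
    float right  =  aspect * top - eyeSign * shift;
    return glm::frustum(left, right, bottom, top, zNear, zFar);
    // The camera itself is then moved by eyeSign * eyeSep * 0.5 along its right
    // vector (in the lookAt call), which is what actually produces the parallax.
}
[/code]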
After more digging...I came to this:
http://www.sky.com/shop/export/sites/www.sky.com/shop/__PDF/3D/Basic_Principles_of_Stereoscopic_3D_v1.pdf
"When shooting parallel, the zero parallax point will be at infinity, giving every object negative parallax (all objects appear in front of the screen). Extra
time will be needed in post-production to adjust the convergence. Setting the cameras at normal human interocular distance can be tricky for close ups."
So if I am using (which I am) the so called Asymmetric Frustum way...apparently I get this (which I do) and there is no way to change the convergence... So I need the "toed-in" method? but that introduces alot of other problems....hmm
I wonder how nVidia is doing it...since based on their papers they seem to do the parallel way...but then..how do they adjust the convergence of the objects?
EDIT: Right...Convergence is also called focal length... adjusting this actually adjust the convergence;)) So I was right before...
"When shooting parallel, the zero parallax point will be at infinity, giving every object negative parallax (all objects appear in front of the screen). Extra
time will be needed in post-production to adjust the convergence. Setting the cameras at normal human interocular distance can be tricky for close ups."
So if I am using (which I am) the so called Asymmetric Frustum way...apparently I get this (which I do) and there is no way to change the convergence... So I need the "toed-in" method? but that introduces alot of other problems....hmm
I wonder how nVidia is doing it...since based on their papers they seem to do the parallel way...but then..how do they adjust the convergence of the objects?
EDIT: Right...Convergence is also called focal length... adjusting this actually adjust the convergence;)) So I was right before...
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
After reading the formulas I just confirmed that with 3D Vision Automatic the in-game eye separation increases if the convergence is lowered enough. This is quite different from movie 3D rigs, which place the cameras at normal human eye distance.
[quote="Flugan"]After Reading the Formulas I just confirmed that with 3D Vision automatic the in-game Eye-separation increases if convergence is lowered enough. This is quite different from Movie 3D rigs which places the camera at the distance between normal human Eyes.[/quote]
Yes I also noticed this the hard way. Keeping a fixed eye separation and lowering the convergence definitely gives you a strong headache:)) So the convergence "aka focal length" if directly tied with the eye-separation.
How to control the separation makes sense now.
You probably already ran across this, but this NVidia PDF is the best explanation I found.
[url]http://www.nvidia.com/content/GTC-2010/pdfs/2010_GTC2010.pdf[/url]
On the web itself there is some other confusing stuff, like people suggesting that toe-in is the right thing for stereoscopic rendering. It seems pretty clear to me that toe-in is completely wrong, not least because NVidia says it's wrong.
In that paper, they say that convergence is used only to set the depth of the image from closest to farthest, their 'parallax budget.' Separation is used to scale the parallax budget larger or smaller. We are accustomed to thinking of dialing up the convergence after the separation, but code-wise I think it's the other way around.
Also I think that convergence is only ever a positive number, and that mathematically a negative number is never found. I think this might be because it doesn't make sense to push the image in past the screen itself. Deeper than screen depth with nothing closer would be a very unusual case.
In NVidia automatic mode, they apply a footer to every shader that needs to be stereoized. The formula they use is:
Xnew = Xold - Separation * (W - Convergence)
This is the formula that uses the system wide convergence and separation numbers. I believe that the screen size is already taken into account before we get to this point.
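Written out, that footer is essentially one line; here it is as plain C++ so the math is explicit (the struct and function names are mine, not the driver's; in the driver this runs at the end of each vertex shader on the clip-space position):
[code]
struct Vec4 { float x, y, z, w; };

// Driver-style stereo correction applied to a clip-space position.
// eyeSign = -1 for the left eye, +1 for the right eye.
Vec4 stereoize(Vec4 clipPos, float separation, float convergence, float eyeSign)
{
    clipPos.x += eyeSign * separation * (clipPos.w - convergence);
    return clipPos;
}
[/code]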
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Yupp! That is exactly the paper I am following!
Indeed, a negative convergence has no real meaning... However, based on experimentation I found the following:
- You can recalculate the projection matrix using a negative value.
- You cannot set the cameras' (eyes') offset using a negative value.
To clarify: in the paper they create the projection from a standard mono perspective projection, but for the left and right clipping planes they take the asymmetric frustum into account. They then offset the projection matrices for the left and right eyes.
Since glFrustum only accepts the plane coordinates, I changed this to a more regular gluPerspective-style setup that takes the Y FOV into account, and instead of translating the projection matrix I set the cameras with gluLookAt and add the offset there.
So I can send a negative convergence to glFrustum (to create the projection), but I cannot use a negative convergence value when setting the cameras' offset. Makes sense, right?
However, if I do send a big negative value... I do push the objects further into the screen, but the projections start to look off.
So the correct and safe way is to bring the convergence down to about 0, which means all the objects are inside the screen. At that point, if you play with the eye separation, you get more or less of an "in-screen push".
In my GLSL shaders I haven't added any new calculations; they are basically the same shaders I use for mono rendering, and using them in stereo I see no problem. But perhaps I should do the same thing?
Here I am a bit confused... if my app is natively rendering in stereo, do I still need to add that footer to the shaders or not?
The paper says:
"
How to implement stereo projection?
Fully defined by mono projection and Separation & Convergence
Replace the perspective projection matrix by an offset perspective projection
- horizontal offset of Interaxial
- Negative for Right eye
- Positive for Left eye
[b]Or just before rasterization in the vertex shader, offset the clip position by the parallax amount (Nvidia 3D Vision driver solution)[/b]
clipPos.x += EyeSign * Separation * ( clipPos.w - Convergence )
EyeSign = +1 for right, -1 for left
"
Currently I am controlling the separation and convergence (focal length) in the app, and on each frame I set up the projections and camera positions for each eye (since my camera is not fixed).
The only place where I take the screen size into consideration is when setting the projection matrices, and that is the aspect ratio... (not exactly the screen size). Hmm... I will need to look into this a bit more.
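As I read the paper, the clip-space footer is the alternative to the offset projection (that is what the "Or" means), so with per-eye matrices there should be no need to also patch the shaders. For reference, the per-frame flow I am using looks roughly like this (a sketch only, with the usual GL headers assumed: GLM for the math, renderScene() standing in for my real draw call, and stereoFrustum() being the helper from my earlier post):
[code]
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void renderStereo(const glm::vec3& eye, const glm::vec3& target, const glm::vec3& up,
                  float fovY, float aspect, float zNear, float zFar,
                  float convergence, float eyeSep)
{
    glm::vec3 forward = glm::normalize(target - eye);
    glm::vec3 right   = glm::normalize(glm::cross(forward, up));

    for (int eyeSign = -1; eyeSign <= 1; eyeSign += 2)            // -1 = left, +1 = right
    {
        glDrawBuffer(eyeSign < 0 ? GL_BACK_LEFT : GL_BACK_RIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glm::mat4 proj = stereoFrustum(fovY, aspect, zNear, zFar,
                                       convergence, eyeSep, eyeSign);
        glm::vec3 offset = right * (0.5f * eyeSep * float(eyeSign));
        // Offset both the eye and the target by the same amount so the two view
        // directions stay parallel (no toe-in); the asymmetric frustum is what
        // brings the zero-parallax plane to the convergence distance.
        glm::mat4 view = glm::lookAt(eye + offset, target + offset, up);

        renderScene(proj, view);                                   // app-specific draw call
    }
}
[/code]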
[quote]No, for some time now 3D Vision (and not PRO) has been available for OpenGL under Windows (starting with driver 310.xx).[/quote]
I know that Doom 3 BFG is working with 3D Vision, but I thought it was an exception.