On a side topic... Does anyone know a way to capture stereo 3D images in OpenGL apps? ALT+F1 doesn't work (since I am not rendering using nVidia Autostereo).
Fraps seems to capture only one eye....
It would be nice if I could share a couple of pictures and let you guys tell me if you find them right or wrong...
1x Palit RTX 2080Ti Pro Gaming OC (watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
3D Vision Blog has a 2009 article on Fraps, and it mentions that Surround isn't supported. Did you get Surround working with 3D using OpenGL? The Fraps support page states OpenGL support but doesn't mention whether it does so in 3D. You should use their contact page.
What about MSI Afterburner? I know it has capture capabilities.
I found a post on how you can capture a screenshot as long as it's your own app.
http://www.opengl.org/discussion_boards/showthread.php/124228-stereoscopic-screenshot
Here's an Nvidia PDF discussing OpenGL video capture on a Quadro; it might have useful information if you want video capture. It pertains to their SDI cards used in broadcast:
"Both the capture device and the GPU must be programmatically configured using the combination of NVIDIA I/O API and OpenGL capture extension (NvAPI with GL/WGL extension on windows, and NVCtrl with GL/GLX extension on Linux)."
3D Dizzy has Doom 3 BFG stereo videos on YouTube, but I think it was PS3 footage captured with an inline HDMI capture device.
[quote="D-Man11"]3D Vision Blog has a 2009 article on Fraps, and it mentions that Surround isn't supported. Did you get Surround working with 3D using OpenGL? The Fraps support page states OpenGL support but doesn't mention whether it does so in 3D. You should use their contact page.
What about MSI Afterburner? I know it has capture capabilities.
[/quote]
Big thanks for the article!!
Fraps DOES work in 3D Surround on DX9-DX11; I can capture videos just fine. Screenshots, however, are always 2D in DX9-11 and OpenGL, both in Surround and on a single monitor... hmmm.
I haven't tried Afterburner yet, but I will give it a try...
As for 3D Surround in OpenGL: it is not working under Windows :| However, it does work under Linux :))) (but then again, I am using the nvstusb lib and the Stereo3D controller to drive the glasses and the pyramid).
Last I checked, nVidia hasn't implemented quad buffering in Surround under Linux. Every time I query OpenGL with GL_STEREO in Surround, I get a "pixel format not supported".
I think I will send a bug report to nVidia on this ;)) They probably haven't even thought about it hahah :))
For a screenshot, I think I can also dump the contents of the left and right buffers to a texture file and manually stitch the L and R images into one to view it...
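That plan (grab each eye's back buffer, then stitch L and R into one image) is mostly plain buffer copying: the GL readback itself is just glReadBuffer(GL_BACK_LEFT) / glReadBuffer(GL_BACK_RIGHT) followed by glReadPixels. A minimal sketch of the stitching step, assuming RGB8 readback (the function name and the side-by-side layout are my own choices):

```c
#include <stdlib.h>
#include <string.h>

/* Stitch two w x h RGB8 eye buffers into one 2w x h side-by-side image.
   The eye buffers would come from something like:
       glReadBuffer(GL_BACK_LEFT);
       glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, left);
   and the same with GL_BACK_RIGHT for the right eye.
   Caller frees the returned buffer. */
unsigned char *stitch_side_by_side(const unsigned char *left,
                                   const unsigned char *right,
                                   int w, int h)
{
    unsigned char *out = malloc((size_t)2 * w * h * 3);
    if (!out) return NULL;
    for (int y = 0; y < h; y++) {
        /* left eye fills the left half of each row, right eye the right half */
        memcpy(out + (size_t)y * 2 * w * 3,
               left + (size_t)y * w * 3, (size_t)w * 3);
        memcpy(out + (size_t)y * 2 * w * 3 + (size_t)w * 3,
               right + (size_t)y * w * 3, (size_t)w * 3);
    }
    return out;
}
```

The result can be written out as a PPM or PNG and viewed in any side-by-side stereo viewer.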
I was just about to suggest implementing an internal screenshot button, as I assumed that is possible, but you just covered that option. I had some hopes for Fraps, considering it can capture 3D video, but apparently not in Doom 3 BFG (OpenGL).
Thanks to everybody using my assembler; it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?
[quote="helifax"]I think I will send a bug report to nVidia on this;)) Probably they haven't even thought about it hahah:))[/quote]
You should also request that they implement it in ShadowPlay while they are still hard at work on a final release version.
[quote="helifax"]For a screenshot I think I can also dump the content of the left and right buffer to a texture file and manually stitch the l and r images into one to view it...[/quote]
So the info at this link did not work?
[url]http://www.opengl.org/discussion_boards/showthread.php/124228-stereoscopic-screenshot[/url]
Also, contacting Fraps support might be a good idea.
[url]http://www.fraps.com/contact.php[/url]
[quote="D-Man11"][quote="helifax"]I think I will send a bug report to nVidia on this;)) Probably they haven't even thought about it hahah:))[/quote]
You should also request that they implement it in shadowplay while they are still hard at work on a final release version.
[quote="helifax"]For a screenshot I think I can also dump the content of the left and right buffer to a texture file and manually stitch the l and r images into one to view it...[/quote]
So the info at this link did not work?
[url]http://www.opengl.org/discussion_boards/showthread.php/124228-stereoscopic-screenshot[/url]
Also contacting fraps support might be a good idea.
[url]http://www.fraps.com/contact.php[/url]
[/quote]
That info is exactly what I think I will do ;)) dump the contents to a texture file ;)) one for each eye :D
Yes, I will also put in a word with them about ShadowPlay... yet I cannot test it on a GTX 590 :( I don't have the hardware H.264 encoder... :(
OpenGL uses a right-handed coordinate system, and DirectX uses a left-handed coordinate system. I think you are already taking that into account, but just to double-check, since I don't understand a couple of comments.
[quote="helifax"]1. Convergence works OK between +0 and +whatever, but at negative values all the elements are pushed back into the screen but incorrectly: The closer elements are pushed more than the farther elements. Comparing with the AUTO 3D of nvidia it seems this doesn't happen to them.
Why is that? Do I need other projection matrixes? nVidia said to use just one (the mono one just translated on the X with +- Asymmetric Frustum)[/quote]This seems right to me, and it's what I'd expect if you use negative convergence. It's the corollary of positive convergence, where if we push it really hard we get a toyified effect because things are stretched.
For the negative convergence case, if you think of the triangular shape of the view, you are flattening the triangle when pushing convergence into the screen. The up-close items will be moved more than distant items because of the shape of the triangle, so I would expect up-close items to be affected more than distant ones.
I don't think you can adjust the convergence into the screen this way without also adjusting the separation at the same time, to stretch that triangle back to its original shape.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 Latest 3Dmigoto Release Bo3b's School for ShaderHackers
[quote="helifax"]In my glsl shaders I haven't added any new calculations. Basically are the same shaders that I use for mono rendering. While using them in stereo I see no problem with them. But perhaps I should do the same thing?
Here I am a bit confused... If my app is natively rendering in stereo do I still need to add that footer to the shaders? or not? [/quote]No, I think you already have it right. I brought up the footer just as an example of why and how it works in NVidia Automatic mode. That automatic mode is why we get 3D in all the old games that were never designed for it. In newer games, or in your case, the control is explicit instead of automatic: you essentially take over what Ctrl-F3/F4 and Ctrl-F5/F6 do, using your UI instead of NVidia's.
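For context, the footer that Automatic mode appends is usually described as a single clip-space correction of the form x' = x + eye * separation * (w - convergence), so a vertex whose clip-space depth w equals the convergence distance gets zero parallax. A sketch of that formula (hedged: this is the commonly cited form from NVidia's documentation, not something pulled from driver code):

```c
/* Per-eye clip-space X correction as applied by NVidia Automatic mode:
       x' = x + eye * separation * (w - convergence)
   eye = -1 for left, +1 for right; w is the clip-space depth.
   At w == convergence the two eyes agree (zero parallax at the screen). */
double stereo_correct_x(double x, double w,
                        double separation, double convergence, int eye)
{
    return x + eye * separation * (w - convergence);
}
```

An app rendering natively in stereo already builds per-eye projections, so it never needs this fix-up; it only exists so the driver can stereoize shaders that were written for mono.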
[quote="bo3b"]OpenGL uses a right hand coordinate system, and DirectX uses a left hand coordinate system. I think you are already taking that into account, but just to double check, since I don't understand a couple of comments.
[quote="helifax"]1. Convergence works OK between +0 and +whatever, but at negative values all the elements are pushed back into the screen but incorrectly: The closer elements are pushed more than the farther elements. Comparing with the AUTO 3D of nvidia it seems this doesn't happen to them.
Why is that? Do I need other projection matrixes? nVidia said to use just one (the mono one just translated on the X with +- Asymmetric Frustum)[/quote]This seems right to me, and what I'd expect if you use negative convergence. It's the corollary for positive convergence, where if we push it really hard we get a toyified effect because things are stretched.
For the negative convergence case, if you think of the triangular shape of the view, then you are flattening the triangle when pushing convergence into the screen. The up close items will be moved more than distant items because of the shape of the triangle, so I would expect up close items to be impacted more than distant ones.
I don't think you can adjust the convergence into the screen this way, without also adjusting the separation at the same time, to stretch that triangle back to it's original shape.
[/quote]
Yes, I am using the correct matrix convention for OpenGL (right-handed, column-major).
I also managed to find where the problem lies. Of course it was in my code, and here is where I got confused.
In my app I am not using the standard glFrustum. Basically, I have my own camera class and I construct the projection matrix based on the same rules. I also use an external custom lib for all the maths involved. The problem was with the pointer to the matrix and its indices.
I had assumed it indexes the elements per row, when in fact it was indexing the matrix elements per column...
I was writing:
M[0] M[1] M[2] M[3]
..........
When I should have written:
M[0] M[4] M[8] M[12]
From there, you can imagine everything broke :)) I noticed the projection matrix wasn't working correctly when I tried to do frustum culling and the planes weren't the correct ones :)) Of course, as always, the problem lies where you least expect it :)))
So, back to the topic: yes, for negative convergence I also need to adjust the separation... Then again, a negative convergence value doesn't make a lot of sense :))
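That indexing mix-up is worth pinning down: OpenGL-style matrices are column-major, so element (row, col) lives at m[col * 4 + row], and the first row of the matrix really is M[0] M[4] M[8] M[12]. A tiny sketch (the helper names are mine):

```c
/* Column-major 4x4: element (row, col) lives at m[col * 4 + row].
   So the first ROW is m[0], m[4], m[8], m[12],
   while the first COLUMN is m[0], m[1], m[2], m[3]. */
static double mat4_get(const double *m, int row, int col)
{
    return m[col * 4 + row];
}

/* Example: a column-major translation matrix keeps tx,ty,tz at
   indices 12, 13, 14 (the fourth column), not 3, 7, 11. */
void mat4_translation(double *m, double tx, double ty, double tz)
{
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0 : 0.0;  /* identity: diagonal at 0,5,10,15 */
    m[12] = tx;
    m[13] = ty;
    m[14] = tz;
}
```

Mixing this up with row-major indexing silently produces the transpose, which is exactly why frustum planes extracted from the matrix come out wrong.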
Hey, great! Glad you found the bug.
I wouldn't worry about negative convergence unless you have something specific in mind, like an effect, or an experiment.
But, I kind of lost track of where we are. I'm not sure I'm helping with the understanding here, I'm not expert enough. But if you have more questions, I'm happy to give them a try.
[quote="bo3b"]Hey, great! Glad you found the bug.
I wouldn't worry about negative convergence unless you have something specific in mind, like an effect, or an experiment.
But, I kind of lost track of where we are. I'm not sure I'm helping with the understanding here, I'm not expert enough. But if you have more questions, I'm happy to give them a try.[/quote]
Yes, I ditched the negative convergence plan (it was just an experiment, actually, since I wanted to see how it works and what would happen ;)) )
Currently the stereo rendering is working correctly (from my point of view) and I can manually and separately adjust the convergence (+ value) and separation, which is what I wanted in the first place ;))
The app itself is not completely done yet, as it is still a work in progress, but after I finish it (in around one to two weeks) I will post a link here.
It will not be AAA-material stuff (meaning no fancy shaders and such) because its focus is terrain generation (from heightmaps) using Continuous Dynamic Level of Detail as a technique, and as a side note I also tossed in Instance Cloud Reduction (instance culling using geometry shaders) for some of the terrain vegetation (aka trees).
The beauty of it is that everything is instanced rendering (the terrain + trees).
The stereo 3D rendering is just the way I want to visualize it ;)) and it makes no sense to make a stereo 3D app without being able to visualize anything hahaha :))
Currently everything is done with forward rendering, although in the future I would love to pump everything through my deferred rendering pipeline (which I have done in a different project).
If I have any other questions I will definitely post them ;)) Thank you all for the help ;))
I just saw that OpenGL 4.4 was announced at SIGGRAPH last month, and Nvidia released a beta driver to coincide with the announcement the same day.
https://developer.nvidia.com/content/opengl-44-announced-nvidia-releases-beta-drivers
http://images.anandtech.com/doci/7161/OGL1.png
[quote="D-Man11"]I just saw that OpenGL 4.4 was announced at SIGGRAPH last month, and Nvidia released a beta driver to coincide with the announcement the same day.
https://developer.nvidia.com/content/opengl-44-announced-nvidia-releases-beta-drivers
http://images.anandtech.com/doci/7161/OGL1.png[/quote]
Very nice!! Thank you very much, D-Man11, for letting us know ;)) I will definitely look at it in the future :)
[quote="bo3b"]OpenGL uses a right hand coordinate system, and DirectX uses a left hand coordinate system.[/quote]
I saw the following post and thought it might interest you if you were not aware of it, bo3b; perhaps on a future project.
Hi,
I'm looking to port some code from OpenGL (right handed) to DirectX (left handed) and have a couple of questions for anyone out there who has done this already:
Q: In my OpenGL impl. I have separate World/View matrices (anticipating the move to DirectX). Do I have to do anything special with the OpenGL model/view matrices to make them work in DirectX (i.e. do I have to transpose them etc)? Or can I just scale my OpenGL matrices in Z by -1?
Q: Loading the geometry, is the winding order different? If I use my GL geometry as is, will the normals be pointing in the wrong direction?
Many thanks in advance,
Ben.
Ben, you can use DirectX with right-handed coordinate systems if you wish. Look at D3DXMatrixPerspectiveRH, D3DXMatrixOrthoRH, and D3DXMatrixLookAtRH. Depending upon the size, complexity, and need to maintain this code in the future, this could be an option for you.
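The "scale Z by -1" option in that exchange can be made concrete: mirroring Z flips handedness, which is also why the winding order has to be reversed (or the cull mode swapped) afterwards. A sketch for a column-major 4x4, post-multiplying by diag(1, 1, -1, 1) (function names are mine):

```c
/* One common OpenGL (RH) -> Direct3D (LH) conversion: mirror Z.
   Post-multiplying a matrix M by S = diag(1, 1, -1, 1) negates the
   incoming z coordinate. Because a mirror flips handedness, triangle
   winding must also be reversed (or the cull mode swapped). */

/* For a point, the mirror is simply negating z. */
void flip_handedness_point(double p[3])
{
    p[2] = -p[2];
}

/* Column-major 4x4: M * diag(1,1,-1,1) negates the third column,
   i.e. indices 8..11. */
void flip_handedness_matrix(double m[16])
{
    for (int i = 8; i < 12; i++)
        m[i] = -m[i];
}
```

As the reply notes, though, simply building the DirectX matrices with the ...RH helpers avoids the conversion entirely.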
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
What about MSI Afterburner, I know it has capture capabilities.
http://www.opengl.org/discussion_boards/showthread.php/124228-stereoscopic-screenshot
Here's a Nvidia PDF talking about OpenGl video capture on a Quadro, perhaps it might have useful information if you want video capture. It pertains to their SDI cards used in Broadcast
"Both the capture device and the GPU must be programmatically configured using the combination of NVIDIA I/O API and OpenGL capture extension (NvAPI with GL/WGL extension on windows, and NVCtrl with GL/GLX extension on Linux)."
3D Dizzy has Doom 3 BFG stereo videos on youtube, but I think it was PS3 footage using a HDMI in-line capture device.
Big Thanks for the article!!
Fraps DOES work in 3D Surround on DX9-DX11. I can capture videos just fine. Screenshots are always 2D in DX9-11 and OpenGL both Surround and single monitor...hmmm
I haven't tried Afterburner yet..but I will try with it...
As for 3D Surround in OpenGL under Windows is not working :| However it does work under Linux:))) (but then again I am using nvstusb lib and stereo3D controller to use the glasses and the pyramid)
Last I checked nVidia hasn't implemented quad buffering in Surround under Linux. If I query opengl with GL_STEREO every time I get a "pixel format not supported" in Surround.
I think I will send a bug report to nVidia on this;)) Probably they haven't even thought about it hahah:))
For a screenshot I think I can also dump the content of the left and right buffer to a texture file and manually stitch the l and r images into one to view it...
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?
donations: ulfjalmbrant@hotmail.com
You should also request that they implement it in shadowplay while they are still hard at work on a final release version.
So the info at this link did not work?
http://www.opengl.org/discussion_boards/showthread.php/124228-stereoscopic-screenshot
Also contacting fraps support might be a good idea.
http://www.fraps.com/contact.php
That info is exactly what I think I will do;)) dump the content to a texture file;)) one for each eye:D
Yes, I will also put in a word with them about shadowplay..yet I cannot test it on a GTX590 :( don't have the hardware h.264 encoder..:(
This seems right to me, and it's what I'd expect if you use negative convergence. It's the counterpart of positive convergence, where if we push it really hard we get a toyified effect because things are stretched.
For the negative convergence case, if you think of the triangular shape of the view, then you are flattening the triangle when pushing convergence into the screen. Up-close items will be moved more than distant items because of the shape of the triangle, so I would expect up-close items to be impacted more than distant ones.
I don't think you can adjust the convergence into the screen this way without also adjusting the separation at the same time, to stretch that triangle back to its original shape.
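A rough way to see that triangle intuition numerically: in the usual driver model, the screen parallax of a point works out to separation * (1 - convergence / depth). (That formula is my assumption of the model being discussed here, not something stated above.) Points at the convergence depth have zero parallax, nearer points swing negative very quickly, and distant points flatten out toward the full separation:

```c
/* Screen-space parallax in the (assumed) driver model:
 *   parallax = separation * (1 - convergence / depth)
 * Points at the convergence depth land exactly on the screen plane
 * (zero parallax); nearer points go negative (pop out) and the
 * magnitude grows fast, while distant points saturate toward
 * +separation, matching the flattened-triangle intuition. */
double parallax(double separation, double convergence, double depth)
{
    return separation * (1.0 - convergence / depth);
}
```

For example, with separation 1.0 and convergence at depth 10.0, a point at depth 5.0 gets parallax -1.0 (strongly popped out), while a point at depth 1000.0 only reaches 0.99, so near objects are impacted far more than distant ones.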
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Yes, I am using the correct matrix system for OpenGL (right-handed, column-major).
And I also managed to find out where the problem lies. Of course it was in my code, and here is where I got confused.
In my app I am not using the standard glFrustum. Basically I have my own camera class and I construct the projection matrix based on the same rules. I also use an external custom lib for all the maths involved. The problem was with the pointer to the matrix and its indices.
I had assumed it indexed the elements per row, when in fact it indexed the matrix elements per column...
I was writing:
M[0] M[1] M[2] M[3]
..........
When I should have written:
M[0] M[4] M[8] M[12]
From there, you can imagine everything broke :)) I noticed the projection matrix wasn't working correctly when I tried to do frustum culling and the extracted planes weren't the correct ones :)) Of course, as always, the problem lies where you least expect it :)))
So, back to the topic: yes, for negative convergence I also need to adjust the separation... Then again, a negative convergence value doesn't make a lot of sense :))
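For anyone hitting the same indexing trap, here is a small sketch of a glFrustum-style projection matrix built directly in the column-major layout OpenGL expects. Note how the (r+l)/(r-l)-type terms land at M[8..11] and M[14], not in the last entry of each "row" as a row-major reading would suggest:

```c
#include <string.h>

/* Build a glFrustum-style projection matrix in the column-major
 * layout OpenGL expects: element (row r, col c) lives at M[c*4 + r].
 * So the third-column terms sit at M[8..11], and the depth offset
 * -2fn/(f-n) sits at M[14] (row 2, col 3). */
void frustum(float M[16], float l, float r, float b, float t, float n, float f)
{
    memset(M, 0, 16 * sizeof(float));
    M[0]  = 2.0f * n / (r - l);        /* row 0, col 0 */
    M[5]  = 2.0f * n / (t - b);        /* row 1, col 1 */
    M[8]  = (r + l) / (r - l);         /* row 0, col 2 */
    M[9]  = (t + b) / (t - b);         /* row 1, col 2 */
    M[10] = -(f + n) / (f - n);        /* row 2, col 2 */
    M[11] = -1.0f;                     /* row 3, col 2 */
    M[14] = -2.0f * f * n / (f - n);   /* row 2, col 3 */
}
```

With the row-major assumption, the -1 would wrongly end up at M[14] and the depth offset at M[11], which is exactly the kind of breakage that shows up first in frustum-plane extraction.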
I wouldn't worry about negative convergence unless you have something specific in mind, like an effect, or an experiment.
But, I kind of lost track of where we are. I'm not sure I'm helping with the understanding here, I'm not expert enough. But if you have more questions, I'm happy to give them a try.
Yes, I ditched the negative convergence plan (it was just an experiment, actually, since I wanted to see how it works and what would happen ;)) )
Currently the stereo rendering is working correctly (from my point of view) and I can manually and separately adjust the convergence (positive values) and separation, which is what I wanted in the first place ;))
The app itself is not completely done yet, as it's still a work in progress, but after I finish it (around one to two weeks) I will post a link here.
It will not be AAA-material stuff (meaning no fancy shaders and such) because its focus is terrain generation (from heightmaps) using Continuous Dynamic Level of Detail as a technique, and as a side note I also tossed in Instance Cloud Reduction (instance culling using geometry shaders) for some of the terrain vegetation (aka trees).
The beauty of it is that everything is instanced rendering (the terrain + trees).
The stereo 3D rendering is just the way I want to visualize it ;)) and it makes no sense to make a stereo 3D app without being able to visualize anything hahaha :))
Currently everything is done in forward rendering, although in the future I would love to pump everything through my deferred rendering pipeline (which I have done in a different project).
If I have any other questions I will defo post them ;)) Thank you all for the help ;))
https://developer.nvidia.com/content/opengl-44-announced-nvidia-releases-beta-drivers
http://images.anandtech.com/doci/7161/OGL1.png
Very nice!! Thank you very much, D-Man11, for letting us know ;)) I will defo look at it in the future :)
I saw the following post and thought it might interest you if you were not aware of it, bo3b; perhaps for a future project.
Hi,
I'm looking to port some code from OpenGL (right handed) to DirectX
(left handed) and have a couple of questions for anyone out there who
has done this already:
Q: In my OpenGL impl. I have separate World/View matrices
(anticipating the move to DirectX). Do I have to do anything special
with the OpenGL model/view matrices to make them work in DirectX (i.e.
do I have to transpose them etc)? Or can I just scale my OpenGL
matrices in Z by -1?
Q: Loading the geometry, is the winding order different? If I use my
GL geometry as is will the normals be pointing in the wrong direction?
Many thanks in advance,
Ben.
Ben, you can use DirectX with right handed coordinate systems if you wish.
Look at D3DXMatrixPerspectiveRH, D3DXMatrixOrthoRH, and D3DXMatrixLookAtRH.
Depending upon the size, complexity, and need to maintain this code in the future, this could be an option for you.