Stereo technique implemented in the 3D vision laptop (GeForce)
Hi,
I have a question that has confused me for a long time.
I want to buy a 3D Vision laptop with a GeForce card.
However, I am not clear about what kind of stereo technique is implemented in the laptop.
I think there are two kinds of stereo techniques: quad-buffered stereo and interlaced stereo.
Since the 3D Vision laptop uses a GeForce GPU, I think it cannot use quad-buffered stereo.
So it should use interlaced stereo.
However, for interlaced stereo the monitor should have a polarizing filter and the glasses are passive. So why does the 3D Vision kit for the laptop use active shutter glasses?

Another question: if I use the 3D Vision laptop, can I implement my own stereo application using OpenGL?

Thanks a lot!

YL

#1
Posted 02/24/2015 01:37 AM   
I think it's frame-sequential 3D? At least that's what sView says when 3D Vision is working.

Model: Clevo P570WM Laptop
GPU: GeForce GTX 980M ~8GB GDDR5
CPU: Intel Core i7-4960X CPU +4.2GHz (12 CPUs)
Memory: 32GB Corsair Vengeance DDR3L 1600MHz, 4x8gb
OS: Microsoft Windows 7 Ultimate

#2
Posted 02/24/2015 03:13 AM   
Thanks for the reply.
I think you are right. Frame-sequential (active) stereo is used with the GeForce card.
A further question: can I use OpenGL to implement frame-sequential stereo on a GeForce card?
As far as I know, a Quadro card can do this, but I am not sure whether a GeForce card can.

#3
Posted 02/24/2015 06:25 AM   
Laptops using Nvidia-certified passive displays are supported via "Optimized for Nvidia GeForce".
It uses the same Nvidia stereoscopic driver architecture as any other display, albeit in an interlaced format.

http://www.nvidia.com/object/optimized-for-geforce-3d-overview.html


Laptops with 120 Hz displays are supported via Nvidia's 3D Vision:

http://www.nvidia.com/object/3d-vision-main.html


Note: It is possible to use Nvidia's stereoscopic driver with a laptop connected to an external display, but it does require that the video output is routed directly out of the Nvidia GPU.


https://forums.geforce.com/default/topic/571045/?comment=3883815

#4
Posted 02/24/2015 12:26 PM   
[quote="enjoy3DVision"]A further question is can I use OpenGL to implement the frame sequential stereo on GeForce card?[/quote] yes, it's possible. Depends on what you're trying to do.
enjoy3DVision said: A further question: can I use OpenGL to implement frame-sequential stereo on a GeForce card?


Yes, it's possible. It depends on what you're trying to do.

#5
Posted 02/24/2015 12:35 PM   
Also, you are confusing different ideas there. Interlaced stereo, passive stereo, and frame-sequential are all about the hardware that displays the 3D.

Quad-buffered stereo is a software layer for producing the stereo images. OpenGL can definitely do this. DirectX can do this as well, but it isn't called quad-buffered there; it's the swap chain.

The way to think of it is that the 3D pipeline creates a stereo image in 4 buffers: left/right for the front buffer and left/right for the back buffer. It alternates between front and back to show you successive frames. Left and right can be routed to anything you want: passive, side-by-side, frame-sequential, over/under, interlaced, checkerboard, anaglyph.
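
As a minimal sketch of that routing idea (just an illustration, not driver code; drawScene(), LEFT_EYE/RIGHT_EYE and swapBuffers() are assumed helpers, with drawScene() setting up the per-eye camera), anaglyph output needs nothing more than glColorMask on a single ordinary back buffer:

// Left view into the red channel, right view into green+blue (cyan) of the
// same back buffer - no quad buffering or 120 Hz display needed for this one.
void drawAnaglyphFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);   // red only
    drawScene(LEFT_EYE);

    glClear(GL_DEPTH_BUFFER_BIT);   // so the right eye isn't depth-blocked by the left

    glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);    // green + blue only
    drawScene(RIGHT_EYE);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     // restore full color writes
    swapBuffers();
}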

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#6
Posted 02/24/2015 02:17 PM   
Pretty sure you'll need to look into this to get OpenGL working in S3D, as 3D Vision Automatic doesn't directly support OpenGL.

https://forums.geforce.com/default/topic/809612/3d-vision/-ogl-3d-vision-wrapper-is-now-open-source-/
#7
Posted 02/24/2015 04:33 PM   
[quote="bo3b"]The way to think of it is that the 3D pipeline creates a stereo image in 4 buffers. Left/right for front buffer and then l/r for back buffer. It alternates between front and back to show you the next frames. Left and right can be routed to anything you want- passive, side-by-side, frame-sequential, over/under, interlaced, checkboard, anaglyph.[/quote] Thanks for your clarification. 1. For the software layer side, can I use OpenGL to implement the stereo formats you mentioned above? If yes, would you please guide me a website or book talking about the implementation? 2. Does the implementation rely on specific GPU? For instance, if the displaying hardware is frame-sequential, and the GPU is GeForce nor Quadro, can I implement the frame-sequential format? Thanks in advance!
bo3b said: The way to think of it is that the 3D pipeline creates a stereo image in 4 buffers: left/right for the front buffer and left/right for the back buffer. It alternates between front and back to show you successive frames. Left and right can be routed to anything you want: passive, side-by-side, frame-sequential, over/under, interlaced, checkerboard, anaglyph.


Thanks for your clarification.

1. On the software side, can I use OpenGL to implement the stereo formats you mentioned above?
If yes, could you point me to a website or book that covers the implementation?

2. Does the implementation rely on a specific GPU? For instance, if the display hardware is frame-sequential and the GPU is a GeForce rather than a Quadro, can I implement the frame-sequential format?

Thanks in advance!

#8
Posted 02/24/2015 06:38 PM   
You can implement frame-sequential stereo directly, and I think this should be possible while using OpenGL as the render engine.

You would be using 3D Vision in that case, but in Direct Mode instead of Automatic Mode. (Automatic Mode is a way to automatically make games work in 3D; since you are building the rendering yourself, there is no need for it here.)

In Direct Mode, you call the driver directly to set up the buffers and manage the left/right eye switches.
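
For reference, a very rough sketch of what that Direct Mode call sequence looks like through nvapi. This is the D3D-flavoured path (the stereo handle is created from a D3D device), so treat it as an illustration of the flow rather than a drop-in OpenGL recipe; d3dDevice, swapChain, renderEye() and LEFT/RIGHT are assumed placeholders.

#include "nvapi.h"

// Once, at startup - the driver mode must be set BEFORE the D3D device is created.
NvAPI_Initialize();
NvAPI_Stereo_SetDriverMode(NVAPI_STEREO_DRIVER_MODE_DIRECT);

// ... create the D3D device and swap chain here ...

StereoHandle stereo = NULL;
NvAPI_Stereo_CreateHandleFromIUnknown(d3dDevice, &stereo);
NvAPI_Stereo_Activate(stereo);

// Per frame: tell the driver which eye the following draw calls belong to.
NvAPI_Stereo_SetActiveEye(stereo, NVAPI_STEREO_EYE_LEFT);
renderEye(LEFT);
NvAPI_Stereo_SetActiveEye(stereo, NVAPI_STEREO_EYE_RIGHT);
renderEye(RIGHT);
swapChain->Present(1, 0);   // D3D11-style present; the driver pairs the eyes and drives the glasses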

A reasonable white paper about the variants:

http://www.nvidia.com/docs/IO/40505/WP-05482-001_v01-final.pdf


I'm not certain, because I've not used OpenGL, but I think that the quad-buffered stereo you normally produce with OpenGL will natively go to stereo on their Quadro cards with the Pro version. This is different from the consumer version of 3D Vision.

Now, if you want to target the consumer 3D Vision, you'd be doing this mapping from quad-buffered stereo to the output yourself, using nvapi. This is certainly doable, but don't expect it to be a slam dunk, as you'd need to write the mapping layer that they already did.

I don't think the 'Pro' aspect means that it's harder or anything; that's just their way of marketing it to Pro users, and it allows them to jack up the price.


Another thing worth noting that might give you that layer for free: try Helifax's OpenGL wrapper. This converts some games written for OpenGL to use 3D Vision, and is roughly what you are suggesting.

https://forums.geforce.com/default/topic/682130/3d-vision/-opengl-3d-vision-wrapper-enabling-3d-vision-in-opengl-apps/1/

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#9
Posted 02/25/2015 10:06 AM   
Nvidia Quadro cards and 3D Vision Pro support 3D for OpenGL.

Oops... sorry, bo3b already said this before.

#10
Posted 02/25/2015 10:43 AM   
I read the paper mentioned by bo3b. I am not clear about how to synchronize the glasses with the display on a GeForce, because the example in the paper is quad-buffered stereo on a Quadro GPU.

On a GeForce, let's say we render the left-eye image into the back buffer and then swap it to the front for display. The problem is: after I call swapbuffer(), how do I make the display synchronize with the glasses? I think this synchronization functionality does not belong to OpenGL. Should I look into nvapi?

#11
Posted 02/26/2015 07:59 PM   
[quote="enjoy3DVision"]I read the paper mentioned by bo3b. I am not clear about how to synchronize the glass with the display on GeForce because the example in the paper is quad buffered stereo on Quadro GPU. On GeForce, let’s say we render a left eye image in the back buffer and then swap it to the front for display. The problem is after I call swapbuffer(), how to make the display synchronize with the glass. I think this synchronization functionality does not belong to OpenGL. Should I look into nvapi?[/quote] My friend if you are developing your OWN app in OGL then the answer is simple... [code] glGetBooleanv(GL_STEREO, &g_valid3D); if (g_valid3D) { glEnable(GL_STEREO); } .... // Activate the left back buffer glDrawBuffer(GL_BACK_LEFT); // Do drawing of LEFT frame here... // Activate the right back buffer glDrawBuffer(GL_BACK_RIGHT); // Do drawing of RIGHT frame here... // Swap the left and right buffer swapBuffers(); [/code] This is the so-called quad-buffering technique ( front+back buffer for left+ right eye = 4 buffers). This is supported on GTX cards... Take into account that you need to create 2 Cameras with 2 perspectives (one for each eye) when rendering left+ right eyes.... The rest you don't need to worry about as the driver will sync the screen with the glasses, etc... It's just easy as that. If you want frame sequential (for some weird reason) the 3D Vision driver doesn't support that on either GTX or QUADRO cards... Don't believe me? See this post: https://forums.geforce.com/default/topic/572432/-test-request-stereo-3d-opengl-application/ Get the app (written by me using the approach described) and see if it works on your GTX card + 3D Vision:)
enjoy3DVision said: I read the paper mentioned by bo3b. I am not clear about how to synchronize the glasses with the display on a GeForce, because the example in the paper is quad-buffered stereo on a Quadro GPU. On a GeForce, let's say we render the left-eye image into the back buffer and then swap it to the front for display. The problem is: after I call swapbuffer(), how do I make the display synchronize with the glasses? I think this synchronization functionality does not belong to OpenGL. Should I look into nvapi?


My friend, if you are developing your OWN app in OGL, then the answer is simple...

// NOTE: the context must be created with a stereo-capable pixel format
// (e.g. PFD_STEREO with wgl, or GLUT_STEREO with glutInitDisplayMode).
// There is no glEnable() switch for stereo; you can only query whether you got it:
GLboolean g_valid3D = GL_FALSE;
glGetBooleanv(GL_STEREO, &g_valid3D);
if (!g_valid3D)
{
    // no quad-buffered stereo available - fall back to mono rendering
}
....
// Activate the left back buffer
glDrawBuffer(GL_BACK_LEFT);
// Do drawing of LEFT frame here...

// Activate the right back buffer
glDrawBuffer(GL_BACK_RIGHT);
// Do drawing of RIGHT frame here...

// Swap the left and right buffers (front <-> back for both eyes at once)
swapBuffers();


This is the so-called quad-buffering technique (front + back buffer for the left + right eye = 4 buffers).
This is supported on GTX cards...
Take into account that you need to create 2 cameras with 2 perspectives (one for each eye) when rendering the left and right eyes...
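
If it helps, here is a small sketch of that 2-camera part: the usual parallel-axis, asymmetric-frustum construction, written with legacy GL matrix calls to stay close to the snippet above. fovY, aspect, nearZ, farZ, eyeSeparation and convergence are parameters you pick for your scene; the helper name is just for illustration.

#include <math.h>

// eye = -1 for the left eye, +1 for the right eye. fovY is in radians.
void setEyeCamera(int eye, double fovY, double aspect,
                  double nearZ, double farZ,
                  double eyeSeparation, double convergence)
{
    double top   = nearZ * tan(fovY * 0.5);     // half-height of the near plane
    double halfW = aspect * top;                 // half-width of the near plane
    // Horizontal frustum shift so both eyes converge at the 'convergence' distance.
    double shift = 0.5 * eyeSeparation * nearZ / convergence;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfW - eye * shift, halfW - eye * shift, -top, top, nearZ, farZ);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Offset the camera sideways by half the eye separation; apply your normal
    // view/model transforms after this.
    glTranslated(-eye * 0.5 * eyeSeparation, 0.0, 0.0);
}

Call it with eye = -1 before drawing into GL_BACK_LEFT and eye = +1 before drawing into GL_BACK_RIGHT.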

You don't need to worry about the rest, as the driver will sync the screen with the glasses, etc...

It's just as easy as that.

If you want to do the frame-sequential switching yourself (for some weird reason), the 3D Vision driver doesn't support that on either GTX or Quadro cards...

Don't believe me? See this post: https://forums.geforce.com/default/topic/572432/-test-request-stereo-3d-opengl-application/

Get the app (written by me using the approach described) and see if it works on your GTX card + 3D Vision:)

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

#12
Posted 02/26/2015 11:59 PM   
Thank you so much helifax!

I will try your application when my 3D Vision laptop arrives.
I will send you a detailed report later.

I want to write my own OpenGL stereo application.
I have a couple of questions.

1. You mean we can use quad-buffering on GeForce GTX cards?
2. Quad-buffering was added to GTX cards?
3. For non-GTX GeForce cards, quad-buffering does not work, right?
4. If it does not work, we need to use the Opengl-3d-vision-bridge, right?
5. How about GTX cards for notebooks? Can I write my own OpenGL stereo application using the quad-buffering technique on a notebook equipped with a GTX card?

Thanks a lot!

#13
Posted 02/27/2015 06:57 AM   
[quote="enjoy3DVision"]Thank you so much helifax! I will try your application after my 3D vision laptop is available. I will send to you my detailed report later. I want to write my own OpenGL stereo application. I have a couple of questions. 1. You mean we can use the quad-buffering in GeForce GTX cards? 2. The quad-buffering is added to GTX card? 3. For non-GTX Geforce card, the quad-buffering does not work, right? 4. If it does not work, we need to use the Opengl-3d-vision-bridge, right? 5. How about the GTX card for notebook? Can I write my own OpenGL stereo application using quad-buffering technique in the notebook equipped with a GTX card? Thanks a lot! [/quote] Quad-buffering stereo was added like 2 years ago in the 330.xx branch. Sorry, can't remember the exact driver when it was introduced. It works for both desktop and laptop GPUS. (but not surround or Mozaic configurations as nvidia calls them). The quad-buffering method is not proprietary to nvidia. So for example if you have an AMD gpu and their drivers support OGL quadbuffering it will work there as well. What people are saying around here is that nvidia doesn't provide 3D Vision Automatic for OpenGL. 3D Vision AUtomatic is a set of hooks and other complicated stuff that the driver does automatically for DX apps in order to automatically stereorize apps that weren't created with stereo support in the first place. ------ So no, there isn't any 3D Vision Automatic for OGL. So yes, there isn't any 3D Vision Automatic for DX. ------ Yes, there is full 3D Vision stereo support for OGL in your OWN app. Yes, there is full 3D Vision stereo support for DX in your OWN app. ------ If you have an OGL app that is already "built" and you can't rebuild it you need to "mimic" the functionality of 3D Vision Automatic under OGL. The only way I was able to do this was to use the OGL-DX interoop functionality provided by Nvapi. Hope this clarifies a bit more the situation;))
enjoy3DVision said: Thank you so much helifax!

I will try your application when my 3D Vision laptop arrives.
I will send you a detailed report later.

I want to write my own OpenGL stereo application.
I have a couple of questions.

1. You mean we can use quad-buffering on GeForce GTX cards?
2. Quad-buffering was added to GTX cards?
3. For non-GTX GeForce cards, quad-buffering does not work, right?
4. If it does not work, we need to use the Opengl-3d-vision-bridge, right?
5. How about GTX cards for notebooks? Can I write my own OpenGL stereo application using the quad-buffering technique on a notebook equipped with a GTX card?

Thanks a lot!




Quad-buffered stereo was added about 2 years ago, in the 330.xx driver branch. Sorry, I can't remember the exact driver where it was introduced. It works for both desktop and laptop GPUs (but not Surround or Mosaic configurations, as Nvidia calls them).

The quad-buffering method is not proprietary to Nvidia. So, for example, if you have an AMD GPU and its drivers support OGL quad-buffering, it will work there as well.
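
If you want to check at runtime whether the driver/display combination actually exposes a quad-buffered format (rather than guessing from the product name), a small Windows/WGL sketch like the one below works. hasStereoPixelFormat() is my own helper name, not part of any Nvidia API.

#include <windows.h>

// Returns true if at least one OpenGL-capable pixel format with the PFD_STEREO
// flag is exposed for this device context (i.e. quad-buffered stereo is possible).
bool hasStereoPixelFormat(HDC hdc)
{
    int count = DescribePixelFormat(hdc, 1, 0, NULL);   // number of available formats
    for (int i = 1; i <= count; ++i)
    {
        PIXELFORMATDESCRIPTOR pfd = {};
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
        if ((pfd.dwFlags & PFD_STEREO) &&
            (pfd.dwFlags & PFD_SUPPORT_OPENGL) &&
            (pfd.dwFlags & PFD_DRAW_TO_WINDOW))
            return true;
    }
    return false;
}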

What people are saying around here is that Nvidia doesn't provide 3D Vision Automatic for OpenGL. 3D Vision Automatic is a set of hooks and other complicated stuff that the driver does automatically for DX apps in order to stereoize apps that weren't created with stereo support in the first place.

------
So no, there isn't any 3D Vision Automatic for OGL.
So yes, there is 3D Vision Automatic for DX.

------
Yes, there is full 3D Vision stereo support for OGL in your OWN app.
Yes, there is full 3D Vision stereo support for DX in your OWN app.

------
If you have an OGL app that is already "built" and you can't rebuild it, you need to "mimic" the functionality of 3D Vision Automatic under OGL. The only way I was able to do this was to use the OGL-DX interop functionality provided by Nvapi.
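
For anyone curious what that mimicking involves, here is a very rough sketch of the sharing handshake, written in terms of the WGL_NV_DX_interop extension entry points (which is the sharing mechanism such a bridge can be built on, alongside nvapi for the stereo side). d3d9Device, d3dRenderTarget and renderSceneWithOpenGL() are placeholders; error handling and the actual stereo presentation are left out.

// One-time setup: open the D3D9 device for interop and expose one of its render
// targets to OpenGL as a renderbuffer. The wglDX* entry points are assumed to have
// been loaded with wglGetProcAddress, and the FBO functions via your GL loader.
// (For D3D9 resources you may also need wglDXSetResourceShareHandleNV first.)
HANDLE interopDev = wglDXOpenDeviceNV(d3d9Device);

GLuint rbo = 0, fbo = 0;
glGenRenderbuffers(1, &rbo);
HANDLE sharedRT = wglDXRegisterObjectNV(interopDev, d3dRenderTarget,
                                        rbo, GL_RENDERBUFFER,
                                        WGL_ACCESS_READ_WRITE_NV);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);

// Per frame: OpenGL renders into the shared surface while it is locked for GL...
wglDXLockObjectsNV(interopDev, 1, &sharedRT);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
renderSceneWithOpenGL();
wglDXUnlockObjectsNV(interopDev, 1, &sharedRT);

// ...then the D3D side (which the 3D Vision driver does understand) copies it to
// the back buffer and Presents it, per eye, using the nvapi stereo calls.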

Hope this clarifies the situation a bit more ;))

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

#14
Posted 02/27/2015 09:40 AM   
Hi helifax,
Your program runs on my desktop (not 3D Vision) with a GeForce GTX Titan Black GPU.
I think this verifies what you said, that quad-buffering is supported on GeForce cards.

When I switched to stereo mode I did not notice (without shutter glasses) any change in the picture.
I think this is because my monitor does not support 120 Hz.

One more question: when I buy a 3D Vision-ready laptop, how do I know whether the GeForce GPU supports quad-buffering?

Thanks.

#15
Posted 02/27/2015 06:53 PM   