Technical questions on theory of 3d image generation
This is an attempt to understand the generation of the separate eye-views and specifically the difference between separation and convergence.

This is what I think I've been able to figure out just from observing the 3d view while changing 3d settings:

Theory of Convergence and Separation controls:

So you have a 3d virtual world and two virtual cameras representing your eyes in that world. It looks to me like when you adjust convergence, the cameras move away from or toward each other, but here's the thing: the cameras don't rotate. They both keep their current direction of view, so as they get further apart, your eyes need to cross more to compensate. So increasing Convergence means separating the cameras more. It's a little counterintuitive since the cameras don't converge, but it makes sense because your eyes do converge.

Observing an increase in Separation, it looks like two things happen. The cameras both separate and rotate to keep the same focal point (convergence point, zero-parallax point, screen-depth location).
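The two mechanisms described above (cameras that only translate vs. cameras that also compensate to hold the focal point) can be sketched with a toy pinhole model. This is purely illustrative guesswork, not nvidia's actual code; all function names and numbers are made up:

```python
import math

def project_parallel(px, pz, cam_x, f=1.0):
    # camera at (cam_x, 0), optical axis straight down +z (no rotation)
    return f * (px - cam_x) / pz

def project_toed_in(px, pz, cam_x, conv_z, f=1.0):
    # camera at (cam_x, 0), rotated so its axis passes through (0, conv_z)
    n = math.hypot(cam_x, conv_z)
    fwd = (-cam_x / n, conv_z / n)          # unit forward vector
    right = (fwd[1], -fwd[0])               # unit right vector
    dx, dz = px - cam_x, pz
    return f * (dx * right[0] + dz * right[1]) / (dx * fwd[0] + dz * fwd[1])

def project_shifted(px, pz, cam_x, conv_z, f=1.0):
    # parallel camera plus a uniform horizontal image shift chosen so the
    # point (0, conv_z) lands at image center (slide-the-image approach)
    return project_parallel(px, pz, cam_x, f) + f * cam_x / conv_z
```

Both the toed-in and the shifted variants keep the convergence point at image center; the translate-only camera does not, which is the distinction the two paragraphs above are trying to pin down.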

Can someone who really knows more about this tell me if all that is really correct? It's all guesswork on my part.

By the way, iZ3D's 3d driver seems to have a different kind of convergence control. Say you have a 2-projector system and slide one projector sideways a little. That changes apparent convergence without moving the virtual cameras in the game's rendering system. That's what it looks like to me when I use iZ3D's convergence control. I think it would be good if nvidia included both kinds of convergence in its drivers. It could help with some 3d reticle problems when they are at a bad depth. This type of convergence control works pretty well for iZ3D, but I'm not really sure which kind of convergence is better, so why not have both.

That's it for now.

#1
Posted 05/04/2009 09:37 PM   
First, here is the standard guide for understanding stereoscopic 3D settings in games:

[url="http://www.mtbs3d.com/cgi-bin/newsletter.cgi?news_id=44/"]http://www.mtbs3d.com/cgi-bin/newsletter.cgi?news_id=44/[/url]

However, I think you are looking for an explanation of the actual interaction. Take a look at this diagram:

[img]http://www.mtbs3d.com/gallery/albums/userpics/10002/3dcapturemap.jpg[/img]

The separation is the distance between the two cameras - a very physical image capture. Convergence is an effect that is applied AFTER the image is captured. The cameras and their position are not adjusted, but the two complete images are offset or pushed closer together or apart to compensate for inside and outside screen effects.

You can try this with a simple stereoscopic photo. Take any side by side image, and adjust how close and far the images are from each other. Depending on how much separation you have to work with, you will be able to mix inside and outside screen effects.
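The "offset after capture" idea above fits a one-line formula: for parallel cameras a point at depth z has parallax f·s/z, and pushing the whole images apart by a fixed amount just subtracts from that, choosing which depth lands at the screen. A toy sketch (illustrative numbers, not driver internals):

```python
def parallax(z, sep, shift=0.0, f=1.0):
    # left-image x minus right-image x for a point at depth z, seen by two
    # parallel cameras 'sep' apart, after the finished images are pushed
    # apart by 'shift' (the post-capture convergence adjustment)
    return f * sep / z - shift
```

With shift = f·sep/z0, depth z0 becomes the zero-parallax (screen) depth; nearer points keep positive parallax (in front of the screen) and farther points go negative (behind it).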

Regards,
Chopper

#2
Posted 05/04/2009 11:10 PM   
Thanks for the reply, Chopper. I know you're one of the greats on these forums and it helps, but let me specify that I'm asking about nvidia's implementation of the general principles.

"The separation is the distance between the two cameras." --- agreed, but I still believe that nvidia's implementation of separation control also involves camera rotation. If nvidia's separation control were separation only, then the focal point would change. The virtual cameras must rotate inward as separation increases in order to keep the same focal point. Either that, or instead of rotating, the images need to slide sideways as separation increases in order to keep the same focal point. That's all I can think of.

"Convergence is an effect that is applied AFTER the image is captured. The cameras and their position are not adjusted, but the two complete images are offset or pushed closer together or apart to compensate for inside and outside screen effects." --- This describes the way I think iZ3D's driver works when you adjust convergence with it. But I still think that nvidia's implementation of convergence control moves the cameras without any rotation. Try this in a game with a 90-degree turn in a hallway: stand in the corner of the hallway and look down it in either direction. Move sideways and closer to the wall facing you so that if you increase separation a lot, one eye will look down the hallway and the other will be looking at the wall up close. Then try to find a position/setting where adjusting convergence causes one of your eyes to look around the corner. This would prove that the virtual cameras do move when you adjust convergence. This is what I have observed and this is why I think what I do about these things.

Basically, I'm telling you that you can use either separation or convergence controls to peek around a corner in a game with one eye. This tells me that the virtual cameras do move during the adjustment.
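The peek-around-the-corner argument comes down to ray geometry: only a real change in camera position changes which sight rays exist, while sliding a finished image can never reveal a point that no captured ray reached. A toy occlusion check (hypothetical scene, not driver code):

```python
def visible(xp, zp, cam_x, edge_x=0.0, edge_z=5.0):
    # a wall covers x <= edge_x at depth edge_z; the point (xp, zp) sits
    # behind it (zp > edge_z); trace the sight ray from the camera at
    # (cam_x, 0) and see which side of the wall's edge it crosses
    x_at_wall = cam_x + (xp - cam_x) * edge_z / zp
    return x_at_wall > edge_x
```

A point hidden from the centered camera becomes visible once the camera translates far enough sideways, whereas a horizontal shift of the already-rendered image leaves visibility unchanged by construction.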

I suppose I really need someone from nvidia to answer this. Maybe it's a secret. :o

#3
Posted 05/04/2009 11:50 PM   
On further contemplation of your image, I've found something confusing but I think I understand it correctly...

There are two things to talk about: the cameras (locations and orientations) and the rendered images.

As you increase the separation of the rendered images, left-eye view to the left and right-eye view to the right in this example, the view moves away from you. That much is clear, but note that this is different from the separation of the cameras. As you increase the separation of the cameras, your eyes must cross to compensate as you maintain focus on the same object. As a camera moves left, the scene it sees moves right. So sliding the right-eye image right causes eye-divergence, but sliding the right-eye camera right causes eye-convergence.
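That "camera moves left, scene moves right" claim is easy to sanity-check with a one-line pinhole projection (illustrative only):

```python
def image_x(px, pz, cam_x, f=1.0):
    # pinhole projection: camera at (cam_x, 0) looking straight down +z
    return f * (px - cam_x) / pz
```

Moving the camera to the left (more negative cam_x) pushes every scene point's image coordinate to the right (more positive), and vice versa.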

Clear?

This is why it's confusing: using this method, you increase the cameras' separation in order to increase the viewer's eyes' convergence. That's what I think the Convergence hotkeys do.

You would think that the Separation controls control ONLY the separation of the cameras but they don't. They control both the separation and also the rotation (convergence) of the cameras. That's what I think anyway.

Furthermore, you would think that the Convergence controls control convergence and they do, but not the convergence of the cameras, just the convergence of the viewer's eyes. They don't control the convergence of the cameras, just the separation of the cameras. Controlling the separation of the cameras controls the convergence of the viewer's eyes.

Clear as mud, eh?
Isn't this fun? :'(

By the way, I think iZ3D's separation controls also work differently than nvidia's because theirs changes focus as you change separation and nvidia's does not. So basically iZ3D does things the "standard way" that you describe and nvidia does things a little differently. I think I like nvidia's better even though iZ3D's may be better in some ways.

I rechecked this and I'm wrong. iZ3D's separation looks exactly like nvidia's separation and so they both keep focus constant as you adjust separation. However, I was not wrong about the difference in convergence control. It's obvious if you compare them.

#4
Posted 05/05/2009 12:09 AM   
Hi Iondrive,

The shared diagram for convergence and separation has nothing to do with the choice of driver solution - it is universal.

[img]http://tua.nickleigh.com/images/MrScott.jpg[/img]

You can't break the laws of physics! (Star Trek fans will get it! :thumbup:)

The camera position should never be changed or "toed in" (angled inward) to create a convergence effect. They are always straight forward and parallel.
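One standard reason for the "never toe in" rule, offered here as supporting context: rotated cameras give the two eyes slightly different vertical image coordinates for off-center points (keystone distortion), while parallel cameras never do. A toy check with made-up numbers:

```python
import math

def project3d(px, py, pz, cam_x, conv_z=None, f=1.0):
    # conv_z=None: parallel camera at (cam_x, 0, 0) looking down +z
    # conv_z=Z:    camera toed in so its axis passes through (0, 0, Z)
    dx, dy, dz = px - cam_x, py, pz
    if conv_z is None:
        cx, cy, cz = dx, dy, dz
    else:
        n = math.hypot(cam_x, conv_z)
        fwd = (-cam_x / n, conv_z / n)      # forward in the x-z plane
        right = (fwd[1], -fwd[0])
        cx = dx * right[0] + dz * right[1]
        cy = dy
        cz = dx * fwd[0] + dz * fwd[1]
    return f * cx / cz, f * cy / cz         # (horizontal, vertical) image coords
```

For an off-center point, the parallel pair agrees exactly on the vertical coordinate, while the toed-in pair does not, so toe-in introduces vertical parallax the viewer's eyes cannot fuse comfortably.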

On paper, this is how convergence SHOULD work for both solutions. If you were to take a stereo photograph, this is how convergence would be adjusted.

It's not within my expertise to give a definitive answer on the mechanics of what is happening in the drivers, but I think that in both cases, instead of the image's convergence being handled with complete whole images (like a photograph), it is handled on an object-by-object basis within the gaming image. Since there are varying levels of success here, this is why convergence does not hold up equally for all games and for all solutions. However, this is just an educated guess. It's a good question.

Thank you for your kind words. :">

Regards,
Chopper

#5
Posted 05/05/2009 09:37 AM   
Chopper is correct, but there has to be toe-in in order to create depth for humans.

"Toe-in" is actually done by your eyes, not the drivers and hardware.



While there are some monocular cues for depth perception, the biggest impact on human depth perception comes from our stereo vision. Convergence in the human eye is from rotation. Literally, there is "toe-in" convergence at work with nvidia 3d vision; it's just done by your eyes.
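The "toe-in done by your eyes" point can be made quantitative with the usual similar-triangles relation between on-screen parallax and the depth at which the viewer's eyes converge (the eye separation and viewing distance below are example assumptions, not measurements):

```python
def perceived_depth(parallax, eye_sep=0.065, viewing_dist=0.7):
    # depth at which the viewer's eyes converge for a given on-screen
    # parallax (right-eye x minus left-eye x, in meters); similar triangles.
    # blows up as parallax approaches eye_sep (eyes would go parallel)
    return viewing_dist * eye_sep / (eye_sep - parallax)
```

Zero parallax puts the object at the screen plane, positive (uncrossed) parallax behind it, and negative (crossed) parallax in front of it.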

#6
Posted 05/05/2009 02:12 PM   
Proof that nvidia's camera moves during convergence control adjustments:
OK, there's a lot I'd like to say now, but let's take it slow. Easiest thing first. That's proving that the cameras move when you adjust convergence. Forget theory for a sec and do a test. Start up a game, turn on stereo 3d, stand in front of an obstacle, max out separation (optional but it helps to magnify the effect), hold down a convergence hotkey and close one eye. You will see that the content of the image is changing. It's not just sliding left or right because you will be able to peek around the obstacle eventually. That proves that the camera moved. Can you at least agree with me on that? That's with nvidia's driver.

I just tried both nvidia's and iZ3D's driver and iZ3D's doesn't work that way. With iZ3D, you can't peek around corners using convergence controls no matter how long you hold it down, so that tells me that they're just sliding the rendered eye-views in opposite directions and not moving the cameras.


OK forget what that just proved. Now I have a question for you Chopper to see if I understand you right. So if you had the job of writing a stereo-3d rendering program, would you:
1) have separation controls that only change the separation between the cameras and do nothing else?
and 2) have convergence controls that only slide each rendered eye-view in opposite directions so that the cameras never rotate?

There's nothing wrong with that implementation, but someone else could write a stereo-3d rendering program that does rotate the cameras and they could still call that a convergence control because it's not against the law. I suppose you could say it was mislabeled if you believe convergence must be controlled in only one way.

I have more to say but I think I should wait for a response before I do.

Isn't this fun?
If it's not fun then don't do it.

oops, one more possibility. I'm talking about the old nvidia drivers: 30.87 through 162.50. They all work the same as far as sep/conv is concerned. It's possible they've changed their implementation but I really doubt it. Still, it's better if someone verifies that you can peek around corners using convergence controls under vista/3d-vision.

#7
Posted 05/05/2009 04:04 PM   
[quote name='iondrive' post='537777' date='May 5 2009, 12:04 PM']Proof that nvidia's camera moves during convergence control adjustments:
OK, there's a lot I'd like to say now, but let's take it slow. Easiest thing first. That's proving that the cameras move when you adjust convergence. Forget theory for a sec and do a test. Start up a game, turn on stereo 3d, stand in front of an obstacle, max out separation (optional but it helps to magnify the effect), hold down a convergence hotkey and close one eye. You will see that the content of the image is changing. It's not just sliding left or right because you will be able to peek around the obstacle eventually. That proves that the camera moved. Can you at least agree with me on that? That's with nvidia's driver.

I just tried both nvidia's and iZ3D's driver and iZ3D's doesn't work that way. With iZ3D, you can't peek around corners using convergence controls no matter how long you hold it down, so that tells me that they're just sliding the rendered eye-views in opposite directions and not moving the cameras.


OK forget what that just proved. Now I have a question for you Chopper to see if I understand you right. So if you had the job of writing a stereo-3d rendering program, would you:
1) have separation controls that only change the separation between the cameras and do nothing else?
and 2) have convergence controls that only slide each rendered eye-view in opposite directions so that the cameras never rotate?

There's nothing wrong with that implementation, but someone else could write a stereo-3d rendering program that does rotate the cameras and they could still call that a convergence control because it's not against the law. I suppose you could say it was mislabeled if you believe convergence must be controlled in only one way.

I have more to say but I think I should wait for a response before I do.

Isn't this fun?
If it's not fun then don't do it.

oops, one more possibility. I'm talking about the old nvidia drivers: 30.87 through 162.50. They all work the same as far as sep/conv is concerned. It's possible they've changed their implementation but I really doubt it. Still, it's better if someone verifies that you can peek around corners using convergence controls under vista/3d-vision.[/quote]


There are actually 2 controls for how nvidia displays the stereo 3d view.

One is <----|----> where it controls the depth of the image.
The other is forward and backward where it controls the relationship to the center point.


IMO, what you are seeing is the <-> viewpoint growing to <-------------------------->, which would be the same as if you stood at a corner irl and your head suddenly grew to be 5 feet wide; you'd be able to see around the corner too.

It could be that as you move the images <-----------------> and forward, it also turns the camera angle in toward a perceived center point; I can't really be 100% certain.

#8
Posted 05/05/2009 06:43 PM   
I know there are two main controls labelled Convergence and Separation. I wanted to go slowly so I was only talking about one control at a time.
Regarding using controls to peek around objects, my point is that you can use either separation or convergence controls to do it, and that proves that the cameras do move laterally using either control. Both controls affect the separation distance between the cameras, except that the separation control is more like a multiplier. Convergence control can't do it if Separation is zero.

Let's clarify what I'm asking...
I'm not asking what Separation is, I'm asking what nvidia's separation controls do. They do more than adjust the separation of the cameras, and I can prove it with a thought experiment compared to what really happens when you use the controls. That's for later. For now I think we all agree that it's the distance between the cameras, although I think someone else could define it differently if they wanted to for their own program.

Furthermore I'm not asking what Convergence is, I'm asking what nvidia's convergence controls do. I think it's very clear from the test I mentioned in my previous post that the virtual cameras do move laterally when you adjust convergence. So the remaining questions concerning the convergence controls are:
Exactly what else does the convergence control do besides move the cameras laterally?
Does it rotate the cameras or does it slide the rendered image left/right or does it do something else or nothing else?
How can one prove that it rotates the cameras, OR how can one prove that it doesn't rotate the cameras? We need a good test that I haven't thought of yet. Please post if you're up to the challenge.

The remaining questions concerning the separation controls are similar. The control moves the cameras apart or together, but what else does it do in addition to that?

Proof that nvidia's separation controls do more than separate the cameras:
We can all agree that if a camera moves to the right, the things it sees all move to the left in its field of view. This is just like if you strafe right in a game, your targets (if still) and all scenery move to the left across your field of view. So far, so good. I expect full agreement at this point. Now, in a stereo-3d rendering system there's a left and right camera. If you move the cameras apart, the right camera moves to the right. This means that for still scenery, like a room with a floor covered with traffic cones, ALL things apparently move to the left. That's what you would see if you closed only your left eye and got video from the right camera sent to your right eye. But that's not what happens with nvidia's "increase separation" hotkey if you set convergence to the center of the room. If you did that and closed your left eye and used the "increase separation" hotkey, cones behind the center apparently move right and cones in front of the center apparently move left. You can see this in any 3d game by converging on a crate or something and then playing with the separation controls. Scenery in front and back of the thing move in opposite directions. That's the proof. If lateral camera movement were the only thing that happened, it wouldn't look like that.
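The traffic-cone observation above can be reproduced in numbers. If separation were a pure lateral camera move, every cone would slide the same direction in the right eye; the sign flip at the convergence depth only appears if the zero-parallax depth is held fixed at the same time. A toy model (not nvidia's code; all numbers illustrative):

```python
def right_eye_x(z, sep, conv_z, f=1.0):
    # right-eye image x of a point on the center line at depth z, if the
    # zero-parallax depth is held at conv_z while separation changes
    # (lateral camera move plus a compensating image shift)
    return f * (0.0 - sep / 2) / z + f * (sep / 2) / conv_z

def right_eye_x_pure(z, sep, f=1.0):
    # same point, pure lateral camera move with no compensation
    return f * (0.0 - sep / 2) / z
```

With the compensated model, increasing separation moves a cone nearer than conv_z left and a cone farther than conv_z right, exactly the split described above; with the pure move, everything goes left.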

I have more to say but I really have to save it for later.
I know there are two main controls labelled Convergence and Separation. I wanted to go slowly so I was only talking about one control at a time.

Regarding using controls to peek around objects, my point is that you can use either separation or convergence controls to do it and that prooves that the cameras do move laterally using either control. Both controls affect the separation distance between the cameras except that the separation control is more like a multiplier. Convergence control can't do it if Separation is zero.



Let's clarify what I'm asking...

I'm not asking what Separation is, I'm asking what nvidia's separation controls do. They do more than adjust the separation of the cameras and I can proove it with a thought experiment compared to what really happens when you use the controls. That's for later. For now I think we all agree that it's the distance between the cameras although I think someone else could define it differently if they wanted to for their own program.



Furthermore, I'm not asking what Convergence is; I'm asking what nvidia's convergence controls do. I think it's very clear from the test I mentioned in my previous post that the virtual cameras do move laterally when you adjust convergence. So the remaining questions concerning the convergence controls are:

Exactly what else does the convergence control do besides move the cameras laterally?

Does it rotate the cameras or does it slide the rendered image left/right or does it do something else or nothing else?

How can one prove that it rotates the cameras, OR how can one prove that it doesn't? We need a good test that I haven't thought of yet. Please post if you're up to the challenge.



The remaining questions concerning the separation controls are similar. They move the cameras apart or together, but what else do they do in addition to that?



Proof that nvidia's separation controls do more than separate the cameras:

We can all agree that if a camera moves to the right, the things it sees all move to the left in its field of view. This is just like strafing right in a game: your targets (if still) and all scenery move left across your field of view. So far, so good; I expect full agreement at this point. Now, in a stereo-3d rendering system there's a left and a right camera. If you move the cameras apart, the right camera moves to the right, so for still scenery, like a room with a floor covered with traffic cones, ALL things should apparently move to the left. That's what you would see if you closed only your left eye and got the right camera's video sent to your right eye. But that's not what happens with nvidia's "increase separation" hotkey if you set convergence to the center of the room. Do that, close your left eye, and hold the "increase separation" hotkey: cones behind the center apparently move right and cones in front of the center apparently move left. You can see this in any 3d game by converging on a crate or something and then playing with the separation controls. Scenery in front of and behind the thing moves in opposite directions. That's the proof: if lateral camera movement were the only thing happening, it wouldn't look like that.
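That observation can be checked with a toy model. The sketch below is my own assumption about the mechanism (parallel pinhole cameras plus an image shift that pins the convergence depth to zero parallax), not NVIDIA's actual code, but it reproduces the cone behavior exactly:

```python
# Toy model (an assumption, not NVIDIA's implementation): a pinhole camera at
# lateral offset half_sep looking straight down +z, with an image shift chosen
# so that points at depth CONV land at zero parallax.
CONV = 10.0  # assumed convergence depth: "the center of the room"

def right_eye_screen_x(point_x, point_z, half_sep):
    """Horizontal screen position of a point as seen by the right camera."""
    projected = (point_x - half_sep) / point_z  # parallel-axis pinhole projection
    shift = half_sep / CONV                     # shift so depth CONV has zero parallax
    return projected + shift

# A cone in front of the convergence depth (z=5) and one behind it (z=20),
# before and after the "increase separation" hotkey widens half_sep:
front_moves = right_eye_screen_x(0.0, 5.0, 0.2) - right_eye_screen_x(0.0, 5.0, 0.1)
back_moves  = right_eye_screen_x(0.0, 20.0, 0.2) - right_eye_screen_x(0.0, 20.0, 0.1)

print(front_moves)  # negative: the near cone moves left in the right eye's view
print(back_moves)   # positive: the far cone moves right in the right eye's view
```

With pure lateral camera movement (drop the `shift` term) both cones would move left; only the extra image shift makes near and far scenery move in opposite directions.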



I have more to say but I really have to save it for later.

#9
Posted 05/05/2009 10:59 PM   
OK, well this is getting tedious, but the more I think about it, the more I'm convinced I'm right. Here's some more evidence (not really proof). We all know what it looks like to strafe sideways in a game, so try this: start up a 3d game, strafe sideways, and memorize what that looks like. Then go back to where you started, close one eye with s3d on, hold down one of your convergence controls, and see what that looks like. It looks like strafing, just as if your camera were moving sideways. Now adjust separation. Does that look like strafing? No, it looks like circle-strafing. Keep doing this until you agree with me. :haha: Once again, which control looks more like strafing, Convergence or Separation? The one that LOOKS more like strafing looks that way because it IS more like strafing, and that's the Convergence control. So basically I'm telling you that nvidia chose to label their controls differently. The Convergence control changes the separation between the cameras without rotating them, and that's the conventional definition of separation. The Separation controls do change the separation between the cameras, but they at least "appear" to toe in/out as you adjust them. They could not keep the same focus otherwise; something has to happen in order to keep the same focus. Just like circle-strafing, you have to turn in order to stay on the same target if that target is stationary.

OK, I think I'm beating a dead horse now so if anyone else has any technical questions about 3d image generation, feel free to post them here too and get us going in a new direction.

#10
Posted 05/06/2009 11:02 AM   
GOT IT!!!

Let's start with a question. Everyone who understands the above diagram should get this right:

Does increasing the separation between the cameras cause the viewer's eyes to converge?

The right camera goes right, so the scenery it sees goes left across its field of view. Therefore the viewer's right eye sees the scenery move left, so his right eye turns left to follow it. Similarly, the left camera moves left, causing the viewer's left eye to turn right. Right eye turning left and left eye turning right means they converge. So yes, increasing camera separation does cause the viewer's eyes to converge. This is what happens with nvidia's convergence hotkey function: it brings the entire scene forward since the viewer's eyes converge.
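The same reasoning in numbers, as a sketch under my own assumptions (parallel pinhole cameras with focal length 1, a scene point on the center line; the depths are made up): pure camera separation gives every depth a negative disparity, meaning the right eye's image of a point sits left of the left eye's image, which forces the eyes to converge, and more so as separation grows.

```python
# Sketch (assumed parallel pinhole cameras, focal length 1): with pure camera
# separation and no image shift, every scene point gets a negative disparity,
# i.e. the right eye's image is left of the left eye's image.
def disparity(point_z, sep):
    x_right = (0.0 - sep / 2) / point_z  # right camera sits at +sep/2
    x_left  = (0.0 + sep / 2) / point_z  # left camera sits at -sep/2
    return x_right - x_left              # works out to -sep / point_z

for z in (2.0, 10.0, 50.0):
    print(z, disparity(z, sep=0.5))  # negative at every depth; doubling sep doubles it
```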

The subsequent question you should ask is this: but if the convergence controls separate the cameras, doesn't that change the total depth, since increasing the separation changes the amount of perceived total depth, not just the near/far location of the scene? Well, the answer is yes, it does. I just tried it and observed it. Pushing the scene away with the decrease-convergence hotkey does push it away but also flattens it out. You can observe this yourself. Keep pushing the scene away and the cameras get closer to each other until they are both in the same place. Then you have two separate views with identical content and so no depth inside the scene. If you continue pushing the scene away, the cameras will swap locations. Do that a bit and turn the glasses around so the right lens covers the left eye, and you will see a "normal" 3d image. This proves the cameras have swapped locations. Put it back to normal when you're done. This satisfies me about what the convergence controls do.
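A minimal sketch of that idea, under my assumption that the convergence hotkey simply drives the camera separation (not confirmed to be the driver's actual mechanism): shrinking the separation toward zero flattens the scene, and past zero the disparity flips sign, matching the reversed-glasses observation.

```python
# Assumed model of the decrease-convergence hotkey as shrinking camera
# separation: disparity fades to zero, then flips sign past zero.
def disparity(point_z, sep):
    # right-eye minus left-eye image position for a centered point,
    # parallel pinhole cameras at +/- sep/2 with focal length 1
    return -sep / point_z

print(disparity(10.0,  0.4))  # -0.04: normal depth
print(disparity(10.0,  0.0))  #  0.0 : cameras coincide, the scene goes flat
print(disparity(10.0, -0.4))  #  0.04: sign flipped, i.e. left/right views swapped
```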

On to Separation:
I tried something regarding the separation controls and have come to the conclusion that the cameras do not ever rotate. When you use the separation controls, two things happen: a change in camera separation and a counteracting sideways translation of the rendered image (convergence). In other words, the separation controls apply both separation and a counteracting convergence (convergence in the standard image-shift sense). This is how the separation controls are able to maintain focus on the current zero-parallax plane. As shown above, separating the cameras by itself causes a change in the zero-parallax plane (a change in the viewer's eye convergence across the entire scene). Something is needed to counteract that, and that something is not camera rotation but a classic convergence process instead. I will describe the experiment if someone asks.
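Here is a sketch of that combined mechanism under my assumed model (each eye's image translated sideways by cam_x / conv; not the driver's actual math): the zero-parallax plane stays put no matter the separation, while the disparity of everything else scales with it.

```python
# Assumed model: each eye's image is translated sideways by cam_x / conv, so
# whatever sits at depth conv keeps zero parallax regardless of separation.
def disparity_with_shift(point_z, sep, conv):
    x_right = (-sep / 2) / point_z + (sep / 2) / conv
    x_left  = ( sep / 2) / point_z - (sep / 2) / conv
    return x_right - x_left  # works out to sep * (1/conv - 1/point_z)

print(disparity_with_shift(10.0, 0.2, conv=10.0))  # 0.0: on the zero-parallax plane
print(disparity_with_shift(10.0, 0.6, conv=10.0))  # still 0.0 after tripling sep
print(disparity_with_shift(40.0, 0.6, conv=10.0))  # positive: behind the plane
print(disparity_with_shift( 4.0, 0.6, conv=10.0))  # negative: in front of the plane
```

Note there is no rotation anywhere in this model; a pure translation of each rendered image is enough to hold the zero-parallax plane fixed.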

Why things get wider:
By the way, have you ever wondered why things in the scene get a little wider when you increase separation? That comes from the projection done after the cameras are separated. I'm not prepared to explain it fully right now, but many people will understand it once they've been told: it's a result of the new geometry between the object, the projection plane, and the new camera position. The camera slid sideways, so the object "casts a wider shadow" across the projection plane.

HURRAY. It's good to be at the end of something and have gotten some understanding of what's going on in detail.
Somebody give me a high-five.

Later all.

By the way, I think it would be interesting to see the results of a 3d-rendering system that does do camera rotation instead of classic convergence/image-translation. I wonder if it could be better in some ways. It seems to make sense that the cameras should mimic a real person's eye movements.

#11
Posted 05/07/2009 08:55 AM   
Hands over a high five with a huge smile! :D


Mb: Asus P5W DH Deluxe

Cpu: C2D E6600

Gb: Nvidia 7900GT + 8800GTX

3D:100" passive projector polarized setup + 22" IZ3D

Stereodrivers: Iz3d & Tridef ignition and nvidia old school.

#12
Posted 05/07/2009 11:51 AM   
Thanks Likay,
I thought I was going to have to high-five myself eventually. I was fully prepared to do that after waiting for like two weeks or something because sometimes you just have to do for yourself things that you can't get someone else to do for you.

Subject: Convergence and separation controls counteracting each other with regards to total depth (distance between cameras):
You know what? Since figuring this out, I now see that I should have considered something else I've noticed at times but didn't really think about. Have you ever noticed that after you adjust separation to get good depth, when you then use the convergence controls to push the scene back, you lose the good depth you just set? I thought this was a function of a more distant scene, but now I see that the loss of depth is because the convergence controls moved the cameras closer together, and that's what caused the loss of depth. I've often alternated between increasing sep, then decreasing conv, and then needing to increase sep again. Sometimes things go too far and you need to start over. This can happen in the opposite direction too, where increasing conv creates too much depth, and so on. It's annoying, but at least now I understand what's going on when that happens.

I forgive them for their labels/functions. Why it's understandable:
I don't really blame nvidia for labelling/designing these things this way, but I would have liked it if they had told us what was happening during the rendering process. I don't blame them because the convergence control does control the convergence of the viewer's eyes, even though it doesn't use the conventional method. And the separation controls do control separation in a sense, but the sense is different: they control the separation of the vanishing points for each eye. What I mean is that if you have two cameras centered on a star and move them sideways away from each other without turning them, the star remains in the center of each camera's field of view, since the star is effectively at infinity. And if you could draw parallel lines around you in the direction of that star, they would appear to converge at that star, at what's called a vanishing point. So moving the cameras sideways changes each camera's perspective, but the vanishing point stays in the center of each camera's field of view. That's what normal separation would do, but nvidia's separation controls control the separation of the vanishing points if you overlay each camera's video on top of the other. This is why there is the registry setting for MonitorSize. In one sense, things at infinity on your display should be 2.5 inches apart (the distance between your eyes), so if your screen is 5 inches wide, the vanishing-point separation should be 50% of the screen's width. If your screen is 25 inches wide, the vanishing-point separation should be one-tenth the width of the screen. So the video is different based on screen size. This is why it can be tricky to make 3d videos: you don't know the size of the viewer's screen. I think 3d videos can still be good, though, even if they're not optimal.
But from practical experience I've come to believe that this issue is not that important, and I often play a game in 3d with a vanishing-point separation of more than 2.5 inches. I think the brain gets 3d info more from the content of the eyes' video than from the convergence of the eyes. It's more of an image-analysis function than an eye-direction-analysis function.
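The screen-size arithmetic above is just a ratio; here it is as a two-line sketch, using the 2.5-inch eye spacing assumed in the discussion:

```python
# The on-screen separation of things at infinity should equal the viewer's
# eye spacing, so as a fraction of screen width it shrinks as screens grow.
EYE_SPACING_IN = 2.5  # inches, the figure assumed above

def infinity_separation_fraction(screen_width_in):
    return EYE_SPACING_IN / screen_width_in

print(infinity_separation_fraction(5.0))   # 0.5: half of a 5-inch screen
print(infinity_separation_fraction(25.0))  # 0.1: a tenth of a 25-inch screen
```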

Summarizing:
nvidia's convergence controls adjust the separation between the cameras, and that results in controlling the depth of the scene and the convergence of the viewer's eyes.
nvidia's separation controls apply both a change in the separation between the cameras and a counteracting image shift (conventional convergence) that keeps the focal plane (zero-parallax point) constant. This results in controlling the total depth of field as well as the distance between the vanishing points at infinity for each eye.

iZ3D's separation control looks the same as nvidia's, but their convergence control is the conventional image shift.

whew, good-bye.

#13
Posted 05/10/2009 06:50 PM   