Please help me fix the 60FPS @ 120Hz issue once and for all!
[b]Guys, I have thought of a good way to demonstrate that you do not in fact need to view the same frame (from a different perspective) to be able to view 3D.
It's simple: Activate your 3D glasses and put them on. Walk around your room!
Can you see everything in perfect fluid 3D or not?
Guess what? No 2 "frames" you are seeing are the same, but everything is perfect, fluid 3D!
Furthermore, you are seeing it all at true 120fps ;-)[/b]
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Wouldn't it break the 3D effect (i.e. add anomalies)? It seems like this would be akin to something like 1080i, and interlacing adds a jaggy effect. Even if it doesn't add that, it seems like it could add some screwiness. Stereo 3D is the left eye seeing one perspective and the right eye a slightly different one. The mind then merges those two perspectives (from the same moment in time) into a single image. This is how reality works. Having your rendered stereo 3D show the left-eye and right-eye perspectives from different timestamps seems like it could create issues with the 3D.
EDIT: Well, you got us with that last post. Still, I do wonder if that method would create any interlacing-style artifacts.
I think the "issue" comes from where programs such as "fraps" hooks onto the DX stream and differences in what is actually happening.
In an overly simplistic view the game engine will make "draw calls" to the DX driver. Somewhere in that stream of commands is a call/command that only happens once per frame. This is what programs like "fraps" hook onto. They (fraps) basically count how many times per second the unique call is being made by the application program (game).
So on a 2D system without the limits of vsync, the application program is free to make new frames as quickly as its able to which if it is higher than the monitors vsync causes screen tearing. If we add vsync then the application program will wait to issue another 'draw command' or otherwise ask the DX driver to update the frame buffer. On 60Hz monitors you'll be stuck at 60FPS, 120Hz monitors 120FPS.
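To make that concrete, the counting part boils down to something like the sketch below. This is only an illustration of the idea, not Fraps' actual code; OnPresent() stands in for whatever once-per-frame call the overlay has hooked.
[code]
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical hook callback: assume the overlay has intercepted the API's
// once-per-frame "present" call, so this runs exactly once per displayed frame.
void OnPresent()
{
    using clock = std::chrono::steady_clock;
    static auto windowStart = clock::now();
    static int frames = 0;

    ++frames;
    if (clock::now() - windowStart >= std::chrono::seconds(1)) {
        std::printf("FPS: %d\n", frames);   // the number the overlay shows
        frames = 0;
        windowStart = clock::now();
    }
}

int main()
{
    // Simulate a vsync-capped 60Hz game loop: the counter can never report
    // much more than 60, no matter how fast the game could render.
    for (int i = 0; i < 300; ++i) {
        OnPresent();
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}
[/code]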
You are correct to assume that on a "S3D" system some "trickery" is involved.
It is my understanding that with 3DVision, Tridef, and even many/most applications which do their own 3D rendering, the scene is rendered ONCE and then the 3D driver moves the "camera" to the two viewpoints to create the S3D image we see (at least in most cases the "heavy lifting" is done once and the details for each view are filled in).
Since the GPU is not re-rendering the whole scene from scratch, the GPU load for S3D at 60 frames per eye (per second) will be less than the load for rendering 120 fully independent frames per second, but (depending on the S3D conversion method) much higher than a plain 60 frames per second for both eyes.
Because Fraps hooks an application call/command to DX that only marks the 'base frame' of the scene, the frame that the 3D driver then turns into two unique views (using some trickery), or because the application's internal S3D engine only issues that call/command ONCE per monoscopic scene view (before stereoscopic conversion), Fraps reports 1/2 the actual viewable unique frame rate.
While Fraps is somewhat S3D "aware", in that it can record S3D video when 3DVision is being used, it is still unable to measure the actual viewable unique frame rate, and it also fails when applications such as Tomb Raider use their own form of S3D conversion/rendering; in those cases it can only record one eye of the S3D stream.
I'm sure others will jump in and correct me on my (mis)understanding of the process.
I think the "issue" comes from where programs such as "fraps" hooks onto the DX stream and differences in what is actually happening.
In an overly simplistic view the game engine will make "draw calls" to the DX driver. Somewhere in that stream of commands is a call/command that only happens once per frame. This is what programs like "fraps" hook onto. They (fraps) basically count how many times per second the unique call is being made by the application program (game).
So on a 2D system without the limits of vsync, the application program is free to make new frames as quickly as its able to which if it is higher than the monitors vsync causes screen tearing. If we add vsync then the application program will wait to issue another 'draw command' or otherwise ask the DX driver to update the frame buffer. On 60Hz monitors you'll be stuck at 60FPS, 120Hz monitors 120FPS.
You are correct to assume that on a "S3D" system some "trickery" is involved.
It is my understanding that on 3DVision, Tridef and even with many/most applications which do their own 3D rendering, the scene is rendered ONCE and then the 3D driver moves the "camera" for the two viewpoints to create the S3D image we see. (at least in most cases the "heavy hitting" is done once and the details for each view is filled in)
Since the GPU is not having to re-render the whole scene entirely the GPU load for S3D @ 60 frames per eye (per second) will be less than the GPU load for 120 frames (per second) for both eyes but (depending on S3D conversion method) much higher than 60 frames per second for both eyes.
Since "fraps" is hooking onto an application call/command to DX that only marks the 'base frame' of the scene that the 3D driver will turn into two unique frames (using some trickery) or the application program internal S3D engine only makes the call/command that "fraps" hooks onto ONCE per monoscopic scene view (pre stereoscopic 3D conversion) "fraps" reports 1/2 the actual viewable unique frame rate.
While "fraps" is somewhat S3D "aware" in that it can record S3D video when 3DVision is being used it is still unable to measure the actual viewable unique frame rate and also fails when application programs such as "Tomb Raider" use their own unique form of S3D conversion/rendering and is only able to record one eye of the S3D stream.
I'm sure others will jump in and correct me on my (miss)understanding of the process.
[quote="D-Man11"]Well I know on a 60Hz Display if I enable 3D Discover, fraps shows 60FPS vs showing a drop to 30FPS.
I've always assumed frames were being discarded.[/quote]
That's a completely different technique, though. It's not full HD 3D; it's basically polarized 3D, which means half the pixels are seen through the red filter and the other half through the blue. The glasses separate the two perspectives. So, yes, it's a 1080p picture being rendered at 60fps, but the effective resolution is halved as a result: the left eye gets half of that picture and the right eye gets the other half. That's the whole point of active shutter vs polarization; shutters are an easy way to get full 1080p 3D. You'd need a 4K screen to get full 1080p 3D in stereo on a passive display.
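(The arithmetic behind that last point, assuming a row-interleaved passive panel: 1080 lines split between two eyes leaves 540 lines per eye, so you would need a 2160-line panel, i.e. 4K, to give each eye the full 1080 lines.)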
EDIT: I'm sure you realized all that; I'm just saying it's a completely different technique being used, so of course fraps wouldn't show its FPS as being halved. That's not how the technique works: LEFT and RIGHT are both being displayed at the same time.
Hi mbloof,
Thank you for the eloquent explanation.
What would you propose as an ideal method to measure the true unique frame rate in a game?
The point may be moot, as at this stage I believe everyone is quite convinced that rendered frames are indeed doubled to give 2 perspectives of the same view, as you describe.
From what I understand, the question now is whether S3D can be improved by using consecutively different rendered frames, which would increase the fluidity of the experience.
The experiment in my previous post indicates that it indeed would, assuming:
1. The game engine is capable of 120fps
2. The user has enough hardware power to take advantage of it.
So, the question which I made the thread about rears its head: is it possible to do this, or is the driver hard coded to not allow such a thing?
From your description, it seems that I am going to be disappointed with the answer :(
[quote="Paul33993"][quote="D-Man11"]Well I know on a 60Hz Display if I enable 3D Discover, fraps shows 60FPS vs showing a drop to 30FPS.
I've always assumed frames were being discarded.[/quote]
That's a completely different technique though. It's not full HD 3D. It's basically polarized 3D. Which means half the pixels are being seen with the red and the other half with blue. The glasses are separating the two perspectives. So, yes, it's a 1080p picture being rendered at 60fps. But the actual resolution is halved as a result. The left eye is getting half of that picture and the right eye is getting the other. That's the whole point of active shutters vs polarization. It's an easy way to get full 1080p 3D. You'd need a 4K screen to get full 1080P 3D in stereo on a passive display.
EDIT: I'm sure you realized all that, I'm just saying it's a completely different technique being used. So of course fraps wouldn't show its FPS as being halved. That's not how the technique works. LEFT/RIGHT is both being displayed at the same time.[/quote]
Huh what? It's not polarised 3D. It is a passive format, but it is neither polarized nor interlaced.
[quote="RAGEdemon"]Hi mbloof,
Thank you for the eloquent explanation.
What would you propose would be an ideal method to measure the true unique frame rate in a game?
The point may be moot as at this point, I believe everyone is quite convinced that rendered frames are indeed doubled to give 2 perspectives of the same view, as you describe.
From what I understand, the question now is if S3D can be improved by using consecutively different rendered frames, which would increase the fluidity of the experience.
The experiment in my previous post indicates that it indeed would, assuming:
1. The game engine is capable of 120fps
2. The user has enough power in their hardware to take advantage.
So, the question which I made the thread about rears its head: Is it possible to do this, or is the driver hard coded to not allow such a thing.
From your description, it seems that I am going to be disappointed with the answer :([/quote]
Actually, this question has (somewhat) already been asked and answered. On the broader "frame rate" question, a number of technology websites have dropped programs such as Fraps altogether because of their limitations and inability to measure the frame rate actually coming out of a GPU's digital video connectors, and have instead turned to specialized software plus hardware that looks at the signal on the connector itself in order to measure GPU frame rate performance properly.
I tend to think of it this way:
Back in the old days, a complex 2D/3D scene might have taken minutes/hours/days to calculate the wireframe geometry, composed of up to a gazillion triangles, which underlies what we see on our screen. However, once the entire wireframe was calculated, we could fairly quickly move the camera perspective (generally with the mouse) or rotate the 3D wireframe model. Adding texture to the wireframe and culling the portions of the resulting image which are not viewable took only a fraction of the time required to generate the wireframe itself.
When and how the S3D image is generated from the monoscopic base image (as described above) depends on the S3D engine in use and what (if any) "hooks" are available in the video driver itself. None of them would have to regenerate the baseline wireframe model itself, because for that moment in virtual time our actor and other screen elements have not moved; only the position of the camera and the resulting viewing angle have. The "heavy lifting" of generating the baseline wireframe model has already been done.
In today's world we see this happening at simply amazing speeds (compared to 198x-199x technology). The game engine freezes time, places our character (and his/her field of view) somewhere in a 3D model space, and then places walls and other objects into this space along with dynamic objects and actors (items that might be moving). Somewhere along the line texture is added to the wireframe, non-displayable portions are culled, and the resulting frame is copied to the video output buffer so we can enjoy it on our screen.
Another way of looking at it: I can attach a "3D lens" to my 2D camera. That lens is actually TWO lenses coupled together with other lenses and mirrors which split the 2D viewpoint of my DSLR into an L+R stereoscopic pair. My fastest DSLR can capture 7FPS. That 7FPS does not magically become 14FPS when I attach the "3D lens"; the camera still only records 7FPS, but with the "3D lens" attached I record two perspectives per frame.
The driver (in our case 3DVision) is "hard coded" to attempt to calculate two different viewpoint perspectives (stereoscopic) from a series of monoscopic calls/commands coming from the application/game engine.
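For anyone curious, the core of that "trickery" is nothing more exotic than sliding the one camera the game supplied sideways for each eye. A very stripped-down sketch follows (illustrative only, made-up names, not NVIDIA's actual code; real drivers also adjust the projection for convergence):
[code]
#include <cstdio>

struct Vec3 { float x, y, z; };

// Minimal stand-in for the game's single (mono) camera. A real camera also
// carries forward/up vectors and a projection matrix; omitted for brevity.
struct Camera { Vec3 position; Vec3 right; };

// Make an eye camera for the SAME frozen moment by sliding the mono camera
// along its "right" axis by half the eye separation.
Camera OffsetEye(const Camera& mono, float halfSeparation)
{
    Camera eye = mono;
    eye.position.x += mono.right.x * halfSeparation;
    eye.position.y += mono.right.y * halfSeparation;
    eye.position.z += mono.right.z * halfSeparation;
    return eye;
}

int main()
{
    Camera mono { {0.f, 1.7f, 0.f}, {1.f, 0.f, 0.f} };
    float separation = 0.065f;   // roughly 65 mm interocular, in world units

    Camera left  = OffsetEye(mono, -separation * 0.5f);
    Camera right = OffsetEye(mono, +separation * 0.5f);

    // Both eyes describe the same instant in game time; only the viewpoint differs.
    std::printf("left x = %.4f, right x = %.4f\n", left.position.x, right.position.x);
}
[/code]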
If it instead attempted to alternate the camera position on every other frame the game engine requested, the result (as others have mentioned) would look like crap and not be S3D.
Make sense?
Hi mbloof,
Unfortunately, no, it doesn't "Make sense" :)
You are assuming that you take a snapshot of one single moment in time and make 2 different perspective views from it.
I am saying that the entire scene should be generated for each eye from scratch.
i.e. a scene is generated and one image output with the perspective moved for the left eye (Origin + X); then the entire scene is dumped and an entirely new one generated, with the perspective moved for the other eye (Origin - X), and so on.
The resultant image will not be "crap", I can assure you. Please try the experiment detailed in my first post on this page yourself.
Your DSLR example is comparing apples to oranges.
A more appropriate analogy would be 2 DSLRs, both snapping at 7FPS but interleaved, so that one takes a picture exactly halfway between the 2 pictures that the other camera is taking.
The resultant set of 2x 7 images would indeed be "14FPS" if viewed in sequence.
I am also familiar with the abandonment of FRAPS on some review sites. IIRC, the change was more to do with measuring micro-stutter and frame timing, and not so much with FRAPS failing to show the correct FPS.
The measurement of FPS at this point is of no real concern as it has been confirmed that the FPS is indeed halved.
As I said in one of my previous posts, I think it would be helpful if we stop thinking of S3D as a set of side-by-side JPS images in sequence.
If we instead think of a video feed at 60fps to one eye (where your brain cannot pick individual frames apart), and another video feed at 60fps to the other eye: the frames used in these feeds do not need to be the same image (from a different perspective), because your brain cannot pick out individual frames. As far as the brain is concerned, you are receiving fluid motion in both eyes. Here, if the frames are "leap-frogging", you will be experiencing 120FPS, albeit one frame to one eye at a time.
As I said, why not try it? Put on your 3D glasses and toggle them on. Walk around your room. You will agree that it indeed does not look like "crap". You will be seeing progressing frame times for each eye where no 2 frames are the same, and all of it at 120fps; granted, it will be one frame per eye at a time instead of one frame to both eyes, as in the "actual" reality we are used to :)
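To pin down exactly what I am proposing, here it is in rough pseudo-code (purely hypothetical; the function names are made up, and this is the scheme I would like the driver to allow, not anything that exists today):
[code]
#include <cstdio>

// Hypothetical sketch of the "leap-frog" scheme: each eye gets a freshly
// simulated frame, so the pair shown to the two eyes comes from two
// different moments in time, 1/120 s apart.
void SimulateWorld(double t) { /* advance the game state to time t */ }
void RenderEye(int eye, double t) { std::printf("eye %d at t = %.4f s\n", eye, t); }

int main()
{
    const double dt = 1.0 / 120.0;   // one 120Hz refresh per eye view
    double t = 0.0;

    for (int frame = 0; frame < 8; ++frame) {
        int eye = frame % 2;         // 0 = left, 1 = right, alternating
        SimulateWorld(t);            // a full new scene, not a reused snapshot
        RenderEye(eye, t);
        t += dt;                     // each eye view is 1/120 s newer
    }
}
[/code]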
I'm sorry, but S3D IS two different perspectives of the same scene (this is how our eyes work).
If/when elements in the scene are only visible in one eye, OR have moved contrary to where the 3D placement and our brain 'think' they ought to be, you get doggie-do results.
Just look at what happens when developers tie 2D lighting to the camera position. It looks fine as a single 2D frame, but when the camera is moved to create an S3D image you might have reflections/shadows/lights in one eye which either don't align with the reflections/shadows/lights in the other, or WORSE YET, one eye will show no reflections/shadows/lights at all while the other eye does.
If the application/game were rendering each S3D viewpoint from 'scratch', it would still have to "freeze time" and render two different viewpoints of the same STATIC scene. This is because our brain judges the "depth" of an object (with no other visual clues) by the difference (or not) in the horizontal offset our L+R eyes see.
Take the following example:
1.R....................L
2..R..................L
3...R................L
4....R..............L
5.....R............L < Deeper into the screen
6......R..........L
7.......R........L
8........R......L
9.........R....L
10.........R..L
11..........RL < we can call this the point of convergence
12.........L..R
13........L....R
14.......L......R
15......L........R
16.....L..........R < Closer to us or "pop out"
...Left Eye.....Right Eye
If an object is not STATIC and has moved left/right or closer/further away between the L and R images, it would be rather confusing (headache-inducing) for our brain, if it rendered in S3D at all; an object moving in a straight line (left to right or right to left) would appear to be incorrectly moving towards or away from us in 3D space.
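To put a rough number on it: if an object sweeps across the screen at, say, 960 pixels per second and the two eye views are rendered 1/120 s apart, the newer eye sees the object 960 / 120 = 8 pixels further along than the other eye. The brain cannot tell that 8-pixel offset from genuine disparity, so purely sideways motion would pick up a phantom shift in depth.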
As others have mentioned, this simply does not work.
Hi mbloof,
What you are again saying is that it won't work, not so much for technical reasons but because the brain would not be able to process the images.
Have you tried my experiment in the post at the top of this page? It clearly demonstrates that it does work, and that unfortunately you are wrong. Have you tried moving an object across your field of view with the 3D glasses turned on? Did you get "headaches" perceiving it, or were you confused by it?
Others may have mentioned it, but majority opinion doesn't necessarily equal correct opinion. I have my experiment at the top of this page to back up my position, which I invite you to replicate. I also kindly invite you to show me evidence to the contrary :)
Your diagram looks rather impressive, but would you please clarify how it shows motion using "leap-frogging" versus "catch-up"? I just see a simple illustration of how 3D works, which I have seen and drawn myself numerous times over my last 15 years of passionate interest in S3D.
I welcome being proven wrong, but only by facts obtained from experimentation rather than conjecture. I am a believer in the proper scientific method ;-)
Warm regards.
[quote="mbloof"]It is my understanding that on 3DVision, Tridef and even with many/most applications which do their own 3D rendering, the scene is rendered ONCE and then the 3D driver moves the "camera" for the two viewpoints to create the S3D image we see. (at least in most cases the "heavy hitting" is done once and the details for each view is filled in)
[/quote]
Actually, this is not correct.
In stereo 3D you need to set up 2 perspective projections (one for each eye) and DRAW the same scene ONCE with each eye's projection matrix = the same scene rendered 2 times from different perspectives.
Since Fraps is counting the number of draw calls, it will say 120 calls per second, thus reporting 120fps, which is false since 60 draw calls are for the left eye and 60 for the right eye.
Of course, NOT everything is rendered two times; shadow maps, for example (according to nVidia; I personally haven't tried it yet).
Also, in deferred rendering some other considerations need to be taken into account in regards to the G-buffer.
But most of the time you draw the same frame twice.
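In rough pseudo-code, that per-frame loop looks something like the sketch below (a simplified illustration with placeholder functions, not any particular engine's or driver's actual code):
[code]
#include <cstdio>

struct Matrix4 { float m[16]; };

// Placeholder stand-ins for whatever engine/API is actually in use.
Matrix4 BuildEyeProjection(int eye)         { (void)eye; return Matrix4{}; }
void    BindRenderTarget(int eye)           { std::printf("bind eye %d\n", eye); }
void    DrawScene(const Matrix4& viewProj)  { (void)viewProj; std::printf("  draw whole scene\n"); }

// One simulated moment in time, drawn twice from two different projections.
void RenderStereoFrame()
{
    for (int eye = 0; eye < 2; ++eye) {     // 0 = left, 1 = right
        BindRenderTarget(eye);
        DrawScene(BuildEyeProjection(eye));
    }
    // At 120Hz output this gives 60 stereo pairs per second, but every draw
    // call above was issued twice, which is why a draw-call counter can read
    // "120" while the game only simulated 60 distinct moments.
}

int main()
{
    RenderStereoFrame();
}
[/code]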
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
Your experiment is to switch on my 3D glasses and look at my room? My 3D room, with my 3D-viewing eyes?
Were you expecting to suddenly see the world in 2D when you looked away from your monitor with those glasses on?
What you are comparing is a real 3D environment seen through what is essentially a veil over my eyes blinking on and off 120x a second, versus watching a 2D screen producing a stereoscopic image that relies purely on pulsing light and alternating frames with very precise timing.
The reason you think 24fps is "fluid motion" for film is that your brain fills in the gaps and gets used to the jitter. 24fps film jitters like crazy and it's horrible the first time you watch it, but eventually your brain compensates.
[quote="Foulplay99"]Your experiment as you put it, is to switch on my 3D glasses and look at my room? My 3D room with my 3D viewing eyes? This is a joke right?
This proves nothing when you compare the trickery employed on a 3D monitor to give you the illusion of a 3D image. Were you expecting to suddenly see the world in 2D when you looked away from your monitor with those glasses on?
What you are comparing is a 3D environment with essentially a veil over my eyes blinking on and off 120x a second, vs watching a 2D screen producing a stereoscopic image that relies purely on pulsing light and alternating frames with very precise timing.
The reason you think 24fps is "fluid motion" for film is because your brain fills in the gaps, and gets used to the jitter. 24fps in film jitters like crazy and its horrible the first time you watch it, but eventually your brain compensates.[/quote]
What he's saying does make sense, actually. Put your glasses on and wave your hand in front of your face.