Hi,
I've just been googling this, but couldn't find a conclusive answer, so thought I would ask here.
So, 3D Vision roughly halves the FPS, as it has to render a separate view for each eye and the GPU therefore has to work twice as hard. What happens, though, if the game is CPU limited? Does the FPS-halving rule still apply, or would I get the same FPS as I would in 2D?
Assuming I was CPU limited in both 2D and 3D, and I was getting 40 FPS in 2D, would I now get roughly 20 in 3D? Or would it still be 40?
I realize it probably isn't quite as simple as that, as running in 3D probably has some CPU overhead anyway, but is that on top of the 'half the FPS' rule?
Ta
That's an interesting question.
Generally, 3D has between a 35% and 50% performance impact, but I have seen it much worse.
For gaming, CPUs haven't really increased in performance at all over the past 5 years, whereas GPUs have roughly doubled in performance every few years.
Nowadays we find ourselves in a peculiar situation where a lot of games have gone from GPU bound to CPU bound, especially ones which are not optimised for multi-threading.
I find that in newer games such as Assassin's Creed 4 and CoD: Advanced Warfare, the CPU limit is the real performance killer. As soon as 3D is enabled, the FPS plummets to below 50% of the 2D figure.
To answer your question, I get the impression that it is heavily game dependent. In some games it won't matter at all; in others it will be the main limiting factor.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Thanks for the input.
I'm leaning towards the 'half FPS' rule not applying in CPU-limited games. It seems to make sense to me that the CPU only has to process one frame, which is then rendered twice by the GPU (once per eye, from a slightly different perspective). So, whilst the GPU is working roughly twice as hard, everything else (Fraps included) only has to keep pace with the Fraps-reported FPS. So a CPU-limited game at 40 FPS in 2D would still be 40 FPS in 3D mode (assuming it's still CPU limited, minus a bit for 3D Vision's CPU overhead), as opposed to 20.
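To put my guess another way, here's a tiny Python sketch of the mental model I have in my head - the timings are completely made up and the pipeline is hugely simplified, it's just to illustrate the idea:
[code]
# Toy model: the CPU simulates each frame once; in 3D the GPU renders it twice.
# The timings below are invented purely for illustration.

CPU_UPDATE_TIME = 1 / 40    # pretend the CPU can only manage 40 game updates/sec
GPU_RENDER_TIME = 1 / 200   # pretend rendering one eye's view costs 5 ms of GPU time

def frame_time(stereo):
    cpu = CPU_UPDATE_TIME                         # game logic runs once either way
    gpu = GPU_RENDER_TIME * (2 if stereo else 1)  # GPU work roughly doubles in stereo
    return max(cpu, gpu)                          # the slower side limits the frame rate

for stereo in (False, True):
    print("3D" if stereo else "2D", round(1 / frame_time(stereo)), "FPS")
# Both cases print ~40 FPS, because the CPU is the bottleneck in both;
# only once the GPU cost exceeds the CPU cost would 3D start to halve the FPS.
[/code]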
I'm only guessing though; if someone knows better, please let me know!
As far as I know, 3D Vision still uses v-sync (non-adaptive). Or has something changed?
I haven't updated my NVidia drivers in many months.
Assuming the game is still CPU limited after 3D Vision is enabled (which may be an incorrect assumption), the frame rate with 3D Vision enabled should be approximately the same as (or maybe a little lower than) running non-adaptive v-sync without 3D Vision. As long as "Enable stereoscopic 3D" is ticked in the NVidia control panel, v-sync stays enabled even if you toggle 3D off within a game.
But it could depend on the engine too.
Some engines limit the frame rate or respond in different ways when v-sync is enabled.
If you have not installed 3D Vision, then try setting v-sync to on (non-adaptive) in the NVidia control panel and check the frame rate. I would test it myself, but I don't want to uninstall 3D Vision and then have to re-install it later.
Well, we seem to be dealing with 2 other issues here :)
1. When going 3D, the GPU doesn't render an entirely new game frame for the second eye - it renders the same frame again from a slightly different perspective, which means the performance hit is usually lower than the 50% one would naively expect.
I did some calculations a while back, and this is what I found (the formula is also sketched as a quick script at the end of this post):
[quote="RAGEdemon"]...from your results and the links that eqzitara has provided, it would seem that there is quite a variance from one game to the next.
If we take the performance hit from game to game including your results (not counting games which have obvious fps limits in or out of s3D), we have:
[(2Dfps-3Dfps)/2Dfps]*100 = % drop in performance due to S3D
GTX 580:
Batman Arkham Asylum: 53%
GTX 780 Ti:
Arma 3: 24%
BF4: 50%
CoD Ghosts: 42%
Shadow Warrior: 47%
WRC 4 FIA: 52%
Splinter Cell Blacklist: 50%
Total War Rome 2: 39%
GTX 690 SLI:
Batman Arkham City: 26%
Skyrim: 27%
Witcher 2: 54%+
From the results, it seems that modern games vary a great deal when using the 3D Vision driver. What is interesting is that going SLI seems to significantly reduce the performance hit.
It is a curious finding. Perhaps one should look into going SLI rather than getting a better single card, if the price is comparable.
[/quote]
2. As above, the 3D Vision driver actually only delivers 60 unique frames per second, even on 120Hz displays with 120 FPS-capable engines. It simply displays the same game frame twice, once from each eye's perspective.
It is certainly possible to display 120 FPS with every frame being different; I discuss this exact principle in another thread, where I was also able to provide a proof of concept. This would likely need 2x the performance and probably carry a higher CPU overhead, as the CPU really would be processing twice as many frames.
Unfortunately, the 3D Vision driver just doesn't work that way. Perhaps the next generation of 3D hardware/software, i.e. the Rift, might.
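As mentioned above, here is the drop formula as a quick Python snippet, so anyone can plug in their own 2D/3D Fraps readings (the numbers in the example are placeholders, not measurements):
[code]
# % drop in performance due to S3D = ((2D FPS - 3D FPS) / 2D FPS) * 100
def s3d_drop(fps_2d, fps_3d):
    return (fps_2d - fps_3d) / fps_2d * 100

# Placeholder readings - substitute your own 2D and 3D FPS measurements:
print(round(s3d_drop(100, 50)), "% drop")  # 50% drop: the classic GPU-bound halving
print(round(s3d_drop(40, 38)), "% drop")   # 5% drop: what a CPU-bound game might show
[/code]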
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
I can confirm what you guys have been saying.
I was basically playing SWTOR in 3D (which looks amazing) but getting FPS in the 30s and 40s in crowded areas, which I assumed was CPU limited, and I was wondering whether I would get 60 if I turned 3D off. Being an MMO, this was hard to test properly, but I managed it yesterday, and the frame rate was the same in 2D and 3D in the same crowded location. I can also confirm it was CPU limited in both cases, as the FPS counter built into SWTOR changes colour depending on whether it's GPU or CPU bound.
I have recently installed a 5920k too, so I wonder what sort of frame rates people on older machines must be getting in the same situations.
Thanks for your help
You might want to use MSI Afterburner's on-screen display to check the GPU usage in the two scenarios. You will likely find that even though the FPS doesn't change much, the GPU usage jumps from, say, 50% to 80% when 3D is enabled.
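If you would rather log the usage than watch the OSD, a small script along these lines should also do the job - this assumes the nvidia-smi tool that ships with the NVIDIA driver is on your PATH (Afterburner itself isn't scripted here):
[code]
# Log overall GPU utilisation once per second; run it during a 2D session and
# again during a 3D session in the same spot, then compare the two logs.
import subprocess
import time

while True:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    print(time.strftime("%H:%M:%S"), result.stdout.strip() + "%")
    time.sleep(1)
[/code]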
Also, just having 3D enabled in the control panel but not toggled on in the game can make a big difference. Try disabling 3D in the control panel for a more accurate comparison.
Unfortunately, for gaming, a 5-year-old i-series CPU at 4GHz is going to give you pretty much identical performance to a latest-gen CPU at 4GHz. The only difference is that later generations may have more cores and will be more power efficient. Single-threaded performance hasn't really changed at all over the past half a decade, and it is unlikely to do so any time soon, as silicon has hit the 5GHz barrier.
Perhaps when we can break through that barrier and increase frequencies by leaps and bounds again, like in the old days, we will see performance climbing once more.
Games are generally still fundamentally single threaded, due primarily to the current-gen APIs being used. Thanks to AMD's innovations with the Mantle API, the other hope is DX12 and GLnext, which both reduce CPU overhead greatly and allow developers to distribute the workload across multiple threads properly.
Once games start to be properly programmed for the next-gen consoles and their multiple cores using DX12 and GLnext, multi-core CPUs will become more relevant; but again, the performance of 5-year-old CPUs and current-gen CPUs will still be about the same.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.