Hi folks, been using 3D Vision for a while now, but I just registered for the forum. I'm playing Skyrim at the moment and have been paying close attention to the TESVAL/Skyboost CPU optimization plugins recently released by Arisu and AlexanderBlade. There's been a lot of talk about CPU and GPU bottlenecking in their Bethesda forum threads. I just upgraded from a 9800 GT to a GTX 560, still on a 3.2GHz Core 2 Duo, and have seen amazing performance increases (not surprisingly), but it's made me wonder what effect 3D Vision has on CPU load. I haven't been able to find much information on the subject, but my guess is that the CPU processes the scene once and sends that info to the GPU, which then renders both the left- and right-eye frames, roughly doubling GPU load while not really affecting CPU load. Anybody more informed than me have a better idea of how it all works? What I'm wondering is whether 3D Vision systems become GPU bottlenecked more easily than non-3D Vision ones.
Yes. In general, most users who aren't achieving 60FPS per eye will be GPU bottlenecked before they're CPU limited, because stereo 3D roughly doubles the GPU load compared to a normal 2D Vsync (60Hz) load. On faster rigs, forced Vsync in 3D with a 60FPS-per-eye cap keeps CPU limitations from becoming a major issue; a fast rig that still can't reach 60FPS per eye, however, may be running into CPU or game engine limitations, and that can be a real problem.
With native or active 3D rendering, there is significant CPU overhead, as the CPU is tasked with generating nearly double the draw calls of 2D. From Nvidia's own whitepaper, which is worth a read if you're interested in stereo 3D issues:
http://developer.download.nvidia.com/whitepapers/2010/3D_Vision_Best_Practices_Guide.pdf
[quote]Active stereoization incurs runtime costs as well. For example, most titles already budget their draw call count to get the maximum fidelity possible while maintaining playable frame rates. Active stereoization will result in substantially more draw calls—up to twice as many—which can result in an application becoming severely CPU limited.[/quote]
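In other words, an engine doing its own stereo essentially runs its draw-call loop twice per frame, once per eye camera. A schematic sketch of the idea (the names here are hypothetical illustration, not any real engine's API):

[code]
# Schematic sketch of why active (engine-side) stereo roughly doubles
# draw calls. All names are hypothetical, not a real engine API.
def render_frame(scene, submit_draw_call):
    for eye in ("left", "right"):             # two cameras instead of one
        camera = scene.camera_for(eye)        # offset by half the eye separation
        for obj in scene.visible_objects(camera):
            submit_draw_call(obj, camera)     # CPU cost paid once per eye
[/code]

If the 2D version of that loop was already eating most of a frame's CPU budget, running it twice is exactly what pushes a game into being CPU limited.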
According to Nvidia, 3D Vision uses passive stereoization: the stereo 3D driver analyzes draw calls and converts them into two stereo frames using stereo heuristics. This process is not free, but there are a few key benefits to doing it this way: 1) it's not nearly as expensive as doubling the draw calls within the game engine using dual camera views, and 2) the stereo driver threads are decoupled from the game's rendering thread, which may already be bottlenecked, allowing the service to run on a lightly loaded CPU core rather than a heavily loaded one. You can actually observe the overhead of the Nvidia stereoscopic service by running Windows' built-in Resource Monitor and filtering on nvSCPAPIsvr.exe, which isolates the stereo service's CPU %. I've never seen its CPU usage go over 5-10%, though using the Nvidia laser sight is more taxing.
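If you'd rather log that over time than eyeball Resource Monitor, here's a rough sketch that samples the stereo service's CPU usage. This is just a quick Python script assuming the third-party psutil package is installed, not anything official:

[code]
# Rough sketch: sample the CPU usage of Nvidia's stereo service over time.
# Assumes the third-party psutil package is installed (pip install psutil);
# the process name nvSCPAPIsvr.exe comes from Resource Monitor as above.
import time
import psutil

TARGET = "nvscpapisvr.exe"  # compared case-insensitively

proc = next((p for p in psutil.process_iter(["name"])
             if (p.info["name"] or "").lower() == TARGET), None)
if proc is None:
    print("Stereo service not running")
else:
    proc.cpu_percent(None)   # prime the counter; the first call returns 0.0
    for _ in range(30):      # sample once a second for 30 seconds
        time.sleep(1.0)
        # Note: this is a percentage of one core, so divide by your core
        # count if you want a whole-system figure like Task Manager shows.
        print(f"{proc.cpu_percent(None):5.1f}% of one core")
[/code]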
But performance is the big advantage of Nvidia's stereo driver: it has relatively low CPU overhead, which maximizes GPU performance, especially since the stereo driver is able to issue draw calls to NVAPI directly. Compare this to native engine 3D solutions like Battlefield 3 and Avatar: The Game, where the game engine can't issue draw calls fast enough and GPU utilization may end up lower than expected as a result. Also, with solutions like DDD that allow more manipulation of draw calls, the extra CPU overhead of their driver incurs a greater performance penalty. I'm not sure whether it's that driver bottleneck or the lack of direct NVAPI access that prevents them from supporting SLI, but it further compounds the performance difference compared to 3D Vision. It's not just a problem with DDD on Nvidia, either; you see similar complaints from AMD users running DDD, with very little scaling in CrossFire.
In your case, we know Skyrim is heavily CPU limited and will fully tax a dual-core CPU. Any 3D Vision overhead (or any other system process, for that matter) is stealing cycles from Skyrim, potentially holding back your GPU. I would download something like MSI Afterburner and keep an eye on GPU utilization: if you're not close to 100% GPU utilization in areas where you're not getting 60FPS, then you're CPU limited and would likely benefit from a CPU upgrade. Users like me with multi-GPU setups see this even more; I only get 30-40FPS in cities, with the GPUs sitting in the 40-50% utilization range.
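For what it's worth, the same GPU utilization number Afterburner shows can also be polled programmatically through Nvidia's NVML library. A minimal sketch, assuming the pynvml Python bindings are installed:

[code]
# Minimal sketch: poll GPU core utilization via Nvidia's NVML library.
# Assumes the pynvml bindings are installed (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]
    for _ in range(10):                      # sample once a second, 10 samples
        for i, h in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(h)
            print(f"GPU{i}: {util.gpu:3d}% core, {util.memory:3d}% memory")
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
[/code]

If utilization sits well below 100% right where your framerate drops, it's the CPU (or the game engine) holding things back, not the GPU.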
Hope that helps answer some of your questions.
The simple answer is: 3D needs the same CPU as 2D, and double the GPU. I've had 3D Vision for a year and have checked 50+ games at 720p (Acer H5360), on an AMD B955 at 3.5GHz with a GTX 460.
Yup, informative and very interesting. I did try out MSI Afterburner; however, its overlay is blocked by Boris Vorontsov's antifreeze injector (there's apparently a fix for that, but it wasn't working for me). I'll probably disable the injector temporarily to check things out, but from what you've said it sounds like upgrading my GPU was a better choice than buying a new processor, especially for games like Skyrim that don't take advantage of multiple cores (I'm a modder, so I'll be tied up with that one game for a while). It also makes me wonder why developers bother with native 3D engines at all, unless it better distributes the load between video card and processor (they have to find something to do with all those cores, I guess).