So it's Nvidia's shitty drivers?
Why would anyone ever invest in Nvidia's future proprietary 'GameWorks' technology when they can't even keep their current technologies up to date?!
This is a show-stopping deal breaker! That and the capped resolutions.
720p these days is the UHD equivalent of 640x480.
Who's steering this car crash? Heads need to roll, IMO.
Res caps and basic core support are a MUST! It's not a luxury! It's basic implementation, FFS!
In DirectX 12 game performance, AMD are killing it!
It seems that AMD have excellent hardware and really bad drivers. That's why they are like buses: red, and crash all the time, anyway...
Nvidia have good hardware and amazing drivers. DirectX 12 takes away the rewards of driver optimisation, and this has left Nvidia with their pants around their ankles!
That and Async compute.
Nvidia has lost touch IMO!
3 core limit! hah
[quote="masterotaku"]The Witcher 3 screenshots. Keep in mind that fps were variable even standing still. Moving Geralt caused fps drops in all situations, going down to 60fps in 2D without 3Dmigoto. I also have a mod that increases LOD. It eats a few fps, but no more than 5 in 2D.
1- 1080p, 2D, 3Dmigoto disabled (82.5fps):
[img]http://u.cubeupload.com/masterotaku/witcher311080p2D.jpg[/img]
2- 900p, 3D, with 3Dmigoto enabled (46.2fps):
[img]http://u.cubeupload.com/masterotaku/witcher32900p3D.jpg[/img]
3- 900p, 2D after disabling 3D in-game, 3Dmigoto enabled (76fps) (I just noticed that I'm in a different place. But the CPU usage is the important part):
[img]http://u.cubeupload.com/masterotaku/witcher33900p3Dto2D.jpg[/img]
4- 900p, 3D, 3Dmigoto disabled (53.1fps):
[img]http://u.cubeupload.com/masterotaku/witcher34900p3Dno3Dm.jpg[/img]
5- 900p, 2D after disabling 3D in-game, 3Dmigoto disabled (83.8fps):
[img]http://u.cubeupload.com/masterotaku/witcher35900p3Dto2Dn.jpg[/img]
I hope that an overclocked i7 Cannonlake will be able to do 60fps in 3D perfectly next year...[/quote]
So this shows that the driver is WORKING correctly :)
You get HALF the FPS in 3D vs 2D, right? But lower CPU usage, right?
NOW, I wonder why... it's SIMPLE: the GPU needs to render the same VIEW 2 TIMES, which takes the stress OFF the CPU. I can't see any CPU or 3 core limitation there...
What I would expect from a 3 core limitation would be:
2D - 4/8 Cores - all are used.
3D - only 3 Cores from 4/8 are used. All remaining are flat 0% used.
What you showed here states exactly the opposite, that there is no limitation (Like I explained above) ;)
Keep in mind that a game works on Timers and it needs to be in sync, hence the lower CPU usage in 3D as well.
Also, try to do the same without 3DMigoto. Just have 3D Vision Enabled in NvPanel. You will see the same.
It's 3D Vision AUTOMATIC that still HOOKS in behind and does a lot of computing and so on. (Check my thread where I moved my wrapper from 3D Vision Automatic - since it was just enabling 3D Vision for me - to 3D Vision Automatic. I gained 10 FPS in 3D in SURROUND - that is around 20+ FPS in 3D on a single screen...)
This is a KNOWN thing and was said billions of times back in 2009 when 3D Vision was released: 3D Vision Automatic adds an FPS overhead. (I think the low CPU usage is due to sync times with the monitor or something like that. I know I read about it like 6 years ago... Problem was, until now we didn't have GPUs that could push so hard ;))
Also, it is weird, but I run the game at a steady 40FPS in 3D Surround on 2x980Ti, and the GPUs are always at 99% while the CPU is anywhere from 60-80% on all cores... Could it be related to the 1000 series? I don't know.
In any case, I see the same % used in both 2D and 3D. Which kind of says that the GPU is not able to get to 99% usage (CPU bottleneck?)
I wonder if 3D Vision doesn't "use" these 2D metrics... Instead of pushing the GPU further, it actually stays locked at the 2D usage, and the CPU is used less because of the 2-perspective rendering... when it should go the other way around: keep the CPU usage and stress the GPU more ;)
In any case, I think the 3D Vision Driver needs a bit of Profiling and Updating + Love ^_^
1x Palit RTX 2080Ti Pro Gaming OC (watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
[quote="GibsonRed"]So it's Nvidia's shitty drivers?
Why would anyone ever invest in Nvidia's future proprietary 'GameWorks' technology when they can't even keep their current technologies up to date?!
This is a show-stopping deal breaker! That and the capped resolutions.
720p these days is the UHD equivalent of 640x480.
Who's steering this car crash? Heads need to roll, IMO.
Res caps and basic core support are a MUST! It's not a luxury! It's basic implementation, FFS!
In DirectX 12 game performance, AMD are killing it!
It seems that AMD have excellent hardware and really bad drivers. That's why they are like buses: red, and crash all the time, anyway...
Nvidia have good hardware and amazing drivers. DirectX 12 takes away the rewards of driver optimisation, and this has left Nvidia with their pants around their ankles!
That and Async compute.
Nvidia has lost touch IMO!
3 core limit! hah
[/quote]
I agree with some points and disagree with others ;)
AMD doesn't have better hardware... different, yes. It performs better in certain areas and worse in others.
I heard their drivers got better in the last year or so ;)
Talking about limitations like 720p in 3DTV Play, Nvidia also limited the number of monitors in Surround to 3 + 1, lol.
When AMD invented Eyefinity (Nvidia copy-pasted it later), they gave you the ability to use 3x3, 6x6, 9x9 topologies as well, both portrait and landscape. Hell, you can even do Portrait - Landscape - Portrait...
Nvidia still hasn't updated their Surround support :(
Regarding the 3D Vision driver, it might not be the most optimal solution, but it's the only one we've got :( And it still behaves amazingly well 6+ years after release :) I just hope Nvidia will give it more love with improvements and new features ;)
I personally am still leaning towards there being something quirky going on between the 3D Vision driver in certain applications and the new Pascal 10-series cards. It just seems odd to me that the ones with little issue are those with 9-series cards, while I and maybe a few others are saying we have stuttering or other issues. In my case the stuttering seems to not be a core load issue, as I cannot reproduce the stuttering in 2D mode by disabling cores.
All I know is that in the games I am now having issues with, I had NO stuttering when running the same in-game settings on a 660Ti w/337.88 on Windows 8.1. All other components are the same.
I know the 1060 is not underpowered, so the stuttering is either a Windows 10 (non-Anniversary) problem, an Nvidia driver problem, or some combination of both.
Unfortunately Nvidia is the only one who can truly answer that for sure.
[quote="helifax"]
You get HALF the FPS in 3D vs 2D, right? But lower CPU usage, right?[/quote]
Not as low as 50% of the performance, but more or less, yes. In other open areas I can usually get 60fps per eye at 1080p, although I usually play at 1440p.
[quote="helifax"]
Also, try to do the same without 3DMigoto. Just have 3D Vision Enabled in NvPanel. You will see the same.[/quote]
I already did that. Screenshots 1, 4 and 5.
[quote="helifax"]Check my thread where I moved my wrapper from 3D Vision Automatic - since it was just enabling 3D Vision for me - to 3D Vision Automatic.[/quote]
You mean 3D Vision Automatic to 3D Vision Direct, right?
[quote="helifax"]
Also, it is weird, but I run the game at a steady 40FPS in 3D Surround on 2x980Ti, and the GPUs are always at 99% while the CPU is anywhere from 60-80% on all cores... Could it be related to the 1000 series? I don't know.[/quote]
Being GPU limited, your numbers sound right. I usually get 45fps at 2560x1440 in 3D (grass and shadows not at maximum, and HairWorks disabled, although I use other graphical mods). So for you: 45*0.8*2*16/(9*3) = 42.667fps (factor breakdown below; a quick worked version follows the list):
*0.8 because of 980Ti vs my 1080
*2 because of SLI (assuming perfect scaling)
/3 because of surround
*16/9 because of 1440p vs 1080p (1.7777778 times more pixels)
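The same arithmetic spelled out, in case anyone wants to plug in their own numbers (just a rough sketch; the 0.8 GPU ratio and the perfect SLI scaling are my assumptions, not measurements):
[code]
// Rough sketch of the fps estimate above; all scaling factors are assumptions.
#include <cstdio>

int main() {
    double base     = 45.0;       // my fps at 2560x1440 in 3D on a single 1080
    double gpuRatio = 0.8;        // assume a 980Ti does ~80% of a 1080
    double sli      = 2.0;        // assume perfect SLI scaling
    double pixels   = 16.0 / 9.0; // 1440p has ~1.78x the pixels of 1080p
    double surround = 3.0;        // surround = three 1080p screens
    printf("%.3f fps\n", base * gpuRatio * sli * pixels / surround); // 42.667 fps
    return 0;
}
[/code]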
The weird case was Metal Gear Solid V Ground Zeroes. I was using a config file modification to increase LOD levels A LOT. In 2D (before the 3D fix was made) I never got less than 60fps, while in 3D it could go as low as 22 or 24 fps.
This (3D):
[img]http://u.cubeupload.com/masterotaku/mgsgz3d.png[/img]
VS this (2D):
[img]http://u.cubeupload.com/masterotaku/mgsgz2d.png[/img]
Reducing LOD and shadows to High made the game mostly 60fps in 3D, except for some camera positions at 53fps that almost can't go higher even with many settings at low.
Yes,
3D Vision Direct basically gives you 2 buffers for Left and Right eyes in which you manually put the frames.
3D Vision Automatic does that too, but it also adds a ton of extra code that makes the image 3D. My wrapper works exactly the same way:
- First, modifying the game engine in real time to generate the 3D effect.
- Second, presenting the modified results.
The first step always adds a CPU overhead, as there are lots of other instructions & logic going on in the back.
Now, from what I've experienced and what we see every day, 3D Vision Automatic is a BEAST ;)
Look at Witcher 3 for example. We only fixed broken shaders (and there weren't that many different types). Now imagine taking the game in 2D and coding or writing a wrapper to generate the same result ;) Duplicating render buffers, duplicating draw calls, duplicating and applying algorithms to a lot of stuff ;)
All that takes a hit ;)
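To make that duplication concrete, here is a minimal sketch of what an Automatic-style wrapper conceptually does every frame. All the types and names here are illustrative stand-ins I made up; the real driver hooks the D3D/OpenGL calls rather than the engine itself:
[code]
// Hedged sketch: conceptual per-frame work of an "Automatic"-style wrapper.
// Types and functions are illustrative stand-ins, not the driver's internals.
struct Vec3 { float x, y, z; };
struct Camera { Vec3 position; Vec3 rightAxis; };
struct Scene {};
struct RenderTarget {};

void RenderScene(const Scene&, const Camera&, RenderTarget&) { /* draw calls */ }
void PresentEyes(const RenderTarget&, const RenderTarget&)   { /* page flip  */ }

Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 operator-(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

RenderTarget g_leftEye, g_rightEye;

void PresentStereoFrame(const Scene& scene, Camera cam, float separation) {
    Camera left = cam, right = cam;
    // Offset each eye by half the separation along the camera's right axis.
    left.position  = cam.position - cam.rightAxis * (separation * 0.5f);
    right.position = cam.position + cam.rightAxis * (separation * 0.5f);

    // Every draw call, constant-buffer update and render target gets issued
    // twice - roughly double the GPU work, plus CPU time for the bookkeeping.
    RenderScene(scene, left,  g_leftEye);
    RenderScene(scene, right, g_rightEye);

    PresentEyes(g_leftEye, g_rightEye); // driver pairs these for the glasses
}
[/code]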
In any case, it's curious that in 3D Vision you have neither the CPU at 99% nor the GPU at 99% and you are still under 60FPS, which makes no sense... It clearly shows a bottleneck... But where?
I don't think it's the HDD being used at 100%, or RAM/VRAM access time...
I think a lot more profiling is required there to figure out where the bottleneck is...
If there isn't any other bottleneck, and profiling the game with 3D Vision indeed shows the 3D Vision driver just... "sits" there, then clearly the problem is there...
Maybe somebody left a "Sleep(x)" somewhere lol ^_^
[quote="helifax"]
In any case, it's curious that in 3D Vision you have neither the CPU at 99% nor the GPU at 99% and you are still under 60FPS, which makes no sense... It clearly shows a bottleneck... But where?
I don't think it's the HDD being used at 100%, or RAM/VRAM access time...
I think a lot more profiling is required there to figure out where the bottleneck is...
If there isn't any other bottleneck, and profiling the game with 3D Vision indeed shows the 3D Vision driver just... "sits" there, then clearly the problem is there...
Maybe somebody left a "Sleep(x)" somewhere lol ^_^
[/quote]
That is exactly what I see in the games I have issues with. Neither the GPU nor the CPU appears to be the bottleneck, as neither is running at 99%. However, a bottleneck must exist, as that is exactly what the stuttering seems to indicate.
Anyway, would Nvidia ever consider spinning 3D Vision off into a paid product? Either one-time or subscription-based. I know I would gladly pay for official continued support to make sure the newest games run in 3D and that all the lingering bugs get squashed. Without that added financial benefit, I never see Nvidia really doing much except feigning support with a new CM profile here or there.
Did you guys contact support, or better yet, contact "ManuelG" on the drivers section?
If you make a video showing exactly what is happening and how, they will open a bug and it will get fixed (if there is something to fix).
Because I am unable to reproduce this (even if I put my Witcher 3 on SLI 980Ti @ 1x1080p in 3D), I get a constant 60FPS in 3D :( If I use one 980Ti I get close to 60FPS, but the GPU works at 99%.
But from what I see, everyone here is using a newer GPU from the 1000 series and a single screen at 1080p or more.
(The only exception is GTAV, where I get a constant 45FPS in 3D Surround, but I got that on my 780Tis as well, and that one looks to be a game problem rather than a driver one.)
In any case, I would recommend contacting ManuelG, but with a proper reproduction scenario ;) showing exactly what we discussed here! I don't think they are even aware ;) and 3D Vision is not getting tested extensively...
[quote="helifax"]Did you guys contact support, or better yet, contact "ManuelG" on the drivers section?
If you make a video showing exactly what is happening and how, they will open a bug and it will get fixed (if there is something to fix).
Because I am unable to reproduce this (even if I put my Witcher 3 on SLI 980Ti @ 1x1080p in 3D), I get a constant 60FPS in 3D :( If I use one 980Ti I get close to 60FPS, but the GPU works at 99%.
But from what I see, everyone here is using a newer GPU from the 1000 series and a single screen at 1080p or more.
(The only exception is GTAV, where I get a constant 45FPS in 3D Surround, but I got that on my 780Tis as well, and that one looks to be a game problem rather than a driver one.)
In any case, I would recommend contacting ManuelG, but with a proper reproduction scenario ;) showing exactly what we discussed here! I don't think they are even aware ;) and 3D Vision is not getting tested extensively...[/quote]
I did, and I currently have an open ticket with support. I did not know about ManuelG, but I did post my issue in the official 370.72 driver thread and even opened a new thread in the driver support section, and never received any response.
[quote="helifax"]Yes,
In any case, it's curious that in 3D Vision you have neither the CPU at 99% nor the GPU at 99% and you are still under 60FPS, which makes no sense... It clearly shows a bottleneck... But where?
[/quote]
Hi helifax!
The bottleneck is the number of cores that the game is using, mate :)
Let me explain by example:
If the game has, let's say, 4 threads, they will work on 4 cores. If you have a 6-core system, the OS will spread the 4 threads over the 6 cores. The OS will bounce the 4 threads around the 6 cores thousands of times a second, so when you look at the MSI OSD or some other software, it will look as though no CPU core is being saturated, as the CPU remains at ~66% usage.
Don't be fooled, however! This simply means that 4 cores out of the 6 are completely saturated at any given time by the game's 4 threads, and the other 2 cores are redundant.
We can easily test this: Alt-Tab out of the game and set the affinity of the .exe in Task Manager
[url]http://www.techrepublic.com/blog/windows-and-office/change-the-processor-affinity-setting-in-windows-7-to-gain-a-performance-edge/[/url]
to only use a certain number of cores.
Important note: Hyperthreading must be disabled! OR the experiment should ONLY use even-numbered cores, i.e. in a 4-core system, only cores 0, 2, 4 should be used and the hyperthreaded virtual cores 1, 3, 5 should be disabled!
When you disable 2 cores, you will notice that the remaining 4 cores are now completely saturated, but alt-tabbing back into the game shows that neither the performance of the game nor the GPU usage has decreased! If you disable any further cores, your FPS takes a massive hit, as the game will no longer scale across 4 cores but only 3.
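If you would rather script the affinity step than click through Task Manager every run, here is a minimal Win32 sketch (a rough equivalent of Task Manager's Set Affinity; you supply the game's PID yourself, and error handling is trimmed):
[code]
// Hedged sketch: pin a running process to a chosen set of logical cores.
// Usage: setaffinity <pid> <hex mask>, e.g. mask 0x5 = cores 0 and 2
// (one logical core per physical core when hyperthreading is on).
#include <windows.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 3) { printf("usage: setaffinity <pid> <hex mask>\n"); return 1; }
    DWORD pid      = strtoul(argv[1], nullptr, 10);
    DWORD_PTR mask = strtoul(argv[2], nullptr, 16);

    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!h) { printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }
    if (!SetProcessAffinityMask(h, mask))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    CloseHandle(h);
    return 0;
}
[/code]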
This is why in my tests, I have specifically shown what happens when the game .exe is limited to a certain combination of cores.
From my results, you can see that in 2D, the FPS scales very well with GPU usage as you increase the core count. We have to be very careful to ensure that the GPU is not anywhere near saturation, of course.
But once 3D Vision is enabled, after 2-3 cores are enabled, enabling more cores increases neither GPU usage nor FPS.
This experimental procedure proves that the 3D Vision driver is severely limiting the core usage of the .exe.
It's as though a 6-thread game is being forced to run on 3 threads by the 3D Vision driver, or a 4-thread game is being forced to run on 2 threads by the 3D Vision driver.
If there were a % overhead on each thread, the CPU utilisation would still remain the same, as the % overhead would sit on top of each thread. But what we are seeing here is that the threads themselves are being limited and reduced in number.
In summary, we can't look at the CPU as a whole. We have to look deeper at a single core at a time to investigate what is happening on each thread. Then we compare the findings to what the GPU usage says, and what the FPS counter says. Taking all 3 of these results together is the only way to reach any meaningful conclusion.
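If you want to log that per-core view rather than eyeball the OSD, a small sketch using Windows' PDH performance counters works (I'm assuming the English counter names here; link with pdh.lib). It prints each logical core's load once a second:
[code]
// Hedged sketch: sample per-core "% Processor Time" every second via PDH.
// Watch for a fixed number of cores sitting pegged while the rest idle.
#include <windows.h>
#include <pdh.h>
#include <cstdio>
#include <vector>

int main() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    PDH_HQUERY query;
    PdhOpenQuery(nullptr, 0, &query);

    std::vector<PDH_HCOUNTER> counters(si.dwNumberOfProcessors);
    for (DWORD i = 0; i < si.dwNumberOfProcessors; ++i) {
        wchar_t path[64];
        swprintf(path, 64, L"\\Processor(%lu)\\%% Processor Time", i);
        PdhAddEnglishCounterW(query, path, 0, &counters[i]);
    }

    PdhCollectQueryData(query); // rate counters need two samples
    for (;;) {                  // Ctrl+C to stop
        Sleep(1000);
        PdhCollectQueryData(query);
        for (DWORD i = 0; i < si.dwNumberOfProcessors; ++i) {
            PDH_FMT_COUNTERVALUE v;
            PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE, nullptr, &v);
            printf("core %lu: %5.1f%%  ", i, v.doubleValue);
        }
        printf("\n");
    }
}
[/code]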
I have taken this one step further in my GTAV benchmarks.
We can hypothesise and then confirm that indeed the cores are being bottlenecked in number (not performance!).
We can do this by underclocking the CPU.
a. If indeed a core bottleneck exists, then we will see exactly the same CPU usage (each thread is saturating some core at any given time), the GPU will decrease in utilisation (due to the decreased CPU clock), and the FPS will decrease.
b. If, however, the CPU threads/cores are NOT the source of the bottleneck, then all we will see is higher CPU usage (the CPU will just be used more to compensate for the lower frequency); the GPU will stay at the same usage, and the FPS will stay the same.
My GTAV benchmarks are below. I will leave you to draw your own conclusions (Hint: We see scenario a. is true) :)
Results (GPU usage shown as GPU1 / GPU2 / total):

Intel Xeon x5660 @ 4.2GHz
1 core  = 12 fps
2 cores = 40 fps, 35% / 46% (total 40%)
3 cores = 50 fps, 37% / 60% (total 48%)
4 cores = 50 fps, 37% / 60% (total 48%)
5 cores = 50 fps, 37% / 60% (total 48%)
6 cores = 50 fps, 37% / 60% (total 48%)
6 cores, 2D (3D toggled off) = 138 fps, 97% / 98% (total 97%)

Intel Xeon x5660 @ 2.4GHz
1 core  = 7 fps
2 cores = 25 fps, 27% / 46% (total 36%)
3 cores = 36 fps, 33% / 46% (total 39%)
4 cores = 38 fps, 35% / 47% (total 41%)
5 cores = 38 fps, 35% / 47% (total 41%)
6 cores = 38 fps, 35% / 47% (total 41%)
6 cores, 2D (3D toggled off) = 100 fps, 65% / 65% (total 65%)
Note, these GTAV tests were done using 2x 970 in SLI - it's not a GPU generation issue.
In summary, it is clear that when underclocking the CPU, the CPU utilisation remains the same, the GPU decreases in utilisation, and the FPS drops.
This conclusively proves that, for some reason, when the 3D Vision driver is active, the game threads running on CPU cores are severely restricted in number.
I don't think it's a 3 core limit - I think it's a half-the-game-threads limit; i.e. 6 threads in 2D become 3 threads in 3D, and 4 threads in 2D become 2 threads in 3D.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
[quote="helifax"]
Because I am unable to reproduce this (even if I put my Witcher 3 on SLI 980Ti @ 1x1080p in 3D), I get a constant 60FPS in 3D :( If I use one 980Ti I get close to 60FPS, but the GPU works at 99%.
[/quote]
This is because you are using an amazingly superb CPU overclocked to 5GHz, mate :)
For example, in 3D in The Witcher 3 I get an absolute minimum of 40fps. Your CPU's IPC is 40% faster than mine, so 50fps + 40% will always be >60fps!
To reproduce our results, you have to:
1. Underclock your CPU to <3GHz.
2. Do the tests with a combination of different core affinities in Task Manager, at a very low resolution so that the GPU isn't saturated / bottlenecking.
I guarantee you will reproduce our results.
(Also don't forget to disable vsync!) :)
Heh;)
Yeah, I forgot about it :)) It's been clocked like that since I got it (silicon lottery win ^_^)
I can try that ^_^. It's also very interesting that when I monitor the "virtual cores" (the other 4 cores) I also see activity on them, but it's hard to say if that activity comes from the game or from Windows, for example ;)
Also, can you let ManuelG (Manuel Guzman) know about it? :) It's actually very interesting! Maybe they can run a few tests as well and find out what is going on ;)
Interestingly enough, I came across this thread :))
[url]https://forums.geforce.com/default/topic/960821/geforce-drivers/official-372-70-game-ready-whql-display-driver-feedback-thread-released-8-30-16-/post/4974161/#4974161[/url]
The guy there says, I quote:
[quote]
Just wanted to report that with 372.70, in Rise of the Tomb Raider with SLI Titan XP I can get about 108fps (all max, no AA) average at 4K resolution (in-game benchmark) with DX12. However! With no overclocking, sometimes the screen freezes but the game continues to run. To fix it, I just ALT-TAB or WIN-D to the desktop and then ALT-TAB back to the game, and it continues to play. Also, I notice a bunch of artifacts on the screen from time to time.
In DX11 (all max, no AA) at 4K res I only get about 70fps, but with far fewer problems in the game. The cards only show about 60-70% usage each.
[/quote]
Now, I wonder if the lower utilisation isn't something somehow related to DX11? I don't know... I really hate these types of issues ;)) they take a lot of time to figure out what's caused by what :))
I know that 3D Vision works under DX12 (but performance is not great). Still, we could do the same test by just enabling it in NvPanel and see what we get?
I also posted in the Driver Feedback section about this thread and the issue we see ;)
Thank you. It would be interesting to get to the bottom of this problem.
In a previous post, I wondered...
...if it could be a draw call limit issue, which DX12 purports to solve, i.e. could it be possible that with 3D Vision activated there are twice the number of draw calls being issued, thereby doubly burdening the main game thread, which results in halving the number of cores that can be used?
Unfortunately, I am not versed in coding etc. like you, helifax, or bo3b, so my technical knowledge is limited in that respect. I also do not have Windows 10 installed, so unfortunately I can't do any DX12 tests.
Thank you. It would be interesting to get to the bottom of this problem.
In a previous post, I wondered...
...if it could be a draw calls limit issue which DX12 purports to solve i.e. could it be possible that with 3D Vision activated, there are twice the number of draw calls being utilised thereby doubly burdening the main game thread, which results in halving the number of cores being able to be used?
Unfortunately, I am not versed in coding etc like you helifax, or bo3b, so my technical knowledge is limited to this extent. I also do not have Windows 10 installed so unfortunately, can't do any DX12 tests.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Why would anyone ever invest in Nvidia's future propriety 'gameworks' technology when they can't even keep their current technologies up to date!?!
This is a show stopping deal breaker! That and the capped resolutions.
720p these days is the UHD equivalent of 640x480.
Who's steering this car crash? Heads need to roll, IMO.
Rez caps and basic core support is a MUST! It's not a luxury! It's basic implementation FFS!
In Direct X 12 games performance AMD are killing it!
It seems that AMD have excellent hardware and really bad drivers. That's why they are like buses, red and crash all the time, anyway.....
Nvidia have good hardware but amazing drivers. Direct 12 takes away the rewards of driver optimisation and this has left Nvidia with there pants around their ankles!
That and Async compute.
Nvidia has lost touch IMO!
3 core limit! hah
So this shows that the driver is WORKING correctly:)
You get HALF the FPS in 3D VS 2D right? But lower CPU usage Right?
NOW, I wonder why... it's SIMPLE: The GPU needs to render the same VIEW 2 TIMES which takes the stress OF THE CPU. I can't see any CPU or 3 core limitation there...
What I would expect from a 3 core limitation would be :
2D - 4/8 Cores - all are used.
3D - only 3 Cores from 4/8 are used. All remaining are flat 0% used.
What you showed here states exactly the opposite, that there is no limitation (Like I explained above) ;)
Keep in mind that a game works on Timers and it needs to be in sync, hence the lower CPU usage in 3D as well.
Also, try to do the same without 3DMigoto. Just have 3D Vision Enabled in NvPanel. You will see the same.
Is 3D Vision AUTOMATIC, that still HOOKS in the back and does a lot of Computing and so on. (Check my thread were I moved my wrapper from 3D Vision Automatic - Since it was just enabling 3D Vision for me - to 3D Vision Automatic. I gained 10 FPS in 3D in SURROUND - That is around 20+ FPS in 3D in single screen...)
This is a KNOWN thing and was said billions of times back in 2009 when 3D Vision was released. That 3D Vision Automatic adds an overhead to FPS. (I think the low CPU usage is due to sync times with the monitor or something like this. I know I've read about it like 6 years ago... Problem was, until now we didn't had the GPU to push so hard;))
Also, it is weird but I run the game at steady 40FPS in 3D Surround on 2x980TI and GPUS are always 99% and CPU is anywhere from 60-80% on all cores... Could it be related to 1000 series or I don't know ?
In any case I see in both cases 2D vs 3D is used the same %. Which kinda says that the GPU is not able to get to 99% usage (CPU bottleneck ?)
I wonder if 3D Vision doesn't "uses" these 2D metrics... And instead of Pushing the GPU further it actually stays lock at the usage of 2D and instead the CPU is used less because of the 2 Perspectives rendering... when it should to the other way around: Keep CPU usage and stress the GPU more;)
In any case, I think the 3D Vision Driver needs a bit of Profiling and Updating + Love ^_^
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
I agree with some points and disagree with others;)
AMD doesn't have better hardware... different yes. It performs better in certain areas and worse in others.
I heard their drivers got better in the last year or so;)
Talking about limitations like 720p in 3D Play, Nvidia also limited the Number of monitors in Surround to 3 + 1 lol.
AMD when they invented Eyefinity (nvidia Copy-Pasted it later) gave you the ability to use 3x3, 6x6, 9x9 topoligies as well. Both Portrait and Landscape. Hell you can Even Do Portrait - Landscape - Portrait...
Nvidia didn't update their Surround support still:(
Regarding 3D VIsion driver, it might not be the most optimum solution, but is the only one we got:( And it still behaves amazing after 6+ years since release:) I just hope Nvidia will give it more love regarding improvements and new features;)
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
All I know is that on the games I am now having issues with I had NO stuttering when running the same in-game setttings on a 660ti w/337.88 on Windows 8.1. All other components are the same.
I know the 1060 is not underpowered so the stuttering is either a Windows 10 (non Anniversary) problem, a Nvidia driver problem, or some combination of both.
Unfortunately Nvidia is the only one who can truly answer that for sure.
AMD FX-8350 4GHz
Gigabyte 990FXA-UD3 Rev 4.0
G-Skill PC3-10700- 16GB
Gigabyte Windforce GTX 1060 OC 6GB - 417.01
Creative Soundblaster Z
ViewSonic VX2268WM Black 22" 1680x1050 5ms 120Hz 3Dvision
Windows 10 x64 1709
Not so low as 50% performance, but more or less yes. In other open areas I can get 60fps per eye at 1080p usually, although I usually play at 1440p.
I already did that. Screenshots 1, 4 and 5.
You mean 3D Vision Automatic to 3D Vision Direct, right?
Being GPU limited, your numbers sound right. I usually get 45fps at 2560x1440 in 3D (grass and shadows not at maximum, and hairworks disabled, although I use other graphical mods). So for you: 45*0.8*2*16/(9*3) = 42.667fps
*0.8 because of 980Ti vs my 1080
*2 because of SLI (assuming perfect scaling)
/3 because of surround
*16/9 because of 1440p vs 1080p (1.7777778 times more pixels)
The weird case was Metal Gear Solid V Ground Zeroes. I was using a config file modification to increase LOD levels A LOT. In 2D (before the 3D fix was made) I never got less than 60fps, while in 3D it could go as low as 22 or 24 fps.
This (3D):
VS this (2D):
Reducing LOD and shadows to High made the game mostly 60fps in 3D, except for some camera positions at 53fps that almost can't go higher even with many settings at low.
CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com
3D Vision Direct basically gives you 2 buffers for Left and Right eyes in which you manually put the frames.
3D Vision Automatic does that but also has all the ton of extra code that makes the image 3D. My wrapper works exactly the same:
- First modifying the game engine - real-time to generate the 3D effect.
- Secondly present the modified results.
The first one always adds a CPU overhead as there are lots of other instructions & logic going in the back.
Now from what I experienced and we see every-day 3D Vision Automatic is a BEAST;)
Look at Witcher 3 for example. We only fixed broken shaders (which weren't that many different types). However, imagine taking the game in 2D and coding or making a wrapper to generate the same result;) Duplicating Render-Buffers, duplicating draw calls, duplicating and applying algorithms to a lot of stuff;)
All that takes a hit;)
In any case, curios that in 3D Vision you neither have the CPU used 99% or the GPU used 99% and you are still under 60FPS which makes no sense... It shows a bottleneck clearly... But where?
I don't think is the HDD being used 100% or the RAM/VRAM access time...
I think a lot more profiling is required there to figure out where the bottleneck is...
If there isn't any other bottleneck and indeed profiling the game with 3D Vision shows the 3D Vision driver just... "sits" there, then clearly the problem is there...
Maybe somebody left a "Sleep(x)" somewhere lol ^_^
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
That is exactly what I see on the games I have issues with. Neither the GPU nor CPU appear to be the bottleneck as neither is running at 99%. However a bottleneck must exist as that is exactly what the stuttering seems to indicate.
Anyway would Nvidia ever consider spinning 3dVision off into a paid product? Either one time or subscription based. I know I would gladly pay for official continued support to make sure the newest games run in 3d and that all the lingering bugs get squashed. Without that added financial benefit I never see Nvidia really doing much except feigning support with a new CM profile here or there.
AMD FX-8350 4GHz
Gigabyte 990FXA-UD3 Rev 4.0
G-Skill PC3-10700- 16GB
Gigabyte Windforce GTX 1060 OC 6GB - 417.01
Creative Soundblaster Z
ViewSonic VX2268WM Black 22" 1680x1050 5ms 120Hz 3Dvision
Windows 10 x64 1709
If you make a video showing exactly what and how is happening, they will open a bug and it will get fixed (if there is something to fix).
Because I am unable to reproduce this (even if I put my Witcher 3 on a SLI 980Ti @ 1x1080p in 3D) I get constant 60FPS in 3D :( If I use one 980Ti I get close to 60FPS but the GPU works 99%.
But from what I see everyone here is using a newer GPU from the 1000 series and a single Screen 1080p or more.
(The only exception is GTAV where I get constant 45FPS in 3D Surround, but I did that on my 780Tis as well and that one looks to be a game problem rather than a driver one).
In any case, I would recommend contacting ManuelG but with a proper reproduction scenario;) showing exactly what we discussed here! I don't think they are even aware;) and 3D Vision is not getting tested extensively...
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
I did and currently have an open ticket with support. I did not know about ManuelG, but I did post my issue in the official 370.72 driver thread and even opened a new thread on the driver support section and never received any response.
AMD FX-8350 4GHz
Gigabyte 990FXA-UD3 Rev 4.0
G-Skill PC3-10700- 16GB
Gigabyte Windforce GTX 1060 OC 6GB - 417.01
Creative Soundblaster Z
ViewSonic VX2268WM Black 22" 1680x1050 5ms 120Hz 3Dvision
Windows 10 x64 1709
Hi helifax!
The bottleneck is the number of cores that the game is using mate :)
Let me explain by example:
If the game has, let's say, 4 threads, they will run on 4 cores. If you have a 6-core system, the OS will spread those 4 threads over the 6 cores, migrating them between cores many times a second, so when you look at the MSI OSD or other monitoring software it will look as though no core is being saturated: the CPU sits at ~66% overall usage.
Don't be fooled, however! It simply means that at any given moment 4 of the 6 cores are completely saturated by the game's 4 threads, while the other 2 cores contribute nothing.
We can easily test this: Alt-Tab out of the game and set the affinity of the .exe in Task Manager
http://www.techrepublic.com/blog/windows-and-office/change-the-processor-affinity-setting-in-windows-7-to-gain-a-performance-edge/
so that it only uses a certain number of cores.
Important note: hyperthreading must be disabled, OR the experiment should use only one logical core per physical core; e.g. on a 4-core system with hyperthreading, tick only logical cores 0, 2, 4 and 6, and leave their hyperthreaded siblings 1, 3, 5 and 7 unticked!
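(If you'd rather script this than click through Task Manager each run, here is a minimal sketch using Python's psutil library. The exe name is illustrative, and it assumes Windows enumerates hyperthread siblings adjacently, so even-numbered logical cores map to distinct physical cores.)
[code]
import psutil

def limit_to_physical_cores(exe_name: str, n_cores: int) -> None:
    """Pin the named process to one logical core per physical core."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == exe_name:
            # Even-numbered logical cores, assuming HT siblings are adjacent
            # (0/1 share a physical core, 2/3 share the next, and so on).
            proc.cpu_affinity(list(range(0, n_cores * 2, 2)))
            return
    raise RuntimeError(f"{exe_name} is not running")

limit_to_physical_cores("witcher3.exe", 4)  # illustrative exe name
[/code]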
When you disable 2 cores, you will notice that the remaining 4 cores are now completely saturated, yet Alt-Tabbing back into the game shows that neither the game's performance nor the GPU usage has decreased! If you disable any further cores, your FPS takes a massive hit, as the game can no longer scale across 4 cores but only 3.
This is why in my tests, I have specifically shown what happens when the game .exe is limited to a certain combination of cores.
From my results, you can see that in 2D the FPS scales very well with GPU usage as you increase the core count. (We have to be careful, of course, to ensure the GPU is nowhere near saturated.)
But once 3D Vision is enabled, beyond 2-3 cores, enabling more cores increases neither the GPU usage nor the FPS.
This experimental procedure shows that the 3D Vision driver is severely limiting the core usage of the .exe.
It's as though a 6-thread game is being forced to run on 3 threads by the 3D Vision driver, or a 4-thread game on 2 threads.
If there were a % overhead on each thread, the CPU utilisation pattern would remain the same, since the overhead would sit on top of each thread. What we are seeing instead is that the threads themselves are being reduced in number.
In summary, we can't look at the CPU as a whole. We have to look deeper, one core at a time, to investigate what is happening on each thread, then compare those findings with the GPU usage and with the FPS counter. Only by taking all three results together can we reach a meaningful conclusion.
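(As a rough illustration of "looking one core at a time", a short psutil loop can expose saturated individual cores that the aggregate percentage hides. A sketch, not part of the original tests:)
[code]
import psutil

# Sample each logical core separately: an "idle-looking" 66% aggregate
# can actually be a handful of cores pinned near 100%.
for _ in range(10):  # ten one-second samples
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    hot = [(i, p) for i, p in enumerate(per_core) if p > 90.0]
    avg = sum(per_core) / len(per_core)
    print(f"average={avg:5.1f}%  near-saturated cores={hot}")
[/code]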
I have taken this one step further in my GTAV benchmarks.
We can hypothesise and then confirm that indeed the cores are being bottlenecked in number (not performance!).
We can do this by underclocking the CPU.
a. If a core-count bottleneck does exist, we will see exactly the same CPU usage (each thread still saturates some core at any given time), the GPU will scale down (due to the decreased CPU clock), and the FPS will decrease.
b. If, however, the CPU threads/cores are NOT the source of the bottleneck, all we will see is higher CPU usage (the CPU simply works more to compensate for the lower frequency), while the GPU usage and the FPS stay the same.
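(To make the decision rule explicit, the two outcomes can be written as a tiny predicate; this is purely a restatement of the logic above, not code from the thread:)
[code]
def diagnose(cpu_usage_unchanged: bool, gpu_usage_dropped: bool,
             fps_dropped: bool) -> str:
    """Classify an underclock run against the two scenarios above."""
    if cpu_usage_unchanged and gpu_usage_dropped and fps_dropped:
        return "scenario a: bottlenecked by thread/core COUNT"
    if not cpu_usage_unchanged and not gpu_usage_dropped and not fps_dropped:
        return "scenario b: CPU is not the bottleneck"
    return "inconclusive - retest"

print(diagnose(True, True, True))  # the pattern the GTAV numbers show
[/code]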
My GTAV benchmarks are below. I will leave you to draw your own conclusions (Hint: We see scenario a. is true) :)
Results:

Intel Xeon x5660 @ 4.2GHz (per-GPU usage = GPU1 / GPU2):
1 core  = 12 fps
2 cores = 40 fps, 35% / 46%, total GPU usage = 40%
3 cores = 50 fps, 37% / 60%, total GPU usage = 48%
4 cores = 50 fps, 37% / 60%, total GPU usage = 48%
5 cores = 50 fps, 37% / 60%, total GPU usage = 48%
6 cores = 50 fps, 37% / 60%, total GPU usage = 48%
6 cores, 2D (3D toggled off) = 138 fps, 97% / 98%, total GPU usage = 97%

Intel Xeon x5660 @ 2.4GHz (per-GPU usage = GPU1 / GPU2):
1 core  = 7 fps
2 cores = 25 fps, 27% / 46%, total GPU usage = 36%
3 cores = 36 fps, 33% / 46%, total GPU usage = 39%
4 cores = 38 fps, 35% / 47%, total GPU usage = 41%
5 cores = 38 fps, 35% / 47%, total GPU usage = 41%
6 cores = 38 fps, 35% / 47%, total GPU usage = 41%
6 cores, 2D (3D toggled off) = 100 fps, 65% / 65%, total GPU usage = 65%
Note: these GTAV tests were done using 2x 970s in SLI, so it is not a GPU-generation issue.
In summary, it is clear that when underclocking the CPU, the CPU utilisation remains the same, the GPU decreases in utilisation, and the FPS drops.
This conclusively shows that, for some reason, when the 3D Vision driver is active, the game threads running on CPU cores are severely restricted in number.
I don't think it's a 3-core limit - I think it's a half-the-game-threads limit; i.e. 6 threads in 2D are limited to 3 threads in 3D, and 4 threads in 2D to 2 threads in 3D.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
This is because you are using an amazingly superb CPU overclocked to 5GHz, mate :)
For example, in 3D in The Witcher 3, I get an absolute minimum of 40fps. Your CPU's IPC is ~40% faster than mine, so 50fps + 40% = 70fps, which will always be >60fps!
To reproduce our results, you have to:
1. Underclock your CPU to <3GHz.
2. Run the tests with different core-affinity combinations in Task Manager (a scripted sweep is sketched below), at a very low resolution, so that the GPU isn't saturating / bottlenecking.
I guarantee you will reproduce our results.
(Also don't forget to disable vsync!) :)
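(If you want to automate step 2, a psutil sketch like this steps through the affinity combinations for you; the exe name is illustrative, and it again assumes adjacent hyperthread siblings:)
[code]
import psutil

def find(exe_name: str) -> psutil.Process:
    return next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == exe_name)

game = find("witcher3.exe")  # illustrative exe name
for n in range(1, 7):
    # One logical core per physical core, assuming adjacent HT siblings.
    game.cpu_affinity(list(range(0, n * 2, 2)))
    input(f"{n} physical core(s) enabled - note FPS/GPU%, press Enter")
[/code]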
Yeah, I forgot about it :)) It's been clocked like that since I got it (silicon lottery win ^_^).
I can try that ^_^. It's also very interesting that when I monitor the "virtual cores" (the other 4 logical cores) I see activity on them too, but it's hard to say whether that activity comes from the game or from Windows, for example ;)
Also, can you let ManuelG (Manuel Guzman) know about it? :) It's actually very interesting! Maybe they can run a few tests as well and find out what is going on ;)
https://forums.geforce.com/default/topic/960821/geforce-drivers/official-372-70-game-ready-whql-display-driver-feedback-thread-released-8-30-16-/post/4974161/#4974161
The guy there says, I quote:
Now, I wonder if the lower utilisation is somehow related to DX11? I don't know... I really hate these kinds of issues ;)) they take a lot of time to figure out what is caused by what :))
I know that 3D Vision works (though performance is not great) under DX12. Still, we could do the same test there, just enabling it in NVPanel, and see what we get?
I also posted in the Driver Feedback section about this thread and the issue we see;)
In a previous post, I wondered if it could be a draw-call limit issue, which DX12 purports to solve; i.e. could it be that with 3D Vision activated there are twice the number of draw calls being issued, doubly burdening the main game thread and effectively halving the number of cores the game is able to use?
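(Purely as a back-of-envelope illustration of that hypothesis, with made-up numbers: if frame submission is limited by how many draw calls per second one thread can issue, rendering each eye separately doubles the calls per frame and halves the CPU-bound FPS ceiling. None of these figures are measurements.)
[code]
# All numbers are made up for illustration; only the ratio matters.
CALLS_PER_SECOND = 100_000   # assumed single-thread submission throughput
DRAWS_PER_FRAME_2D = 1_500   # assumed draw calls in one 2D frame

fps_2d = CALLS_PER_SECOND / DRAWS_PER_FRAME_2D        # ~66.7 fps
fps_3d = CALLS_PER_SECOND / (2 * DRAWS_PER_FRAME_2D)  # ~33.3 fps, one pass per eye
print(f"CPU-bound ceiling: {fps_2d:.1f} fps in 2D, {fps_3d:.1f} fps in 3D")
[/code]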
Unfortunately, I am not versed in coding like you, helifax, or bo3b, so my technical knowledge is limited in this respect. I also don't have Windows 10 installed, so I can't do any DX12 tests.