3D Vision CPU Bottleneck: Gathering Information thread.
CPU benchmarks should be done with the most powerful single GPU at the lowest possible resolution (preferably 1280x720 to preserve a 16:9 aspect ratio) and highest settings, unless some of them are a clear GPU bottleneck that doesn't affect the CPU.

I consider this benchmark as invalid.

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

Posted 09/11/2018 05:41 PM   
Thanks for passing the info Dugom, it's appreciated. Again, as masterotaku says, the results are invalid. Quite alarmingly so - those walls of top bars having exactly the same FPS is a very hard GPU bottleneck :)

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 09/11/2018 05:52 PM   
[quote="masterotaku"]CPU benchmarks should be done with the most powerful single GPU at the lowest possible resolution (preferably 1280x720 to preserve a 16:9 aspect ratio) and highest settings, unless some of them are a clear GPU bottleneck that doesn't affect the CPU. I consider this benchmark as invalid.[/quote][quote="RAGEdemon"]Thanks for passing the info Dugom, it's appreciated. Again, as masterotaku says, the results are invalid. Quite alarmingly so - those walls of top bars having exactly the same FPS is a very hard GPU bottleneck :) [/quote]Of course, but who plays in 720 ? This graphs, show in the game which CPU doesnt bottlenecked your GPU (which you can select on the website). In 4K, almost all GPU are bottlenek, so CPUs aren't really relevant. GameGPU is using around 25 GPUs (Also for the "CPU limited" graph), for each game review, he use Maxed out settings in 1920x1080, and a scene where it is hard for the CPU. It is probably representative of most actual gaming settings used. This way you can know when you lose FPS because of your CPU, and how much, so you can decide to upgrade to get all your GPU potential. With SLi/NvLink and now RTX 2000 series, this CPU bottleneck will be a PITA... .
masterotaku said:CPU benchmarks should be done with the most powerful single GPU at the lowest possible resolution (preferably 1280x720 to preserve a 16:9 aspect ratio) and highest settings, unless some of them are a clear GPU bottleneck that doesn't affect the CPU.
I consider this benchmark as invalid.
RAGEdemon said:Thanks for passing the info Dugom, it's appreciated. Again, as masterotaku says, the results are invalid. Quite alarmingly so - those walls of top bars having exactly the same FPS is a very hard GPU bottleneck :)
Of course, but who plays at 720p? These graphs show, for each game, which CPUs don't bottleneck your GPU (which you can select on the website). In 4K almost all GPUs are the bottleneck, so CPUs aren't really relevant.

GameGPU uses around 25 GPUs (also for the "CPU limited" graph). For each game review they use maxed-out settings at 1920x1080 and a scene that is hard on the CPU.

This is probably representative of the settings most people actually play at.

This way you know when you are losing FPS because of your CPU, and by how much, so you can decide whether to upgrade to get your GPU's full potential.

With SLI/NVLink and now the RTX 2000 series, this CPU bottleneck will be a PITA...




Afraid it's not that simple mate.
Dugom said:but who plays in 720 ?
This question is unfortunately irrelevant, even though at first it might sound pertinent, and this is why...
Dugom said:With SLi/NvLink and now RTX 2000 series, this CPU bottleneck will be a PITA...
We don't have to wait for RTX 2000, or even an RTX 5000 five years from now, to find out how a CPU performs and where it bottlenecks games - all we have to do is reduce the resolution to simulate scenarios where the GPU is not bottlenecking the CPU. It has nothing to do with what resolution people play at, because even that is completely subjective - some games will perform well, some won't - one can't extrapolate across the board. Furthermore, any such limited benchmark would be useless for predicting future gaming performance.

For a proper CPU or memory benchmark, one needs to simulate an infinitely powerful GPU. Simply put, halving the game's resolution workload pretty much replicates a future GPU that is 2x as fast as current GPUs.
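
To put that in concrete terms, here is a toy frame-time model (Python; all the numbers are invented, and it assumes GPU cost scales roughly with pixel count, which is only approximately true):

# Toy model: each frame is gated by whichever of the CPU or GPU stage takes longer.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0                      # hypothetical CPU cost per frame (game logic + draw calls)
gpu_ms = 16.0                     # hypothetical GPU cost per frame at the test resolution
half_res_gpu_ms = gpu_ms * 0.5    # roughly half the pixel workload at a lower resolution
future_gpu_ms = gpu_ms / 2.0      # a hypothetical GPU that is 2x as fast

print(fps(cpu_ms, gpu_ms))           # 62.5  -> GPU-bound today, CPU ceiling hidden
print(fps(cpu_ms, half_res_gpu_ms))  # 125.0 -> the low-res test exposes the CPU ceiling
print(fps(cpu_ms, future_gpu_ms))    # 125.0 -> identical: the low-res test predicted it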

The tests by gamegpu are mostly invalid because of this. Neither the load on the GPU nor the load on the CPU is constant throughout a gaming benchmark - at random moments the GPU bottlenecks the CPU, and there is no way to tell from those graphs when or for how long - it could be 5%, 25%, or 95% of the benchmark. The only clue we have is when there is a wall of bars with identical figures - that is where the GPU was bottlenecking the CPU through 100% of the benchmark.
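
If one did have the per-frame CPU and GPU times rather than a single averaged bar, the GPU-bound fraction could actually be measured. A minimal sketch along those lines (Python, with a made-up frame log in the same toy model as above):

# 'frames' is invented per-frame data: (cpu_ms, gpu_ms) for each frame of a run.
frames = [(8.0, 16.0), (7.5, 15.0), (12.0, 9.0), (11.0, 8.5), (8.0, 16.5)]

gpu_bound = sum(1 for cpu_ms, gpu_ms in frames if gpu_ms > cpu_ms)
avg_fps = len(frames) / sum(max(c, g) / 1000.0 for c, g in frames)

print(f"GPU-bound for {100.0 * gpu_bound / len(frames):.0f}% of frames")  # 60%
print(f"Average FPS: {avg_fps:.1f}")  # ~70.9 - the only number the bar chart gives you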

Summary:

We should never be comparing whether:

1. CPU X paired with
2. GPU Y gives us
3. Z fps in
4. BlahBlah game.

This is because it can only ever tell us how that particular CPU performs paired with that particular GPU in that particular game - not very useful.

i. It's so bad because it can't even tell us when and for how long the GPU is bottlenecking the CPU or vice versa.

ii. Nor can it properly differentiate between a 5 GHz 4 core 7700K and a 50 GHz 40 core future CPU - they will both show the same or similar FPS, depending on how badly the GPU is being a bottleneck.


We should only ever consider how:

1. CPU X1
2. compares with CPU X2
3. in a scenario where the GPU is infinitely powerful (i.e. tested at very low resolution)
4. in an averaged trend of games progressively having better and better multithreaded support.

This is far more useful, because it tells us how the CPU will perform in ANY game (past, present, and future), paired with ANY GPU (past, present, and future).

Using our hypothetical 50 GHz CPU case from above as an example, it will actually tell us that the 50 GHz 40-core CPU is X times faster than a 7700K.


One small piece of information can be gleaned from the gamegpu tests, however, especially from their recent BFV benchmarks - it looks like 8-core CPUs are already having a real, tangible and noticeable impact even on current-gen games.

I came across the news that "Singaporean Retailer 'Bizgram' Newest Price-list Includes Intel 9th Generation Pricing, when Converted to USD and the 7% GST removed is:
$452 for the i9-9900k (8C/16T)
$352 for the i7-9700k (8C/8T)
$251 for the i5-9600k (6C/6T)"

These are presumed to have hardware-level Spectre/Meltdown etc. fixes, so they ought to perform pretty well.

I think though, personally speaking, I shall wait till next year for AMD's 7nm CPUs or another year for Intel's 10nm :)

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 09/11/2018 09:10 PM   
[quote="RAGEdemon"]This is far more useful, because it tells us how the CPU, will perform in ANY game (past, present, and future), paired with ANY GPU (past, present, and future).[/quote]I don't get that... If each game is different, result will be also different. I really don't get the purpose of your testing. GameGPU testing is clear, they show me when to change CPU in 1080, to get all my GPU potential: I just have to select my GPU, and read the line of my CPU, to see if I loose FPS. GameGPU specialy test bottleneck in hard scenes for CPU. In their reviews there are other graphs, that show other stuffs. Go see: https://gamegpu.com/
RAGEdemon said:This is far more useful, because it tells us how the CPU, will perform in ANY game (past, present, and future), paired with ANY GPU (past, present, and future).
I don't get that... If each game is different, the results will also be different. I really don't get the purpose of your testing.

GameGPU's testing is clear: it shows me when to change my CPU at 1080p to get my GPU's full potential.
I just have to select my GPU and read my CPU's line to see if I lose FPS.

GameGPU specifically tests the bottleneck in scenes that are hard on the CPU. Their reviews include other graphs that show other things. Go see: https://gamegpu.com/
It's useful to see the separate potential of a CPU and a GPU. It can tell you "I won't get X fps at Y resolution in this game for now, but I know that next year with a new GPU I would be able to get Z fps tops because of the CPU". You can also extrapolate the results to games that weren't benchmarked, if you already know how they perform.

The same goes for GPUs: they should be tested with the best and most overclocked CPU.
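
A minimal sketch of that extrapolation (Python, invented figures): the in-game result is roughly the lower of the separately measured CPU-limited and GPU-limited FPS.

# Rough extrapolation: real FPS is capped by whichever limit is lower.
def expected_fps(cpu_limited_fps, gpu_limited_fps):
    return min(cpu_limited_fps, gpu_limited_fps)

cpu_cap = 90             # this CPU, measured at a very low resolution
gpu_cap_today = 60       # the current GPU at the target resolution
gpu_cap_next_year = 120  # a hypothetical future GPU that is 2x as fast

print(expected_fps(cpu_cap, gpu_cap_today))      # 60 -> GPU-limited right now
print(expected_fps(cpu_cap, gpu_cap_next_year))  # 90 -> the new GPU tops out at the CPU's ceiling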

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

Posted 09/12/2018 01:17 PM   
Here: https://gamegpu.com/action-/-fps-/-tps/battlefield-v-open-beta-test-gpu-cpu



Tried with 2500K:

Minimum 48 FPS, average 79 FPS - same results with a 1080 Ti / 1080 / 1070 Ti / 1070 / 980 Ti

1060 6GB / 980 = 48/72

970 = 45/60

1060 3GB = 43/59


Is this not like you said?




Dugom, those tests are not with 3D as far as I can see, so they have zero relevance for us.
The problem is that when 3D is activated an additional bottleneck appears and the CPU is not fully utilized.

Intel i7 8086K
Gigabyte GTX 1080Ti Aorus Extreme
DDR4 2x8gb 3200mhz Cl14
TV LG OLED65E6V
Windows 10 64bits

Posted 09/13/2018 05:09 AM   
I think FFXV has this bug too. As well as other weird stuff.


1080p, in Hammerhead:

2D: 60 FPS
CPU USAGE 80-100%
GPU Usage 60%

3D: 30-50 FPS
CPU USAGE 50-54%
GPU USAGE: about 60% still with slight bumps higher (Probably when CPU Usage Drops)

This is with a 7700K @ 4.5 GHz
Titan X (Pascal), watercooled, at ~2000 MHz

I later tested with hyperthreading disabled.
CPU usage ran at 100% in both 2D and 3D... so my guess is 3D Vision limits it to 4 cores?

There's also some other odd stuff.

I go to the wilds just outside of Hammerhead, toggling 3D and 2D back and forth.
I turn off 3D: CPU utilization is at 60-70% in 2D, 60 FPS.
I turn on 3D (WITHOUT the fix, so it's only Nvidia's driver, as I wanted to take 3Dmigoto out of the equation): CPU utilization increases to 80-100%.
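
For anyone who wants to reproduce this kind of measurement, here is a rough logging sketch (Python; it assumes psutil is installed and nvidia-smi is on the PATH, and the 1-second interval is arbitrary). Run it in the background, toggle 2D/3D in-game, and compare the logged lines:

import subprocess
import psutil

def gpu_utilization():
    # Query overall GPU utilization (%) through nvidia-smi; returns the raw string.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

while True:  # Ctrl+C to stop
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # blocks for ~1 second
    print(f"CPU per core: {per_core}  GPU: {gpu_utilization()}%")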

I'm ishiki, forum screwed up my name.

7700k @4.7 GHZ, 16GBDDR4@3466MHZ, 2080 Ti

Posted 10/15/2018 05:30 AM   
Is this issue still a problem? Such a shame! It can't be that hard for them to fix, or is it? I would donate $ to get this fixed if that is an option.

Is there any way at this point to effectively request this fix? I believe I have already submitted a request twice, long ago, and I think it went into a black hole.

Posted 10/20/2018 06:28 AM   
I would also be interested to know if the new RTX series, with Turing, is affected similarly. I know someone posted good FPS with it, though.

Posted 10/20/2018 06:39 AM   
Update from Ray @ nVidia...

Hello,

Sorry for the delay in getting back to you, unfortunately I don't have any update. There has been no further update on this request since development mentioned debate between optimizing existing code path or possible look into a new code path. I did discuss with the development manager but I wasn't able to get much details other than that his team have been pulled to work on higher priorities and no resource or time to allocate to this issue. This doesn't appear to be a priority considering it's been almost a year since I submitted the issue to development team. I am sorry but that is all I know at this point.


Best regards,
Ray

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/04/2018 12:20 AM   
This is pretty lame. I get the cards I get because of 3D; if I didn't have 3D I'd be getting RX 580s every other generation like I was before.

I'm ishiki, forum screwed up my name.

7700k @4.7 GHZ, 16GBDDR4@3466MHZ, 2080 Ti

Posted 12/04/2018 01:56 AM   
As many of us predicted, it looks like we'll be left to our own devices. I personally regret all the wasted effort from all of us - gathering the information and presenting it, many long emails, lengthy discussions, etc. - all to no effect.

They have acknowledged the problem, and the fact that it affects all of us, but have decided it is more profitable for them to ignore us and not fix it after all, relegating it to bottom priority. Admittedly, it's a fine business decision, but one which leaves a sour taste in the mouths of the affected consumers, especially as nVidia is making record-high profits.

On a positive note, I loaded up Shadow of the Tomb Raider in DX12 SBS mode (nVidia 3DV doesn't work in DX12 mode) and saw all 8 of my cores, physical and virtual, hovering at ~90% during gameplay. I was getting double the FPS in DX12 compared to DX11.

I have never seen such amazingly high CPU usage in any game before.

Granted, this was with 3DV disabled, but perhaps when properly optimised DX12/Vulkan games work consistently with 3DV in the future, therein shall lie our salvation...

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/04/2018 01:40 PM   