How important is CPU performance for 3d vision?
In this age of alternative facts and fake news, we have to be very careful about what information we consume mate.

On the net, the results are different because most of the websites are using bad science and methodology.

For example, in the video you linked, you can clearly see that the GPU is always >95% (meaning the tests are GPU bound and the CPU isn't getting much workload at all).

This is effectively benchmarking the GPU, NOT the CPU.

In a CPU benchmark, GPU should never be anywhere near 90%; ideally remaining below 50%. This is why you test at 720p, or max 1080p with a Titan X Pascal. The tests in the video are so bad that you could probably do what they did with your i7 860, and get the same results as all the latest CPUs. Then you can declare that an i7 860 is as good as an overclocked 6900K.
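
To make the sanity check concrete, here is a minimal Python sketch of the idea - flagging a benchmark run as GPU-bound from a utilisation log. The CSV format, the "gpu_util" column name, and the file name are all hypothetical; the 90% threshold mirrors the figure above:

import csv

def is_gpu_bound(log_path, threshold=90.0):
    # Average the sampled GPU utilisation (%) over the whole run.
    # Assumes a hypothetical CSV log with a "gpu_util" column.
    with open(log_path, newline="") as f:
        samples = [float(row["gpu_util"]) for row in csv.DictReader(f)]
    avg = sum(samples) / len(samples)
    # At or above the threshold, the run is measuring the GPU, not the CPU.
    return avg >= threshold, avg

gpu_bound, avg = is_gpu_bound("benchmark_log.csv")
print("Average GPU utilisation: %.1f%% -> %s"
      % (avg, "GPU-bound (invalid as a CPU test)" if gpu_bound else "OK for CPU testing"))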

As also highlighted in the past, most of the other benchmarks around the web are also useless because they are done in GPU bound scenarios, where you won't see a difference even if the tested CPU was running at 100GHz with 100 cores. This is horrendous testing methodology, and an explanation of why can be seen in my previous post in quotes directly from gamersnexus, and some of my lamenting in previous posts.

Someone smart once said it's like racing a modern Ferrari against a '90s Civic on a road with a 30 mph speed limit, and then declaring, "Look, they are both equally fast!" Do you think this is a fair test of their real performance?

Any "review" which shows Ryzen and 6900K/7700K showing nearly the same gaming performance should be considered junk. You can usually spot these as they will have one or both of the following problems:

1. Tests are done at higher than 1080p resolutions
2. A weak graphics card such as anything lower than a Titan XP is used.

What we know is that Ryzen should be performing faster, as synthetics show it to be a great CPU - just not for gaming at present. Perhaps the issues will be fixed down the line with BIOS / microcode updates.

I understand your dilemma regarding upgrading re: Coffee Lake and Cannon Lake, etc.; I remember covering it in another post - let's see if I can find it:

RAGEdemon said:As far as I am aware mate, both the upcoming Coffee Lake and Cannon Lake will be mobile and tablet parts only, not high-performance desktop parts. They will be the same 10% performance increase as Skylake to Kaby Lake, i.e. 0% IPC gain but a few hundred MHz on the clock - but there will not be any desktop high-performance parts such as a 6600K/6700K or 7600K/7700K.

More info here:

https://www.pcgamesn.com/intel/intel-14nm-coffee-lake-release-date

For the foreseeable future, it looks like the best desktop performance chip for low thread count gaming will be an OC 7700k. Maybe with the advent of Ryzen, Intel will reconsider its future plans.

If you fellas want to wait, then that's great. Patience is a virtue! But it seems very unlikely that we will get any gaming-grade chips better than Kaby Lake for quite some time, and if one does come out, waiting might just have been a waste of time, as during that time you could have been playing most games at 60FPS locked.


Potentially, you might wait for Zen 2, where they should hopefully have all the memory latency / memory controller issues sorted and bugs fixed. It ought to be a far bigger jump than Intel's recent 10% per generation.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#61
Posted 03/03/2017 05:09 AM   
I reckon that even the 4/6-core Ryzens won't pass 4GHz. They should offer lower prices, but in-game performance will remain the same as, or even lower than, the current 8-core models.
Also, Intel won't reduce the price of the 7700K at all in this situation.

My second 1080 is already on its way, and I shall evaluate the bottleneck caused by the old CPU myself.
I see only two options atm: the 7700K, or waiting for another year.

Ryzen 1700X 3.9GHz | Asrock X370 Taichi | 16GB G.Skill
GTX 1080 Ti SLI | 850W EVGA P2 | Win7x64
Asus VG278HR | Panasonic TX-58EX750B 4K Active 3D

#62
Posted 03/03/2017 01:01 PM   
Playing the waiting game is fun :p. I wonder how far physics and destruction would have come in current games if CPUs had seen performance increases like GPUs.

6 or 8 core CPU at guaranteed 5GHz with at least slightly higher IPC than Kaby Lake and I'm sold.

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: MSI GeForce RTX 2080Ti Gaming X Trio
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

#63
Posted 03/03/2017 02:58 PM   
[quote="RAGEdemon"]In this age of alternative facts and fake news, we have to be very careful about what information we consume mate. On the net, the results are different because most of the websites are using bad science and methodology. For example, in the video you linked, you can clearly see that the GPU is always >95% (meaning the tests are GPU bound and the CPU isn't getting much workload at all). This is effectively benchmarking the GPU, NOT the CPU. In a CPU benchmark, GPU should never be anywhere near 90%; ideally remaining below 50%. This is why you test at 720p, or max 1080p with a Titan X Pascal. The tests in the video are so bad that you could probably do what they did with your i7 860, and get the same results as all the latest CPUs. Then you can declare that an i7 860 is as good as an overclocked 6900K. As also highlighted in the past, most of the other benchmarks around the web are also useless because they are done in GPU bound scenarios, where you won't see a difference even if the tested CPU was running at 100GHz with 100 cores. This is horrendous testing methodology, and an explanation of why can be seen in my previous post in quotes directly from gamersnexus, and some of my lamenting in previous posts. Someone smart once said it's like driving a modern Ferrari against a 90s Civic in a Speed Limit 30 road, and then declaring "look, they are both equally as fast!". Do you think this is a fair test of their real performance? Any "review" which shows Ryzen and 6900K/7700K showing nearly the same gaming performance should be considered junk. You can usually spot these as they will have one or both of the following problems: 1. Tests are done at higher than 1080p resolutions 2. A weak graphics card such as anything lower than a Titan XP is used. What we know is that Ryzen should be performing faster as synthetics show it to be a great CPU - just not for gaming at present. Perhaps issues might be fixed down the line with bios / microcode updates. I understand your dilemma regarding upgrading re: coffeelake and cannon lake etc; I remember covering it in another post - let's see find I can find it: [quote="RAGEdemon"]As far as I am aware mate, both the upcoming coffee lake and cannon lake will be mobile and tablet parts only, not high performance desktop parts. They will be the same 10% performance increase as skylake to kaby lake, i.e. 0% @ IPC but a few hundred MHz on the clock - but there will not be any desktop high performance parts such as a 66000K/6700K or 7600K/7700K. More info here: https://www.pcgamesn.com/intel/intel-14nm-coffee-lake-release-date For the foreseeable future, it looks like the best desktop performance chip for low thread count gaming will be an OC 7700k. Maybe with the advent of Ryzen, Intel will reconsider its future plans. If you fellas want to wait, that that's great. Patience is a virtue! But it seems very unlikely that we will get any gaming grade chips better than kaby lake for quite some time, and if one does come out, waiting might just have been a waste of time as during that time you could have been plying most games at 60FPS locked. [/quote] Potentially, you might wait for Zen 2, where they should hopefully have all the memory latency / memory controller issues sorted and bugs fixed. It ought to be a far bigger jump than Intel's recent 10% per generation. [/quote]Yeah, I get all of that. This is Joker though, and he is good buddies with the guy that you linked the article to from gamingnexus. In fact, they did a podcast together yesterday. 
I believe these benchmarks I posted were done in 1080 P. And for certain you can hit high gpu usage in some games even at that resolution. They mentioned in the podcast people doing crazy resolutions and they like to do 1080 P as the highest resolution for CPU benchmarks because of less gpu bottlenecking, and it's a real world gaming scenario because it's an extremely common resolution. Personally I think the video was useful because it wasn't a crazy high resolution like 4k or even 1440 p where you are for sure gpu bound. Also, we are seeing a real playthrough and not just graphs, so we can really pay attention to the frametimes throughout to check for hiccups.
RAGEdemon said:In this age of alternative facts and fake news, we have to be very careful about what information we consume mate.

On the net, the results are different because most of the websites are using bad science and methodology.

For example, in the video you linked, you can clearly see that the GPU is always >95% (meaning the tests are GPU bound and the CPU isn't getting much workload at all).

This is effectively benchmarking the GPU, NOT the CPU.

In a CPU benchmark, GPU should never be anywhere near 90%; ideally remaining below 50%. This is why you test at 720p, or max 1080p with a Titan X Pascal. The tests in the video are so bad that you could probably do what they did with your i7 860, and get the same results as all the latest CPUs. Then you can declare that an i7 860 is as good as an overclocked 6900K.

As also highlighted in the past, most of the other benchmarks around the web are also useless because they are done in GPU bound scenarios, where you won't see a difference even if the tested CPU was running at 100GHz with 100 cores. This is horrendous testing methodology, and an explanation of why can be seen in my previous post in quotes directly from gamersnexus, and some of my lamenting in previous posts.

Someone smart once said it's like racing a modern Ferrari against a '90s Civic on a road with a 30 mph speed limit, and then declaring, "Look, they are both equally fast!" Do you think this is a fair test of their real performance?

Any "review" which shows Ryzen and 6900K/7700K showing nearly the same gaming performance should be considered junk. You can usually spot these as they will have one or both of the following problems:

1. Tests are done at higher than 1080p resolutions
2. A weak graphics card such as anything lower than a Titan XP is used.

What we know is that Ryzen should be performing faster, as synthetics show it to be a great CPU - just not for gaming at present. Perhaps the issues will be fixed down the line with BIOS / microcode updates.

I understand your dilemma regarding upgrading re: Coffee Lake and Cannon Lake, etc.; I remember covering it in another post - let's see if I can find it:

RAGEdemon said:As far as I am aware mate, both the upcoming Coffee Lake and Cannon Lake will be mobile and tablet parts only, not high-performance desktop parts. They will be the same 10% performance increase as Skylake to Kaby Lake, i.e. 0% IPC gain but a few hundred MHz on the clock - but there will not be any desktop high-performance parts such as a 6600K/6700K or 7600K/7700K.

More info here:


https://www.pcgamesn.com/intel/intel-14nm-coffee-lake-release-date


For the foreseeable future, it looks like the best desktop performance chip for low thread count gaming will be an OC 7700k. Maybe with the advent of Ryzen, Intel will reconsider its future plans.

If you fellas want to wait, then that's great. Patience is a virtue! But it seems very unlikely that we will get any gaming-grade chips better than Kaby Lake for quite some time, and if one does come out, waiting might just have been a waste of time, as during that time you could have been playing most games at 60FPS locked.


Potentially, you might wait for Zen 2, where they should hopefully have all the memory latency / memory controller issues sorted and bugs fixed. It ought to be a far bigger jump than Intel's recent 10% per generation.


Yeah, I get all of that. This is Joker though, and he is good buddies with the GamersNexus guy whose article you linked. In fact, they did a podcast together yesterday. I believe the benchmarks I posted were done at 1080p, and you can certainly hit high GPU usage in some games even at that resolution. They mentioned in the podcast that people run crazy resolutions, and said they use 1080p as the highest resolution for CPU benchmarks because there is less GPU bottlenecking, and because it's a real-world gaming scenario - it's an extremely common resolution.

Personally, I think the video was useful because it wasn't a crazy high resolution like 4K, or even 1440p, where you are for sure GPU bound. Also, we are seeing a real playthrough and not just graphs, so we can pay attention to the frametimes throughout and check for hiccups.

#64
Posted 03/03/2017 04:51 PM   
[quote="masterotaku"]Playing the waiting game is fun :p. I wonder how far physics and destruction would be in current games if CPUs could have had performance increases like GPUs. 6 or 8 core CPU at guaranteed 5GHz with at least slightly higher IPC than Kaby Lake and I'm sold.[/quote] That's what everyone was hoping for :) Unfortunately, there is nothing like that around the corner on any road map for at least the next 3 years. I've just been reading about AMD's response about resolution GPU limiting benchmarks. They are now trying to sell the idea that 1080p resolution shouldn't be used for benchmarking because no-one really uses it and it's not who RyZen is aimed at; that higher resolutions should be used instead as they indicate some subjective "user experience". It's sad that they are trying to further the already muddy understanding of Mr. Average Joe, like game resolutions are magically tied to the CPU in some way. You can run a 10 year old game at 8K and get the exact same unrestricted results as modern 720p/1080p benchmarks showing how badly games perform on RyZen. Or you can make a game with a lot of assets and run that on 720p/1080p on 2x Titan XP SLi and get >90% GPU usage on both, and then do GPU limited benchmarks showing how even an I7-860 is 'as powerful' as a 4.4GHx 6900X / 5GHz 7700K even at 720p. Damage control through obfuscation of facts is not honourable, and I am personally losing respect each day this goes on. Some additional RyZen benchmarks from HardOCP at 640x480 - people who actually know what they are doing ;) http://www.hardocp.com/article/2017/03/02/amd_ryzen_1700x_cpu_review/4
masterotaku said:Playing the waiting game is fun :p. I wonder how far physics and destruction would have come in current games if CPUs had seen performance increases like GPUs.

6 or 8 core CPU at guaranteed 5GHz with at least slightly higher IPC than Kaby Lake and I'm sold.

That's what everyone was hoping for :)

Unfortunately, there is nothing like that around the corner on any road map for at least the next 3 years.

I've just been reading AMD's response to the GPU-limited resolution benchmarks. They are now trying to sell the idea that 1080p shouldn't be used for benchmarking because no one really uses it and it's not who RyZen is aimed at; that higher resolutions should be used instead, as they indicate some subjective "user experience".

It's sad that they are trying to further muddy the average Joe's already murky understanding, as if game resolutions were magically tied to the CPU in some way.

You can run a 10-year-old game at 8K and get the exact same unrestricted results as the modern 720p/1080p benchmarks showing how badly games perform on RyZen. Or you can make a game with a lot of assets, run it at 720p/1080p on 2x Titan XP SLI with >90% GPU usage on both cards, and then do GPU-limited benchmarks showing how even an i7-860 is 'as powerful' as a 4.4GHz 6900K / 5GHz 7700K, even at 720p.

Damage control through obfuscation of facts is not honourable, and I am personally losing respect each day this goes on.

Some additional RyZen benchmarks from HardOCP at 640x480 - people who actually know what they are doing ;)
http://www.hardocp.com/article/2017/03/02/amd_ryzen_1700x_cpu_review/4

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#65
Posted 03/03/2017 05:45 PM   
[quote="RAGEdemon"][quote="masterotaku"]Playing the waiting game is fun :p. I wonder how far physics and destruction would be in current games if CPUs could have had performance increases like GPUs. 6 or 8 core CPU at guaranteed 5GHz with at least slightly higher IPC than Kaby Lake and I'm sold.[/quote] That's what everyone was hoping for :) Unfortunately, there is nothing like that around the corner on any road map for at least the next 3 years. I've just been reading about AMD's response about resolution GPU limiting benchmarks. They are now trying to sell the idea that 1080p resolution shouldn't be used for benchmarking because no-one really uses it and it's not who RyZen is aimed at; that higher resolutions should be used instead as they indicate some subjective "user experience". It's sad that they are trying to further the already muddy understanding of Mr. Average Joe, like game resolutions are magically tied to the CPU in some way. You can run a 10 year old game at 8K and get the exact same unrestricted results as modern 720p/1080p benchmarks showing how badly games perform on RyZen. Or you can make a game with a lot of assets and run that on 720p/1080p on 2x Titan XP SLi and get >90% GPU usage on both, and then do GPU limited benchmarks showing how even an I7-860 is 'as powerful' as a 4.4GHx 6900X / 5GHz 7700K even at 720p. Damage control through obfuscation of facts is not honourable, and I am personally losing respect each day this goes on. Some additional RyZen benchmarks from HardOCP at 640x480 - people who actually know what they are doing ;) http://www.hardocp.com/article/2017/03/02/amd_ryzen_1700x_cpu_review/4[/quote]Yeah, not in agreement at all about 1080 p being dead. Tell that to the esports scene, or tons of other people that want to hit their high refresh-rate monitors cap. I myself am using a 1080 p 144 hz display. That being said, according to a poster on OCN, AMD asked people to show 4k results because they believed they had more consistent frame-times at that resolution. Now, I have no idea of this is true, but I wouldn't mind seeing a 4k raw data upload like Joker posted so I could watch the frame-times myself to see if this is indeed true. Ah man, you had to bring my poor 860 into this :(
RAGEdemon said:
masterotaku said:Playing the waiting game is fun :p. I wonder how far physics and destruction would have come in current games if CPUs had seen performance increases like GPUs.

6 or 8 core CPU at guaranteed 5GHz with at least slightly higher IPC than Kaby Lake and I'm sold.

That's what everyone was hoping for :)

Unfortunately, there is nothing like that around the corner on any road map for at least the next 3 years.

I've just been reading AMD's response to the GPU-limited resolution benchmarks. They are now trying to sell the idea that 1080p shouldn't be used for benchmarking because no one really uses it and it's not who RyZen is aimed at; that higher resolutions should be used instead, as they indicate some subjective "user experience".

It's sad that they are trying to further muddy the average Joe's already murky understanding, as if game resolutions were magically tied to the CPU in some way.

You can run a 10-year-old game at 8K and get the exact same unrestricted results as the modern 720p/1080p benchmarks showing how badly games perform on RyZen. Or you can make a game with a lot of assets, run it at 720p/1080p on 2x Titan XP SLI with >90% GPU usage on both cards, and then do GPU-limited benchmarks showing how even an i7-860 is 'as powerful' as a 4.4GHz 6900K / 5GHz 7700K, even at 720p.

Damage control through obfuscation of facts is not honourable, and I am personally losing respect each day this goes on.

Some additional RyZen benchmarks from HardOCP at 640x480 - people who actually know what they are doing ;)
http://www.hardocp.com/article/2017/03/02/amd_ryzen_1700x_cpu_review/4
Yeah, I don't agree at all about 1080p being dead. Tell that to the esports scene, or the tons of other people who want to hit their high-refresh-rate monitor's cap. I myself am using a 1080p 144Hz display. That being said, according to a poster on OCN, AMD asked people to show 4K results because they believed Ryzen had more consistent frametimes at that resolution. Now, I have no idea if this is true, but I wouldn't mind seeing a 4K raw-data upload like the one Joker posted, so I could watch the frametimes myself and see if it is indeed true.

Ah man, you had to bring my poor 860 into this :(

#66
Posted 03/03/2017 06:07 PM   
Hehe, sorry - it was the first thing that sprang to my mind when thinking of an older CPU, after you mentioned it :)

I wouldn't say better frametimes would be a valid excuse, as frametimes across the whole resolution band should be tested on all CPUs for fairness.

Computerbase.de has an excellent comparison of frametimes here, albeit at a set resolution:
https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/#diagramm-ashes-of-the-singularity-dx11-frametimes-ryzen-7-1800x-gegen-core-i7-7700k

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#67
Posted 03/03/2017 06:23 PM   
[quote="RAGEdemon"]Hehe, sorry - it was the first thing that sprang to my mind when thinking of an older CPU, after you mentioned it :) I wouldn't say lowering frame times would be a valid excuse, as frame times all across the resolution band should be tested on all CPUs for fairness. Computerbase.de has an excellent comparison of frame times here, albeit, at a set resolution: https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/#diagramm-ashes-of-the-singularity-dx11-frametimes-ryzen-7-1800x-gegen-core-i7-7700k[/quote]Interesting, in dx11 BF1 looks to be a better experience in multiplayer (perhaps this is where the extra threads are giving it a boost?) You don't see a frametime of higher than 13 with ryzen, but the 7700 k sees some frametime spikes near 25 ms. This is at 1080 P too. Mind you this is dx 11. dx12 is another story. 35 ms frametime spikes in dx12 ryzen multiplayer. Mankind divivded is a dog. Ryzen is getting 50 ms frametime spikes while even intel hit 40 ms with the 7700 k. Doom vulkan appears to have better frametimes for Ryzen Rise of the tomb raider saw some significant boosts to performance in frametimes for both companies in dx12 over 11. The witcher 3 is pretty even for both. Man, the frametimes in that game are a huge improvement for me. Total war is extremely interesting. On Average the 7700k hands ryzen it's ass, but it did have the largest spike on the graph. Watchdogd 2 is extremely close. All in all ryzen shows pretty well in these frametimes graphs. Ryzen is actually better than the 6900 k for gaming it seems going on those frametimes alone.
RAGEdemon said:Hehe, sorry - it was the first thing that sprang to my mind when thinking of an older CPU, after you mentioned it :)

I wouldn't say better frametimes would be a valid excuse, as frametimes across the whole resolution band should be tested on all CPUs for fairness.

Computerbase.de has an excellent comparison of frametimes here, albeit at a set resolution:
https://www.computerbase.de/2017-03/amd-ryzen-1800x-1700x-1700-test/4/#diagramm-ashes-of-the-singularity-dx11-frametimes-ryzen-7-1800x-gegen-core-i7-7700k
Interesting: in DX11, BF1 looks to be a better multiplayer experience on Ryzen (perhaps this is where the extra threads are giving it a boost?). You don't see a frametime higher than 13 ms with Ryzen, but the 7700K sees some frametime spikes near 25 ms. This is at 1080p, too. Mind you, this is DX11; DX12 is another story, with 35 ms frametime spikes in Ryzen DX12 multiplayer.

Mankind Divided is a dog: Ryzen is getting 50 ms frametime spikes, while even Intel's 7700K hit 40 ms. Doom under Vulkan appears to have better frametimes for Ryzen. Rise of the Tomb Raider saw some significant frametime improvements for both companies in DX12 over DX11. The Witcher 3 is pretty even for both. Man, the frametimes in that game are a huge improvement for me.

Total War is extremely interesting: on average the 7700K hands Ryzen its ass, but it did have the largest spike on the graph. Watch Dogs 2 is extremely close. All in all, Ryzen shows pretty well in these frametime graphs; going on those frametimes alone, it actually seems better than the 6900K for gaming.
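
For anyone who wants to check a log themselves rather than eyeball the graphs, a rough Python sketch of the kind of frametime analysis being discussed (assuming a hypothetical plain-text export with one frametime in milliseconds per line; the 25 ms spike threshold mirrors the BF1 numbers above):

def frametime_summary(path, spike_ms=25.0):
    # One frametime in milliseconds per line (hypothetical log format).
    with open(path) as f:
        times = [float(line) for line in f if line.strip()]
    ordered = sorted(times)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]    # rough 99th-percentile frametime
    spikes = sum(1 for t in times if t >= spike_ms)  # count of visible hitches
    avg_fps = 1000.0 / (sum(times) / len(times))
    return {"avg_fps": round(avg_fps, 1), "p99_ms": p99, "spikes": spikes}

print(frametime_summary("bf1_frametimes.txt"))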

#68
Posted 03/03/2017 06:43 PM   
[quote="RAGEdemon"]In this age of alternative facts and fake news, we have to be very careful about what information we consume mate. On the net, the results are different because most of the websites are using bad science and methodology. For example, in the video you linked, you can clearly see that the GPU is always >95% (meaning the tests are GPU bound and the CPU isn't getting much workload at all). This is effectively benchmarking the GPU, NOT the CPU. In a CPU benchmark, GPU should never be anywhere near 90%; ideally remaining below 50%. This is why you test at 720p, or max 1080p with a Titan X Pascal. The tests in the video are so bad that you could probably do what they did with your i7 860, and get the same results as all the latest CPUs. Then you can declare that an i7 860 is as good as an overclocked 6900K. As also highlighted in the past, most of the other benchmarks around the web are also useless because they are done in GPU bound scenarios, where you won't see a difference even if the tested CPU was running at 100GHz with 100 cores. This is horrendous testing methodology, and an explanation of why can be seen in my previous post in quotes directly from gamersnexus, and some of my lamenting in previous posts. Someone smart once said it's like driving a modern Ferrari against a 90s Civic in a Speed Limit 30 road, and then declaring "look, they are both equally as fast!". Do you think this is a fair test of their real performance? Any "review" which shows Ryzen and 6900K/7700K showing nearly the same gaming performance should be considered junk. You can usually spot these as they will have one or both of the following problems: 1. Tests are done at higher than 1080p resolutions 2. A weak graphics card such as anything lower than a Titan XP is used. What we know is that Ryzen should be performing faster as synthetics show it to be a great CPU - just not for gaming at present. Perhaps issues might be fixed down the line with bios / microcode updates. I understand your dilemma regarding upgrading re: coffeelake and cannon lake etc; I remember covering it in another post - let's see find I can find it: [/quote]Jokers got you! https://www.youtube.com/watch?v=nsDjx-tW_WQ
RAGEdemon said:In this age of alternative facts and fake news, we have to be very careful about what information we consume mate.

On the net, the results are different because most of the websites are using bad science and methodology.

For example, in the video you linked, you can clearly see that the GPU is always >95% (meaning the tests are GPU bound and the CPU isn't getting much workload at all).

This is effectively benchmarking the GPU, NOT the CPU.

In a CPU benchmark, GPU should never be anywhere near 90%; ideally remaining below 50%. This is why you test at 720p, or max 1080p with a Titan X Pascal. The tests in the video are so bad that you could probably do what they did with your i7 860, and get the same results as all the latest CPUs. Then you can declare that an i7 860 is as good as an overclocked 6900K.

As also highlighted in the past, most of the other benchmarks around the web are also useless because they are done in GPU bound scenarios, where you won't see a difference even if the tested CPU was running at 100GHz with 100 cores. This is horrendous testing methodology, and an explanation of why can be seen in my previous post in quotes directly from gamersnexus, and some of my lamenting in previous posts.

Someone smart once said it's like racing a modern Ferrari against a '90s Civic on a road with a 30 mph speed limit, and then declaring, "Look, they are both equally fast!" Do you think this is a fair test of their real performance?

Any "review" which shows Ryzen and 6900K/7700K showing nearly the same gaming performance should be considered junk. You can usually spot these as they will have one or both of the following problems:

1. Tests are done at higher than 1080p resolutions
2. A weak graphics card such as anything lower than a Titan XP is used.

What we know is that Ryzen should be performing faster, as synthetics show it to be a great CPU - just not for gaming at present. Perhaps the issues will be fixed down the line with BIOS / microcode updates.

I understand your dilemma regarding upgrading re: Coffee Lake and Cannon Lake, etc.; I remember covering it in another post - let's see if I can find it:

Joker's got you!
https://www.youtube.com/watch?v=nsDjx-tW_WQ

#69
Posted 03/03/2017 06:48 PM   
[Image: https://s3.postimg.org/629zlpw37/ryzen_v_i7p2.png]
Ouch!!!!!!!!!

[Image: https://s2.postimg.org/wzb56ydi1/usage.png]
That CPU usage... Edit: not apples to apples - Ryzen is using DX11 and the 7700K is using DX12. Joker admitted he screwed up here.

[Image: https://s21.postimg.org/wfyz35p1z/wow.png]
Only a 100+ FPS lead, no biggie! What's the deal with such high GPU usage at this resolution on Intel over AMD?

#70
Posted 03/03/2017 07:00 PM   
Whoops, meant to edit post.

#71
Posted 03/03/2017 07:07 PM   
Kudos to Joker. He has added tests to show proper "unbiased" performance, as he calls it.

It takes a big person to do something like that :)



GPU usage will increase with a more powerful CPU, giving a higher FPS. The CPU is a GPU multiplier of sorts (and vice versa, depending on which one is the bottleneck).

So, for example, if Ryzen produces 100 FPS with the GPU at 50%, and the 7700K is 20% faster, producing 120 FPS, then the GPU paired with the 7700K will also be working ~20% harder, i.e. at 60%.

We can test this using your bottom screenshot:
We have Ryzen producing 258 FPS @ 58% GPU load.
We have the 7700K producing 398 FPS @ 87% GPU load.

(398 - 258) / 258 * 100 ≈ 54% performance increase.

This means that the GPU should be working around 54% more on the 7700K system. Is it though?

(87-58)/58*100 = 50%

So, yes it is, and our hypothesis seems correct :)
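
For anyone who wants to verify the arithmetic, a quick Python sketch plugging in the screenshot's numbers (only the figures quoted above are used; nothing else is assumed):

# FPS and GPU load (%) taken from the screenshot discussed above.
ryzen_fps, ryzen_gpu = 258, 58   # Ryzen system
intel_fps, intel_gpu = 398, 87   # 7700K system

fps_gain = (intel_fps - ryzen_fps) / ryzen_fps * 100   # ~54%
gpu_gain = (intel_gpu - ryzen_gpu) / ryzen_gpu * 100   # 50%

print("FPS increase: %.0f%%, GPU load increase: %.0f%%" % (fps_gain, gpu_gain))
# The two figures track closely, consistent with the CPU scaling GPU usage
# when neither component is fully saturated.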

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#72
Posted 03/03/2017 08:14 PM   
I had a look at the GTA video and TBH I don't understand how this is possible.

100 FPS with 36% GPU usage, and 85.9 FPS with 67% - this is fishy.
[Image/Attachment: Strange.JPG - https://forums.geforce.com/cmd/default/download-comment-attachment/71900/]

Intel i7 8086K
Gigabyte GTX 1080Ti Aorus Extreme
DDR4 2x8gb 3200mhz Cl14
TV LG OLED65E6V
Avegant Glyph
Windows 10 64bits

#73
Posted 03/03/2017 09:46 PM   
[quote="joker18"]I had a look at the GTA Video and TBH i don't understand how is this possible. 100 fps with 36% gpu usage and 85,9 FPS with 67% -- this is fishy [img]https://forums.geforce.com/cmd/default/download-comment-attachment/71900/[/img][/quote]This is actually watchdogs 2. Perhaps Rage can comment on this phenomenon? We are getting the opposite effect from the sniper elite 4 run.
joker18 said:I had a look at the GTA video and TBH I don't understand how this is possible.

100 FPS with 36% GPU usage, and 85.9 FPS with 67% - this is fishy.
[Image: https://forums.geforce.com/cmd/default/download-comment-attachment/71900/]
This is actually Watch Dogs 2. Perhaps Rage can comment on this phenomenon? We are seeing the opposite effect from the Sniper Elite 4 run.

#74
Posted 03/03/2017 10:08 PM   
I would wager that he accidentally used different game settings, such as MSAA etc., on one of the systems.

If you look at the card memory usage, it's 5.7GB vs 1.7GB, which suggests a huge mess-up somewhere in the game settings, perhaps even resolution.

If he missed the DX11 vs DX12 mismatch, and had only 1 day to make this video, I don't blame him.

In the end, as both GPUs are well below 90% usage, it shouldn't have a significant impact on the CPU-based FPS results.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#75
Posted 03/03/2017 11:15 PM   