i5 8600K vs i7 8700K for 3D Vision?
[quote="zig11727"]When hyper threading is disabled your CPU usage and temperatures will almost double. I7-8XXX and I5-8XXX run very hot stock.[/quote] No, sorry, that is actually an illusion because of the way the CPU performance is displayed by Windows. As an example, let's take a 4 core i7-6700k. If you are using four threads fully, then on that chip, it will show CPU usage at 50%, because it has 4 other threads it could use that are idle. If you disable HyperThreading, then you have only 4 threads available, which are all used, which then shows CPU usage at 100%, because the other threads are gone. There is no difference in actual performance.
zig11727 said:When hyper threading is disabled your CPU usage and temperatures will almost double.

I7-8XXX and I5-8XXX run very hot stock.

No, sorry, that is actually an illusion because of the way the CPU performance is displayed by Windows.

As an example, let's take a 4 core i7-6700k. If you are using four threads fully, then on that chip, it will show CPU usage at 50%, because it has 4 other threads it could use that are idle. If you disable HyperThreading, then you have only 4 threads available, which are all used, which then shows CPU usage at 100%, because the other threads are gone. There is no difference in actual performance.
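
As a minimal sketch of that arithmetic (illustrative numbers only, just showing how the Task Manager percentage maps to busy threads):

[code]
# Estimate how many threads a game keeps busy from overall CPU usage.
def active_threads(cpu_usage_percent, logical_cores):
    return cpu_usage_percent / 100.0 * logical_cores

# i7-6700K (4C/8T) running 4 fully busy threads shows ~50% usage:
print(active_threads(50, 8))   # -> 4.0
# The same 4 threads with HyperThreading disabled (4C/4T) shows 100%:
print(active_threads(100, 4))  # -> 4.0, same actual work being done
[/code]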

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#16
Posted 07/04/2018 03:48 AM   
As bo3b says.

Also, temperatures will not double when HT is disabled; actually quite the opposite.

When HT is enabled, and the CPU is using all 8 threads, then there will be a slight to moderate temperature increase because the CPU is working ~30% harder.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#17
Posted 07/04/2018 10:46 AM   
@zig11727: Thanks for that video. Very interesting results.


RAGEdemon said:zig11727, that vid is comparing a heavily OC'd 5GHz i5 on all cores to a stock i7 at 3.7 base, sometimes turboing a single core to 4.7. It's not at all a fair comparison.

Check out my thread here showing the huge difference hyperthreading makes to gaming:
https://forums.geforce.com/default/topic/1061296/3d-vision/amd-ryzen-7-2700x-performance-on-3d-vision-/post/5830088/#5830088

As for 3D Vision, due to the CPU bottleneck bug, I don't know to what extent it will be affected.

That video is actually a lot better than I'd expect.

True, it's sort of an unfair comparison to use OC vs. stock, but that's not the interesting part here.


If you go through all 8 of those games, not a single one uses the 12 threads of the i7-8700K. And in fact, only one game effectively uses the 6 cores/threads of the i5-8600K. But it only uses 6 threads, as the 8700K is at 50% usage.

The test is fairly solid because he uses low-quality mode for all games, just at 1080p.

This idea that more threads are good for gaming is still a myth today. The games tested include modern titles, a bunch of them from your earlier list. They are not using all cores, even for 2D gaming, let alone 3D.

I stand by my assessment that more cores is not the right thing to buy today. An i5-8600K is a perfectly good choice for today's gaming, and likely for at least a couple of years.


The reason this is important, and why I keep harping on it, is that people have a mystical idea of multitasking, and don't realize that it's nearly impossible to make games scale across more cores. There are some outliers like Ashes, but in general, game logic does not lend itself well to more threads/cores.

Here's a particularly good thread where people discuss the idea: https://forums.anandtech.com/threads/on-a-serious-note-are-games-going-to-use-more-cores.2517695/

A fellow there brings up Amdahl's law for multiprocessing and describes it well. It's an extremely good point, and I think it suggests that anything past about 8 cores/threads is going to be limited.
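
As a quick illustration of Amdahl's law (the parallel fraction p = 0.7 here is an assumed number for the sake of the example, not a measurement from any game):

[code]
# Amdahl's law: speedup of a workload where a fraction p is parallel.
def amdahl_speedup(p, n_threads):
    return 1.0 / ((1.0 - p) + p / n_threads)

for n in (2, 4, 6, 8, 12, 16):
    print(f"{n:2d} threads: {amdahl_speedup(0.7, n):.2f}x")
# Even with 70% of the frame parallelizable, speedup is capped at
# 1 / (1 - 0.7) ~= 3.3x, so cores past ~8 add very little.
[/code]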

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#18
Posted 07/05/2018 12:11 PM   
That's up to you mate. The great thing about science/truth is that it does not change whether someone believes it or not.

In the face of a litany of benchmarks from reputable sites across various corners of the internet, if you would rather trust this video, which doesn't follow any scientific method, and some posts on an AnandTech forum thread, then that's up to you, mate.

We all suffer from https://en.wikipedia.org/wiki/Confirmation_bias one way or another. I am guilty of it too and try hard to recognise it when I see it, to varying degrees of success :)

I could paste off more benchmarks, forums, or articles showing how dev studios are starting to use and aim for 16 core CPUs, etc.

I could also prattle on about how it's not about multitasking, but rather thread dependency - a higher-core CPU can finish a thread earlier, allowing dependent threads to move forward, compared with a lower-core CPU at the same IPC x clock - or about how hyper-threading is only a ~30% boost rather than the ~100% boost of physical cores, but I won't ;-)

Your mind is made up and it doesn't affect me. I have linked and explained what I could, and hope that people I care about here, including you, will make up their own minds or even change their minds in time as the future unfolds.

Certainly I absolutely agree that the 8700K and 8600K are not bad choices in any sense. An 8700K which has been delidded, overclocked, cooled well, and paired with fast, low-latency memory is a fantastic choice for price, performance, and accessibility, and ought to last a very long time indeed.

On a personal note, I am on a different upgrade schedule - my 5.1GHz 7700K ought to last me some time, until the next-gen consoles come out (which dictate even PC hardware requirements) and new higher-core CPUs are more readily available from both Intel and DAAMIT, everyone's favourite underdog.

All the best :)

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#19
Posted 07/05/2018 01:11 PM   
Here is a particularly good video from that same tester, Testing Games: https://www.youtube.com/watch?v=rvY8VqqBt4Q

This is i3-8100 vs. i7-8700K. Again, the only thing I'm interested in is the CPU and GPU usage. The reason this is interesting is because it is the counter example, running fewer cores. If my theory is right, then it should also max the CPU which has only 4 cores/threads, and also hammer the frame rate.

Witcher 3: 50%. At least 6 threads. Maxed out GPU
Cars 2: 35%. 4 threads.
BattleField 1: 50%. 6 threads.
Far Cry 5: 40%. 5 threads.
Assassins Origins: 75%. 9 threads.
Wreckfest: 25%. 3 threads.
Kingdom Come: 40%. 5 threads. Maxed out GPU
Fallout 4: 35%. 4 threads.


RAGEdemon said:That's up to you mate. The great thing about science/truth is that it does not change whether someone believes it or not.

In the face of litany of benchmarks from reputable sites from various corners of the internet, if you would rather take this video which doesn't follow any scientific method and the anandtech forum, then that's up to you.

We all suffer from https://en.wikipedia.org/wiki/Confirmation_bias one way or another. I am guilty of it too and try hard to recognise it when I see it :)

I could paste off more benchmarks, forums, or articles showing how dev studios are starting to use and aim for 16 core CPUs, etc, but your mind is made up and it doesn't affect me. I have linked what I could, and hope that people will make up their own minds in time.

Certainly 8700k and 8600k are not bad choices in any sense. An 8700K which has been delidded and overclocked is a fantastic choice for price, performance, and accessibility, and ought to last a very long time indeed.

All the best :)

Actually, the greatest thing about science is that we always question our results and don't take experts' word for it. Reproducibility, experiments trying to disprove the tested theory: we are not trying to prove our theory, we are looking for holes in it. Ignoring or dismissing contrary results is not science.

This is not my confirmation bias; you are deliberately ignoring the terrific results you can glean from these videos. It's not a perfect test; I'd rather have him set graphics to Low and use 720p, but it's still close enough, because we can also watch GPU load. If GPU load hits 95% or above, we know it's moved over to GPU limiting. Anything below that, and it's a valid core test.
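
A minimal sketch of that filtering rule, using made-up readings (the 95% threshold is the one suggested above):

[code]
# Treat samples at or above 95% GPU load as GPU-limited; only the
# remaining samples say anything about how many threads a game uses.
GPU_LIMIT = 95

def cpu_bound(samples):
    return [(game, gpu) for game, gpu in samples if gpu < GPU_LIMIT]

readings = [("Witcher 3", 98), ("Cars 2", 70), ("Wreckfest", 60)]
print(cpu_bound(readings))  # Witcher 3 is dropped as GPU-limited
[/code]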


I just did 4 hours of internet research looking to answer that exact question, and I differ with your conclusion in that I think everything people are talking about is rubbish. The litany of results is groupthink, all saying the same thing, no benchmarks, no data. There are hundreds of forums of people saying "it must be", "it will be", and nearly zero experimental results.

I also looked at the Skylake-X tests, with absolutely massive core counts, to see if games might be skipping HT threads on purpose, and I see no difference in results. The 18-core monster is only using 6 cores for games. If you have a solid test case, I'd love to see it. (Ashes is a one-off.)


In that thread I referenced, one person spoke about having contact with Treyarch, and why they as a studio are specifically not interested in targeting multiple cores. Anecdotal, but so is all the other bullshit on the forums about what developers are "going to do in the future." You gave anecdotal evidence; this is counter anecdotal evidence.

That thread was worth reading because it brings up Amdahl's Law, which is very well known in computer science, and particularly important for game code.


The massive-core-counts idea is not confirmation bias, it is wishful thinking. It is beginning to happen now: Assassins Creed Origins is the first game I've seen use more than 8 threads. Usage has crept up from topping out at 4 to topping out at 6. I'm not saying it won't happen, but right this second there is almost nothing using more than 6 threads. And the future still remains very uncertain.

I don't understand why you are so dismissive of this data.


RAGEdemon said:Your mind is made up and it doesn't affect me. I have linked and explained what I could, and hope that people I care about here, including you, will make up their own minds or even change their minds in time as the future unfolds.

Actually no, my mind is not at all made up. I just need solid experimental evidence.

The tests I've run myself also do not show any particular advantage to more cores.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#20
Posted 07/05/2018 01:52 PM   
https://www.youtube.com/watch?v=QPeWt8NPd-8&t=325s

Project Cars: 35%. 4 threads.
GTA 5: 40%. 5 threads.
Far Cry 5: 40%. 5 threads.
Fallout 4: 35%. 4 threads.
AC Origins: 75%. 9 threads.
Witcher 3: 50%. 6 threads.
Arma 3: 25%. 3 threads.
Battlefield 1: 45% ? 5.4 threads? Could be 6 with bad scaling.

https://www.youtube.com/watch?v=TBWB6IoY_QA

Project Cars: 50%. 3 threads. Better test than above.
Fallout 4: 60%. 4 threads.
AC Origins: 100%. 6 threads. GPU not maxed.
Arma 3: 40%. ? 2.5 threads. Might be bad scaling.
GTA 5: 65%. 4 threads.
Witcher 3: 80%. 5 threads. GPU maxed. Could be 6 threads.
Far Cry 5: 75%. 4.5 threads? Might be bad scaling.
Battlefield 1: 85%. 5 threads? GPU maxed.

https://www.youtube.com/watch?v=AqBl9frFESI

Ryzen, 4 vs 6 vs 8 cores:

BattleField 1: 4 cores limits GPU, 6 cores moves to GPU limited.
Project Cars 2: GPU limited at 4 cores.
AC Origins: 4 cores limits GPU. 6 cores 100% use, GPU limited. 8 cores 75% GPU limited.
Kingdom Come: 4 cores GPU limited.
Far Cry 5: 4 cores GPU limited.
Witcher 3: 4 cores limits GPU, 6 cores GPU limited.
Fallout 4: 4 cores 80%, 6 cores 65%, 8 cores 55%.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#21
Posted 07/05/2018 02:41 PM   
I just purchased an 8700K from Newegg. It runs a lot cooler than my i7-7700K.
Attachments

8700K.JPG

Gigabyte Z370 Gaming 7 - 32GB RAM - i9-9900K - Gigabyte Aorus Extreme Gaming 2080 Ti (single) - Game Blaster Z - Windows 10 x64 build #17763.195 - Define R6 Blackout Case - Corsair H110i GTX - SanDisk 1TB (OS) - SanDisk 2TB SSD (Games) - Seagate EXOs 8 and 12 TB drives - Samsung UN46C7000 HD TV - Samsung UN55HU9000 UHD TV - Currently using ACER PASSIVE EDID override on 3D TVs - LG 55

#22
Posted 07/06/2018 12:13 AM   
I'm specifically trying to answer the question of whether more cores matter in today's games, and to answer this thread's question of i5-8600K vs. i7-8700K, both at 5GHz. I don't mind spending money if it gets used, but I do not like to buy stuff that sits idle.

zig11727 said:I been purchasing computer hardware for over twenty-years now and every time I try to save money it ends up costing me more.

Damn fine advice, and I have to seriously take that into account.


Pretty much everything I've seen on the web is giving misleading information. I have found 3 current games that use more than 6 threads.

https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/8

That link for Watch Dogs 2 suggests that the game can use the 20 threads of a 6950X, but their test is questionable, using 1080p @ Max settings, and they don't give enough data to see what might be happening. I tested Watch Dogs 2 and it certainly does not use all 8 threads of an i7-6700K (not even in 2D). Like with any benchmarking, it's always possible to simply make mistakes.

Watch Dogs 2 and Ashes are the only two games I can find that scale fairly well. My test of WD2 indicates it is using 75% of 8 threads = 6 threads in 2D. In 3D, the only test I'm ever going to actually care about, it's closer to 4 threads. Ashes is not interesting to me except as a tech demo.


That video Zig posted earlier actually answers this question for the 8 games he tested. I will follow up with other games RAGEdemon suggested that I have access to. I'll further test Battlefield 1, Watch Dogs 2, Star Wars Battlefront, Quantum Break, Mafia 3, Crysis 3, Witcher 3, and Mirror's Edge Catalyst (some suggested by Dugom).

Of the list from here: https://forums.geforce.com/default/topic/1061296/3d-vision/amd-ryzen-7-2700x-performance-on-3d-vision-/post/5830088/#5830088

It's abundantly clear from my testing that Fallout 4 and GTA 5 definitely do NOT scale with cores. The assumption that they do because the i7-5960X had higher frames is a red herring. That chip also has 25MB of L3 cache, and people just ignore that because they prefer to believe in cores (probably confirmation bias).


https://www.youtube.com/watch?v=SkTxXrqE5F0&t=3s


This is interesting, because it's specifically an OC 6 thread CPU vs. the stock clock 12 thread CPU. That's interesting because it can directly answer whether threads or GHz matters most.

His test here is 1080p but low settings, so it's important to watch for GPU being pegged. He also has some sort of 200 fps cap, which is important to avoid.

8700K stock (fps) vs. 8600K 5GHz (fps)
GTA 5: 121 vs. 141 Threads don't win
Project Cars: 169 vs. 184 Threads don't win
BattleField 1: 195 vs. 187 Threads win. (200 fps cap, but right toward end is a clear spot.)
Fallout 4: 103 vs. 134 Threads don't win
Hitman: 102 vs. 117 Threads don't win
Dirt 4: 293 vs. 333 Threads don't win (look for non maxed GPU moments.)
Arma 3: 98 vs. 111 Threads don't win
Far Cry Primal: 130 vs. 163 Threads don't win. (not GPU capped at front)


Of those 8 fairly recent games, only BattleField 1 scales above 6 threads. This also directly repudiates the idea that HyperThreading matters for these games in particular. The i5 is like the i7 with HT off.

Another set of tests from AnandTech Bench: https://www.anandtech.com/bench/product/1730?vs=2109

The 20-thread 6950X vs. the 12-thread 8700K shows that there is no scaling with regard to threads for Civilization 6, Shadow of Mordor, Rise of the Tomb Raider, Rocket League, and GTA 5.


So that's Watch Dogs 2, BattleField 1, Assassins Creed Origins, and Ashes- as known games that use more than 6 threads.

This does not necessarily prove that the i5-8600K is a clear choice; there are a handful of games that use more than 6 threads, and going forward there will likely be more. It does, however, clearly say that waiting for the 9700K is not important, because any greater use of threads will be piecemeal, and using more than 12 threads is unlikely to happen. For the moment, GHz is still clearly king.


I know that no one cares about this data, because their minds are already made up, but I'm posting this here since I already did the research, and it doesn't exist elsewhere.

Also keep in mind these tests are strictly 2D, and results are likely worse for 3D.


Edit: Actually AC:Origins does not use more than 6 threads. Here is a comparative CPU test run by PC Gamer, formerly from MaximumPC.

https://cdn.mos.cms.futurecdn.net/qqeKTqZnuXpvXawBz4nATR-650-80.png

That shows that an i5-8400 (6C/6T) budget part is roughly comparable to the i7-8700K (6C/12T). Nearly matched at the minimum frame rates (97%). Probably the clock speed difference.

It also shows the monster part i9-7900X (10C/20T) does not keep up with the i5-8400. If it were scaling beyond 6 threads, this part should have done better.

It seems fairly clear that in fact only Ashes is using more than 6 threads at present.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#23
Posted 07/06/2018 03:20 AM   
Thanks for the clarifications, I've been silently following the "battle", and it has been very interesting!

Just what I needed to make up my mind: I'll postpone my CPU upgrade, as it appears to be more or less a waste of money, at least for the time being.

THANKS guys :)

Win7 64bit Pro
CPU: 4790K 4.8 GHZ
GPU: Aurus 1080 TI 2.08 GHZ - 100% Watercooled !
Monitor: Asus PG278QR
And lots of ram and HD's ;)

#24
Posted 07/06/2018 08:42 PM   
What a fascinating thread. Good-natured debate, with blatant respect for knowledge sharing.

My personal life has pretty much gone down the toilet recently; so much so, I haven't had the will to game. But I still lurk and respect this community.

FWIW I love debate but hate antagonism, so threads like this provide me with pleasure.

Kind regards from a fellow 3D Vision enthusiast

Andy :-)

Lord, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
-------------------
Vitals: Windows 7 64bit, i5 2500 @ 4.4ghz, SLI GTX670, 8GB, Viewsonic VX2268WM

Handy Driver Discussion
Helix Mod - community fixes
Bo3b's Shaderhacker School - How to fix 3D in games
3dsolutionsgaming.com - videos, reviews and 3D fixes

#25
Posted 07/06/2018 11:04 PM   
"waiting for 9700K is not important, because any greater use of threads will be piecemeal, and using more than 12 threads is unlikely to happen. For the moment, GHz is still clearly king. " hmm.... this and with Intel's deliberate hold-back on CPU tech means we won't see a generation leap in the next 9th gen?
"waiting for 9700K is not important, because any greater use of threads will be piecemeal, and using more than 12 threads is unlikely to happen. For the moment, GHz is still clearly king. "

Hmm... does this, combined with Intel's deliberate hold-back on CPU tech, mean we won't see a generational leap in the upcoming 9th gen?

8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64

#26
Posted 07/08/2018 11:26 PM   
I would take bo3b's conclusion with a grain of salt, mate. I respect the bloke a great deal and he is an absolutely critical person in the community, much like DSS et al; I consider all of them my friends.


Unfortunately, his analysis is flawed for many reasons, not least of which is that the personal "tests" were not done on a 6-core or an 8-core system, and those that were come from second-hand videos which are flawed in the first place, with rampant GPU saturation and no experimental controls; the very controls which are a cornerstone of any scientific experiment, the "Scientific Method". Conclusions gleaned from the videos are therefore questionable at best and simply misleading at worst.

Example 1: The video shows a heavily overclocked i5 8600K at 5GHz against an i7 8700K at stock, mostly 3.7GHz. Apparently, according to bo3b, "it can directly answer whether threads or GHz matters most". How one can claim that an 8700K which is clocked lower to an arbitrary degree is a good comparison for measuring the difference more cores make is completely beyond me. This is bad because we do not know to what degree clock speed affects performance, and we certainly cannot glean from this anything useful about hyper-threading without both CPUs working at the exact same frequency, and certainly not reliable >6 core performance. This is a fallacy known as false equivalence: https://en.wikipedia.org/wiki/False_equivalence

Example 2: There is a fundamental misunderstanding in the analysis of hyper-threading, and a problem with equating it with more physical cores. For example, if a process allows 8 threads, then in reality (a sketch in code follows the list):
4C4T CPU = 100% performance
4C8T CPU = 130% performance at best.
8C8T CPU = 200% performance
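
A minimal sketch of that model (the ~30% HT uplift is the best-case figure quoted above, not a measured constant):

[code]
# Relative performance of a CPU on a workload that can use 8 threads,
# normalized so that a 4C4T part = 100%.
def relative_perf(cores, ht, threads_needed=8, ht_uplift=0.30):
    busy_cores = min(cores, threads_needed)   # real cores doing work
    extra = 0.0
    if ht and threads_needed > cores:
        # HT threads pick up leftover work at a fraction of a core each
        extra = ht_uplift * min(threads_needed - cores, cores)
    return (busy_cores + extra) / 4.0

print(relative_perf(4, False))  # 4C4T -> 1.0  (100%)
print(relative_perf(4, True))   # 4C8T -> 1.3  (130% at best)
print(relative_perf(8, False))  # 8C8T -> 2.0  (200%)
[/code]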

One cannot simply say that because 4C HT showed x performance increase, an 8-physical-core CPU will also show x performance increase... unfortunately, the majority of the analysis is plagued by such errors. We want to know if more physical cores make a difference, not necessarily fake HT cores, as they will be only 30% better at best. This is another false equivalence fallacy.

It's also quite bizarre to dismiss the importance of higher-core CPUs at the same IPC x clock (GHz) with blanket, authoritative statements such as "waiting for 9700K is not important", when in the same breath the claimant himself finds that many games not only make good use of 6 threads, but that at least 4 modern AAA games from his own testing, past and present, make good use of more than 6 threads; he himself, just a day prior, was arguing that one does not need more than even a 4-core i5 CPU, based on some forum thread on AnandTech. Perhaps he meant the prospective 8C16T 9900K, but the same problems apply to a statement against that CPU too. This is known as a formal fallacy: https://en.wikipedia.org/wiki/Formal_fallacy

Kudos to bo3b for trying though, he did quite well with what he had, even if the conclusions, whether ultimately right or wrong, were based on flawed data, experiment, interpretation, and theory. I am genuinely sad about that because the good man put a lot of effort into it :(

Aside from the analysis side of things, he also uses fallacies such as the straw man (https://en.wikipedia.org/wiki/Straw_man), e.g. your quote "For the moment, GHz is still clearly king", and bo3b stating the video can "directly answer whether threads or GHz matters most" - implying that I was saying that more cores give better performance than IPC x clock (GHz). I have been extremely clear on many different threads over the years that IPC x clock is the most important thing above all else, especially for 3D gaming and its bottleneck bug. My ideal scenario has always been to get as many cores as one can, if and only if IPC x clock can remain constant.

e.g. here is a post from just last year:
https://forums.geforce.com/default/topic/1027620/3d-vision/upgrade-cpu-which-one-is-the-best-cost-x-benefit-for-3d-vision-/post/5246983/#5246983

Here is one from just 2 days ago, speaking directly to bo3b highlighting the importance of IPC x Clock (GHz):
https://forums.geforce.com/default/topic/1061296/3d-vision/amd-ryzen-7-2700x-performance-on-3d-vision-/post/5830088/#5830088

Fallacious arguments in general are uncool and make a person want to disengage. When deliberate, and combined with confirmation bias (https://en.wikipedia.org/wiki/Confirmation_bias), citing completely unscientific experiments and flawed analyses to try and prove some point on the internet is what leads to toxic atmospheres, arguments, and the breakdown of friendships and communities. My philosophy is "that's cool man, whatever you say" and try to move on - I will not waste my time on such things; life is too short :)

To be fair to Intel, the consensus is that they don't in fact seem to be hiding tech - they just stopped investing much in R&D because they had no competition, otherwise patents for newly developed tech by Intel to impede copying by DAAMIT would have been filed and analysed by the tech community quite thoroughly by now.

When you say "generation leap", what do you mean? Over the past few gens, the "generation leap" has been ~2-3% performance increase at best. The general accepted truth is that we won't be seeing any kind of increased CPU performance per core any time soon. The only way to "increase" performance now is by increasing the core count, which then leaves the performance increase completely dependent on the many layers of programming to ensure minimal inter thread dependency while simultaneously trying to thread as heavily as possible :)


Overall, I think there is a fundamental difference in our respective philosophies - I don't believe either is wrong or right, I think it's more suited to different individuals.


I believe bo3b's correct viewpoint is as follows:

+ The majority of games on the market since the beginning of time do not support more than 4 cores, so why bother with a higher-core CPU?

+ The future absolutely cannot be predicted - If games start to use more cores and it shows, upgrade to the best CPU for gaming at that point.

If this viewpoint resonates with you, then fantastic, it is good advice.


I believe my viewpoint, on the other hand, is also correct, if more nuanced:

+ Most of us play the latest games - old games don't matter in comparison as we are upgrading for modern and especially future games, not old games.

+ We don't intend to upgrade the CPU again for the next ~5 years. In this sense, most AAA games indeed make good use of a high number of threads and will continue to increase in thread count as years go by.

+ Indeed, no one can exactly predict the future, but there are trends which cannot be ignored - the entire stock market in every world region is based on such trends and predictions, and there are good predictions which often come to pass. One such prediction we can count on: because individual core performance reached a limit as far back as ~10 years ago, multi-core is now the only way to go up. Market demand will ensure games use more cores.

+ Since the vast majority of games we play are primarily designed for consoles, which have had 8 cores each since 2013, it would be smart to have a high physical core count processor going into the future, especially since the future Xbox and PS5 will almost certainly have more than 8 cores each.

+ Granted, you might not see great scaling today, but the smart money says that multi-threading will become more prominent as the future unfolds. Worst case, even if every game goes back to 1 core, there would be no disadvantage in having more cores: IPC x clock (what bo3b refers to as "GHz") is the same on Coffee Lake and beyond, and is king, and the spare cache will boost performance regardless.

+ Coming from a 4 core CPU, an 8 core CPU would be a worthy upgrade, rather than an intermediate 6 core CPU.

+ Upgrading to a better CPU now means one won't have to upgrade again quite so soon. Someone has already said that when you try to save money now, you inevitably end up spending more money later; in this case a person would have to buy not only a new CPU, but likely also a new motherboard etc. Admittedly, this might be somewhat offset by the likely premium price a prospective 9900K 8C16T might command, but a new CPU + motherboard would be quite an investment too if an upgrade were needed sooner rather than later.

This POV resonates with me and others who live more on the electronic edge, and if it resonates with you, also, then great. To each their own!

Hope that helps... :)

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#27
Posted 07/09/2018 01:10 AM   
I would consider a generational leap in hardware to be something like the release of Sandy Bridge, the advent of 6-core mainstream CPUs (vs. the traditional 4 cores), the release of Maxwell and Pascal, etc.

Thanks for posting. It's good to see both sides of the story.

8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64

#28
Posted 07/09/2018 05:41 AM   
OK, I finished up this tedious testing of a bunch of different games.

System spec is a high-end Sager laptop with an i7-6700K (which has 4 cores, 8 threads), a desktop GTX 980 part, and 16GB of RAM. It's closer to a desktop than a laptop. Not overclocked.

Settings in all games were at graphical minimums: 720p, Low settings. The exception was Battlefield 1, which has a 200fps cap and thus needed Medium settings to avoid hitting the cap.

VSync is off, to avoid stalling the game. GPU usage can go to 100%, because in a free-run state we can drive past 60 or 120 fps until we max out the GPU. In this scenario the CPU should still be working as hard as it can, but it's possible that a more powerful GPU would open up more CPU usage. In every case the frame rate was greater than 120fps (the 3D Vision target), so it wouldn't make a practical difference.
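As a quick sanity check on that last point, here's the frame-budget arithmetic (the free-run rates in the loop are just example numbers, not measurements):

[code]
# Frame-budget arithmetic: if the free-run frame rate already exceeds
# the 120Hz 3D Vision target, the result can't change in practice.
TARGET_FPS = 120                      # 3D Vision refresh target
budget_ms = 1000 / TARGET_FPS         # ~8.3 ms allowed per frame

for free_run_fps in (130, 200, 300):  # example free-run rates
    actual_ms = 1000 / free_run_fps
    headroom = (budget_ms - actual_ms) / budget_ms * 100
    print(f"{free_run_fps} fps: {actual_ms:.1f} ms/frame, "
          f"{headroom:.0f}% headroom under the {budget_ms:.1f} ms budget")
[/code]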


720p Low          3D          2D
Mafia 3           60% of 8    60% of 8    max GPU from free-run framerate
Battlefield 1     50% of 8    75% of 8
Watch Dogs 2      60% of 8    70% of 8    max GPU from free run
SW Battlefront 2  40% of 8    35% of 8    80% when loading
Quantum Break     45% of 8    55% of 8
Crysis 3          33% of 8    40% of 8
Witcher 3         25% of 8    33% of 8    max GPU from free run
Far Cry 4         45% of 8    50% of 8
Dark Souls 3      25% of 8    25% of 8    capped at 60



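To make the table concrete: multiplying the total CPU % by the 8 logical threads gives a rough "effective threads busy" figure. A minimal sketch (Python, purely illustrative, using the 3D column above):

[code]
# Back-of-envelope conversion of "CPU % of 8 threads" into an
# effective thread count, using the 3D Vision column above.
LOGICAL_THREADS = 8  # i7-6700K: 4 cores, 8 threads

results_3d = {
    "Mafia 3": 60, "Battlefield 1": 50, "Watch Dogs 2": 60,
    "SW Battlefront 2": 40, "Quantum Break": 45, "Crysis 3": 33,
    "Witcher 3": 25, "Far Cry 4": 45, "Dark Souls 3": 25,
}

for game, pct in results_3d.items():
    effective = pct / 100 * LOGICAL_THREADS
    print(f"{game:17s} {pct:3d}%  ~{effective:.1f} threads busy")
[/code]

Nothing in that column gets much past ~5 threads busy, which is where the 6-core conclusion comes from.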
I'm not seeing anything that suggests more than 6 cores makes any difference, including Battlefield 1, which uses 6 cores. Assassin's Creed Origins probably uses more than 6, but I don't have access to that game to test. If someone who has the game can do a CPU % busy test, that would be helpful; one way to do it is sketched below.
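For anyone willing to try, here's a rough sketch of one way to log it (Python with the third-party psutil package; the sample duration and the "busy" threshold are arbitrary choices of mine, not anything official):

[code]
# Log per-core CPU load while the game runs, to see how many logical
# CPUs are actually busy. Assumes: pip install psutil
import psutil

print(psutil.cpu_count(logical=True), "logical CPUs")
for _ in range(60):  # sample once a second for ~a minute of gameplay
    per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)
    total = sum(per_cpu) / len(per_cpu)
    busy = sum(1 for c in per_cpu if c > 50)  # crude "busy" cutoff
    print(f"total {total:5.1f}%   cores over 50%: {busy}")
[/code]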

I definitely would not go with a Ryzen chip, as the extra cores don't make any difference. In the video above from TestingGames, he compares an i5-8600K vs a Ryzen 7 2700X, and the 16-thread chip still loses to the 6-thread chip in all 8 games, including AC: Origins and Battlefield 1.

For anyone who is looking for a budget build, I think your best choice would be an i5-8600K.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#29
Posted 07/09/2018 07:35 AM   
[quote="RAGEdemon"]Example 1: Video shows a heavily overclocked i5 8600K at 5GHz against an i7 8700k at stock, mostly 3.7GHz. Apparently, according to bo3b, "it can directly answer whether threads or GHz matters most". How one can claim that an 8700K which is clocked lower to an arbitrary degree is a good comparison for measuring the difference hyper-threading makes, is completely beyond me. This is bad because we do not know to what degree clock speed affects performance, and we certainly cannot gleam from this anything useful about hyper-threading without both CPUs working at the exact same frequency, and certainly not reliable >6 core performance. This is a fallacy known as https://en.wikipedia.org/wiki/False_equivalence [/quote] I'm well aware of all the debating tricks, including false equivalency and straw man arguments. You are presently doing a good job at obfuscating the problem with debating tricks, and not so much with actually looking at the data and engaging your brain. I'm not some idiot on the web, I do this for a living, and I know what matters. You [i]know[/i] this. I also know you are typically well informed and willing to do actual research, so it's baffling to me that you are resorting to internet forum gimmicks to try to make people question my results. To be clear, I'm not questioning your motives, I know you want correct answers as well. That video is a [i]terrific[/i] test case. Quoting Neil Degrasse-Tyson does not in fact make it bad data, or lack scientific value. Within the constraints of that test environment, there is absolutely solid data to be gleaned. You can dismiss it because it's not 'properly scientfic', but if you adopt that attitude, then nothing will ever be good enough for you, because you cannot possibly control all variables. Real scientists still do experiments, even if they cannot control all variables. In my judgment, there isn't anything wrong with his test case. He has set the video to 1080p, and disabled vsync. He uses Low settings. 1080p is not optimal, but it absolutely does not invalidate his tests- as long as GPU usage does not cap out. As I mention above, I call out each instance where it's GPU bound, and it's not that often. Some GPU bound tests are fine, as long is it is free-running for maximum fps, which will of course max out the GPU, as long as we have [i]enough[/i] threads. Moreover, it also includes the total CPU % usage, so we can see exactly how many threads are active. Have you actually watched this video? The reason this is super interesting is because it's literally threads vs. GHz. i7-8700K (12 threads) at 3.7GHz. i5-8600K (6 threads) at 5GHz. The architecture is identical, including RAM, motherboard, video card, SSD, OS. The games are run through identical scenarios, using identical settings. His experimental controls seem as good as they can be. It is literally Threads vs. GHz for these 8 games. Am I missing something? Maybe we can argue that those 6 HyperThreads aren't 'real' threads, but we both know that HyperThreads are pretty good. That i7 is literally twice as fast as an i5 for some scenarios, because it has twice the threads. And if a 12 thread CPU cannot best a higher clocked 6 thread CPU, then any threads above 6 do not matter. The question at hand is whether more threads matter. And this test case proves they do not. (For at least these 8 games.) If the 8600K were clocked the same as the 8700K, that would just make the results equal. The test itself aptly demonstrates that GHz trumps threads for these games. 
Please take this specific test case, and explain why you think it is invalid. Please, no rhetorical tricks, let's just talk about the data. I value your opinion, otherwise I wouldn't bother to write back. What [i]exactly[/i] am I missing about this test case that you think makes it invalid? [url]https://youtu.be/SkTxXrqE5F0?t=1[/url]
RAGEdemon said:Example 1: Video shows a heavily overclocked i5 8600K at 5GHz against an i7 8700k at stock, mostly 3.7GHz. Apparently, according to bo3b, "it can directly answer whether threads or GHz matters most". How one can claim that an 8700K which is clocked lower to an arbitrary degree is a good comparison for measuring the difference hyper-threading makes, is completely beyond me. This is bad because we do not know to what degree clock speed affects performance, and we certainly cannot glean from this anything useful about hyper-threading without both CPUs working at the exact same frequency, and certainly not reliable >6 core performance. This is a fallacy known as https://en.wikipedia.org/wiki/False_equivalence

I'm well aware of all the debating tricks, including false equivalency and straw man arguments.

You are presently doing a good job at obfuscating the problem with debating tricks, and not so much with actually looking at the data and engaging your brain. I'm not some idiot on the web, I do this for a living, and I know what matters. You know this. I also know you are typically well informed and willing to do actual research, so it's baffling to me that you are resorting to internet forum gimmicks to try to make people question my results.

To be clear, I'm not questioning your motives, I know you want correct answers as well.


That video is a terrific test case. Quoting Neil deGrasse Tyson does not in fact make it bad data or strip it of scientific value.

Within the constraints of that test environment, there is absolutely solid data to be gleaned. You can dismiss it because it's not 'properly scientific', but if you adopt that attitude, then nothing will ever be good enough for you, because you cannot possibly control all variables. Real scientists still do experiments, even if they cannot control all variables.

In my judgment, there isn't anything wrong with his test case. He has set the video to 1080p, disabled VSync, and used Low settings. 1080p is not optimal, but it absolutely does not invalidate his tests, as long as GPU usage does not cap out. As I mention above, I call out each instance where it's GPU bound, and it's not that often. Some GPU-bound tests are fine, as long as the game is free-running for maximum fps, which will of course max out the GPU, as long as we have enough threads. Moreover, the video also includes the total CPU % usage, so we can see exactly how many threads are active. Have you actually watched it?

The reason this is super interesting is because it's literally threads vs. GHz.
i7-8700K (12 threads) at 3.7GHz.
i5-8600K (6 threads) at 5GHz.

The architecture is identical, including RAM, motherboard, video card, SSD, OS. The games are run through identical scenarios, using identical settings. His experimental controls seem as good as they can be. It is literally Threads vs. GHz for these 8 games. Am I missing something?

Maybe we can argue that those 6 HyperThreads aren't 'real' threads, but we both know that HyperThreads are pretty good. That i7 is literally twice as fast as an i5 for some scenarios, because it has twice the threads. And if a 12 thread CPU cannot best a higher clocked 6 thread CPU, then any threads above 6 do not matter.

The question at hand is whether more threads matter. And this test case proves they do not. (For at least these 8 games.) If the 8600K were clocked the same as the 8700K, that would just make the results equal. The test itself aptly demonstrates that GHz trumps threads for these games.
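To put rough numbers on the threads-vs-GHz trade, here is a toy Amdahl's-law model. It is not a measurement: the parallel fraction and the HT scaling factor are assumptions I've picked for illustration.

[code]
# Toy Amdahl's-law model of the two chips. ALL numbers here are
# assumptions for illustration, not benchmark results.

def amdahl_speedup(p, n):
    # Classic Amdahl's law: speedup over one thread for a workload
    # whose parallel fraction is p, run on n effective threads.
    return 1.0 / ((1.0 - p) + p / n)

P = 0.5           # assume half the frame work parallelizes; games vary
HT_SCALING = 1.3  # assume HT ~= +30% throughput over 6 real cores

i5 = 5.0 * amdahl_speedup(P, 6)               # 8600K: 6C/6T @ 5.0 GHz
i7 = 3.7 * amdahl_speedup(P, 6 * HT_SCALING)  # 8700K: 6C/12T @ 3.7 GHz

print(f"i5-8600K relative perf: {i5:.2f}")    # ~8.6
print(f"i7-8700K relative perf: {i7:.2f}")    # ~6.6
# With these assumptions the clock ratio (5.0/3.7 ~ 1.35x) beats the
# effective thread gain from HT (~1.3x), so the i5 wins at any p.
[/code]

Under those assumptions, the extra threads would only pull ahead if HT behaved like full cores and the game's parallel fraction were near 1.0, which matches what the video shows.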


Please take this specific test case, and explain why you think it is invalid. Please, no rhetorical tricks, let's just talk about the data. I value your opinion, otherwise I wouldn't bother to write back.

What exactly am I missing about this test case that you think makes it invalid?


https://youtu.be/SkTxXrqE5F0?t=1

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#30
Posted 07/09/2018 08:08 AM   