[quote="bo3b"]Maybe we can [u]argue[/u] that those 6 HyperThreads aren't 'real' threads, but we both know that HyperThreads are pretty good. [u]That i7 is literally twice as fast as an i5 for some scenarios, because it has twice the threads.[/u] And [u]if a 12 thread CPU cannot best a higher clocked 6 thread CPU, then any threads above 6 do not matter.[/u] [/quote]
I don't want to argue mate. All I care about is facts. I really don't care for the whole macho "I'm right; you're wrong" thing - in fact, let's get that drivel out of the way: I'm wrong; I'm the bad guy; you're right.
With that said, let's talk about why this video isn't great and your above interpretation isn't great as a consequence.
1.
Let's start with the [url]https://en.wikipedia.org/wiki/False_dilemma[/url] fallacy. We are not in any way proposing to swap "GHz" for cores - it's not a choice between the two. My proposition is that we can now have both 5GHz clocks and more cores, all wrapped up nicely on Intel's Ring Bus. We are no longer limited to past CPU architectures where more cores meant vastly slower clocked cores and slower interconnects (Intel Mesh vs Ring Bus). Why not have both if there is no trade-off? In fact, the extra cache alone would be a benefit even if the extra cores are never used.
[img]https://cdn.wccftech.com/wp-content/uploads/2014/12/Intel-Level-3-Cache-Benchmarks-Aggregated.png[/img]
2. [quote="bo3b"]That i7 is literally twice as fast as an i5 for some scenarios, because it has twice the threads.[/quote]
No mate, that's not how hyper-threading works at all. In game related scenarios, games which make use of extra threads which the CPU can make use of, i.e. 8 thread game on a 4c8t CPU, the game will only run ~33% faster at absolute best, and often far lower than that.
Sometimes games actually have negative scaling with HT on. To lay things to rest, check out this good video demonstrating how in GTA5 and other games, [u]hyperthreading is negatively impacting game performance[/u], on an 8700k no less; 6c6t vs 6c12t, at the same clocks:
https://www.youtube.com/watch?v=8qkXKmpOWa0
The video is also a good presentation of HT on vs HT off for other games too, where GPU is not saturating to >90% or so.
Hyperthreading is in no way ever equivalent to physical cores where the game would indeed run near 100% as fast, everything else being equal and optimised, and the threads being present to take advantage of the extra cores.
Let me show you a Cinebench graphic highlighting how low virtual core performance is compared to real physical cores:
[img]http://core0.staticworld.net/images/article/2016/02/dx12_cpu_cinebench_r15_all_cores-100647717-orig.png[/img]
Notice how:
4 cores = ~500 points
8 cores = ~1000 points
4 cores with HT = only ~660 points, i.e. (660-500)/500=32% performance increase.
Carrying this over to the absolute best-case scenario of a multi-threaded game, i.e. Ashes of the Singularity:
[img]https://core0.staticworld.net/images/article/2016/02/dx12_cpu_ashes_of_the_singularity_beta_2_average_cpu_frame_rate_high_quality_19x10-100647718-orig.png[/img]
Notice how even in Ashes, an 8 core CPU is ~60% faster than a 4 core CPU, but a 4c8t CPU is only ~15% faster?
The simple fact is that one cannot substitute HT cores for real cores and project a performance difference with any kind of accuracy.
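Just to be explicit about the arithmetic I'm doing above, here it is as a quick Python sketch (the scores are approximate values read off the charts, nothing more):
[code]
# Rough scaling arithmetic from the charts above.
# The point values are approximate numbers read off the graphs.

def scaling(baseline, result):
    """Percentage gain of `result` over `baseline`."""
    return (result - baseline) / baseline * 100

# Cinebench R15, same architecture and clocks:
print(scaling(500, 1000))   # 4 cores -> 8 cores:      ~100% gain
print(scaling(500, 660))    # 4 cores -> 4 cores + HT: ~32% gain
[/code]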
3. [quote="bo3b"]if a 12 thread CPU cannot best a higher clocked 6 thread CPU, then any threads above 6 do not matter.[/quote]
I'm sorry but this is wrong for multiple reasons.
My high school physics teacher was an awesome lady who I have always held in high regard. She once said that a good way to imagine the results of an experiment is to take things to the extreme. Let me show you why this is wrong by using your own words:
[color="orange"]"If a 12 thread [u]1GHz[/u] CPU cannot best a [u]5GHz[/u] 6 thread CPU, then any threads above 6 do not matter.[/color]
Can you see why this statement makes no sense?
Yeah. As you might realise, we simply do not know what effect, or how large an effect, a lower speed CPU would have on performance. For this to be any kind of real experiment that one might glean results from, both CPUs must be running at the same frequency as a critical experimental control - in fact, the entire setup needs to be as identical as possible, with all bottlenecks eliminated. The only difference can be that one has 6 cores and the other has 12 cores, or, if we are testing what difference HT makes, that one CPU has HT on and the other has HT off.
A note on hyperthreading itself:
Usually the OS has to juggle threads onto a CPU. HT simply presents the OS with more logical cores, so the OS dumps its threads onto the CPU and the CPU then does the juggling, more efficiently. It does this by running one thread on a core while another thread is stalled, usually because it is waiting on memory retrieval etc. It's just a trick to make the CPU more efficient, and can never be equivalent to extra physical cores. This also means that the reported CPU % usage needs to be taken with a huge grain of salt.
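(If anyone wants to see this on their own box, here's a quick way to compare what the OS is being shown vs what physically exists - this assumes the third-party psutil package is installed:)
[code]
# Logical vs physical core count - on a 6c12t 8700K with HT on, the OS is
# shown 12 logical processors even though only 6 physical cores exist.
import os
import psutil  # third-party package, assumed installed: pip install psutil

print(os.cpu_count())                   # logical processors (e.g. 12 with HT on)
print(psutil.cpu_count(logical=False))  # physical cores     (e.g. 6)
[/code]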
Otherwise, "that's cool man, whatever you say" :)
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
hm.. Ashes is just an anomaly. It's a nice tech demo that enthusiasts keep bringing up to justify an outlier rather than a norm. I have yet to see any recent, newer mainstream games with that type of behavior
Are we looking at 2-3 more years before the market (or to be precise, the console market) begins to adopt something like Ashes?
8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64
Since we are on the topic of threading, you guys might want to take a look at this benchmark for the i9-7900X (20 threads, 4.3GHz) vs the i9-7900X (8 threads, 4.5GHz) vs the i7-7700K (8 threads, 4.5GHz).
https://www.gamersnexus.net/guides/2963-intel-12k-marketing-blunder-pcie-lane-scaling-benchmarks
[img]https://www.gamersnexus.net/images/media/2017/CPUs/7900x/pcie/x8-vs-x16-sli-sniper-mp.png[/img]
[img]https://www.gamersnexus.net/images/media/2017/CPUs/7900x/pcie/x8-vs-x16-sli-tww-mp.png[/img]
The difference is almost insignificant.
We just don't know mate, the future is yet to unfold. Certainly, next-next gen console games will all be DX12, and those consoles will very likely have more than 8 cores each.
IMO, the next "generational leap" that you asked about is an 8 core which is at least as good for gaming (once overclocked and cooled properly) as its counterparts - a 4-core 7700K and a 6 core 8700k, and hopefully cost effective (8700k is £52 per core being significantly cheaper than the 7700k at £72 per core - they are both pretty much the same price overall). This would be the projected 8c16t 9900K.
15 years ago, almost all games were single core. 10 years ago, they were mostly dual core. With the advent of 8 core XB1 and PS4, over the past 5 years, games have started to use 4+ cores. The smart money says that 5 years from now, the threads will double again. How long do you intend to keep your CPU?
It's unfortunate that Ryzen isn't as good for gaming; nevertheless, an apples to apples comparison between the 6 core 12 thread 2600X and the 8 core 16 thread 2700X at the same clock can be quite enlightening. For most games today, the difference is marginal, but it's there: [url]https://www.gamersnexus.net/hwreviews/3288-amd-r5-2600-2600x-review-stream-benchmarks-gaming-blender/page-3[/url]
Assassin's Creed: Origins
8C16T 2700x@4.2GHz = 111.8
6C12T 2600x@4.2GHz = 103.0
Watch Dogs 2:
8C16T 2700x@4.2GHz = 111.8
6C12T 2600x@4.2GHz = 105.0
Project Cars:
8C16T 2700x@4.2GHz = 113.8
6C12T 2600x@4.2GHz = 111.0
Ashes:
8C16T 2700x@4.2GHz = 50.7
6C12T 2600x@4.2GHz = 44.0
GTA5:
8C16T 2700x@4.2GHz = 122.0
6C12T 2600x@4.2GHz = 117.3
Whether one sees value here, or with a potential future, and then whether one wishes to wait for the 8 core big brother of the 8700K or not is a personal choice :)
if the 9th gen is another generational leap & it makes a significant impact on 3D Surround performance/emulators (RPCS3/CEMU/PCSX2), then I would sell my 8700K rig and upgrade.
but why the 9900K though? I thought the natural progression would be the 9700K
btw, how come none of you are running 79xx or 89xx this gen?
Those used the Mesh interconnect, which gives low core-to-core communication speed (for gaming) and low frequency. Despite the higher core count, the performance increase vs cost was questionable at best.
The 8700K is the first 6 core to have neither of these weaknesses, and neither will the expected 8 core 9900K.
Waiting for it might be worthwhile, though moving from an 8700K to a 9900K, the difference will be marginal - not likely worth the money to upgrade, because the older second hand system will have significantly depreciated in value by that point.
It's similar to buying last year's Mercedes Benz at full price when the new model is about to come out, and then trying to sell the old model to 'upgrade' to the new model. You'll end up paying a premium on both, but then also taking a huge hit when selling the old model too because no-one wants last years car... but it happens!
Whatever makes you feel comfortable, man, go for it!
If I had an 8700K, I'd wait for a 12/16 core CPU on the next fabrication node, perhaps even 7nm for an upgrade to be worthwhile, but that's me.
I hope this thread helps others too down the line...
so I have to jump to an even higher tier (i9) than before, if I am to upgrade to 9th gen?
DAMNNNN.. I thought I made a substantial leap from i5/4th gen to i7/8th gen with pretty significant price:performance.
So I guess the 9th gen counterpart of the i7-8700K (the i7-9700K) wouldn't be a good upgrade since it's still 6 core? Dude, that i9-9900K will cost you a pretty penny, and with Silicon Lottery it becomes exorbitant. So are you going to upgrade to the i9-9900K from your i7-7700K?
Btw, has the i9 line ever been good to OC? How come I never hear any enthusiast/overclocker attempt to OC that line? I thought that line was primarily for servers, like the old Xeon (which is poorly suited for gaming purposes).
[quote="RAGEdemon"]2. [quote="bo3b"]That i7 is literally twice as fast as an i5 for some scenarios, because it has twice the threads.[/quote]No mate, that's not how hyper-threading works at all. In game related scenarios, games which make use of extra threads which the CPU can make use of, i.e. 8 thread game on a 4c8t CPU, the game will only run ~33% faster at absolute best, and often far lower than that.
Sometimes games actually have negative scaling with HT on. To lay things to rest, check out this good video demonstrating how in GTA5 and other games, [u]hyperthreading is negatively impacting game performance[/u], on an 8700k no less; 6c6t vs 6c12t, at the same clocks:
[url]https://youtu.be/8qkXKmpOWa0?t=1[/url]
The video is also a good presentation of HT on vs HT off for other games too, where GPU is not saturating to >90% or so.
Hyperthreading is in no way ever equivalent to physical cores where the game would indeed run near 100% as fast, everything else being equal and optimised, and the threads being present to take advantage of the extra cores.
<snip>
[/quote]
Your understanding here is incorrect. The way that hyperthreading works is by utilizing underused resources because of stalls in the CPU pipeline. It's essentially a way to keep a core working, when it otherwise would be waiting on things like memory accesses. One memory access for example is 100s of instructions for a core, all lost time, unless you just happen to have something ready to go on the corresponding hyperthread.
It's not a real thread in the sense that it cannot replace a full core, for [i]most [/i]scenarios. It's a way to use otherwise wasted clock cycles.
If code is super well written, and performance optimized, it's not going to help much if at all. If a program has been performance optimized to keep the pipeline filled, then there is no 'slop' that can be recovered, and the hyperthread will add nothing. However... we know that most code in the world is terrible, and very rarely well optimized. Which is why in the real world, we actually get some pretty good value out of hyperthreading.
You are absolutely incorrect that it cannot scale to 100%. It can, and it does, [i]depending upon the scenario[/i].
In a hypothetical case, based simply on the design, if a core were getting 50% stalls for some reason, running at exactly half its rated speed - and a second process/thread happens to not conflict in any way with the first thread's resources - then you could get 100% scaling. Hypothetically.
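To put some toy numbers on that hypothetical - this is just a back-of-the-envelope model I'm making up to illustrate the idea, not a real CPU simulation:
[code]
# Toy model: a core that stalls for some fraction of its cycles, plus a
# sibling hyperthread that can only use cycles the first thread leaves idle.
# Purely illustrative - real SMT also contends for execution units and cache.

def ht_scaling(stall_fraction, fill_fraction):
    """Throughput of core+HT relative to the core running one thread alone.

    stall_fraction: share of cycles the primary thread spends waiting
                    (memory accesses, cache misses, mispredicts...)
    fill_fraction:  share of those idle cycles the second thread can use
    """
    useful = 1.0 - stall_fraction
    recovered = stall_fraction * fill_fraction
    return (useful + recovered) / useful

print(ht_scaling(0.50, 1.0))   # 50% stalls, perfectly filled: 2.0x, i.e. 100% scaling
print(ht_scaling(0.25, 0.5))   # more typical numbers: ~1.17x, i.e. ~17% scaling
[/code]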
Now, here is an actual example of this happening.
https://techbuyersguru.com/dual-core-quad-core-and-hyperthreading-benchmark-analysis?page=2
[img]https://forums.geforce.com/cmd/default/download-comment-attachment/75409/[/img]
In this graph, we are looking at Far Cry 3. The game parameters are not ideal, but they are the same between each test. And same CPU frequency at 3.3GHz.
The 3220 chip is a 2C/4T chip. 2 real cores, 4 threads.
The 3770K is a 4C/8T chip. 4 real cores, 8 threads.
The 3220 is a cheaper and weaker chip, and yet it is providing almost exactly the same performance as the better chip with HT turned off. So, 2C/4T is working as well as 4C/4T.
If there is one example, clearly there will be more.
Mostly you are right, HT does not scale super well in general, and especially not in games. Although I'm trying to point out that it is [i]highly [/i]dependent on the scenario. Averages are OK, but tell you nothing about specific scenarios.
And it is fairly clear that the 4 thread point is the sweet spot for best scaling. As you add more and more threads and HyperThreads, the value of hyperthreading is a lot less.
Edit: RageDemon talks about this example in a few posts, and it seems fairly clear that this is a bad example. The author of the original post shows a GPU graph that is not running at 100%, but for a different game. He does not show one for this test case, so it's entirely possible it is GPU bound for this example.
Here is a better example of high HT scaling. In the comparison of i3 (2C/4T) to i5 (4C/4T) for this video, we see the 2C/4T i3 at roughly 85% of a 4C/4T i5.
https://www.youtube.com/watch?v=o8l-kEs3XH8
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 Latest 3Dmigoto Release Bo3b's School for ShaderHackers
As I said...
[quote="RAGEdemon"]On a personal note, I am on a different upgrade schedule - my 5.1GHz 7700K ought to last me some time, towards when the next gen consoles come out which dictate even PC hardware requirements, and new higher core CPUs are more readily available from both Intel and DAAMIT, everyone's favourite underdog.
...
If I had an 8700K, I'd wait for a 12/16 core CPU on the next fabrication node, perhaps even 7nm for an upgrade to be worthwhile, but that's me.
[/quote]
Even with my 7700k, I'll likely wait till a 12 core on a 7nm process.
Listen; don't worry about Intel "generations", they are only marketing speak made up by Intel to try and sell more CPUs - they don't actually mean anything. A CoffeeLake is a KabyLake is a SkyLake. There is no real difference. The platforms too are very similar - z370 is a z270 is a z170 with minor differences.
In fact, you would struggle to see any performance difference between a SkyLake @ 5GHz vs a CoffeeLake @ 5GHz.
Also don't worry about the i9/i7 moniker. In the past, a 6 core would have been a server grade CPU, which is a poor choice for gaming unless heavily overclocked (I actually did have a 6C 12T Xeon x5660 @ 4.4GHz previous to my 7700K). The 6 core 8700K was the first chip to break the status quo - good for gaming, actually cheaper per core, and about the same price overall as a 7700K. The 9900K is expected to follow the same trend of performance and price - not the old style i9's fancy pricing and lack of gaming performance.
@ bo3b, you will struggle to find any game which shows even near double the performance on a modern 4 core + HT vs 4 core without HT. You speak of Ashes as an outlier and refuse to accept it as a potential future, but then post results from a game from 2012 running on a weak 2 core chip, using words such as "hypothetical scenarios" to try and illustrate how HT can have near 100% scaling in modern times. This is known as the [url]https://en.wikipedia.org/wiki/Faulty_generalization[/url] fallacy.
Hypothetically, J-Enermax could become the president of the world tomorrow, but in reality it won't happen. (I actually hope it does, that would be great to see!)
Btw, your example graph is 76% scaling, not 100%, but I see you have cherry picked the best graph to support your claim. Please allow me to post the rest of the graphs from the very article you linked, even on this ancient 2 core processor with 2012 games - please pay attention to the y-axis, it does not start at 0 which I think is throwing you off:
[img]https://techbuyersguru.com/sites/default/files/pictures/TheGamersBench/DualQuadCore/DeusEx%20CPU.PNG[/img]
2 core CPU:
HT OFF = 80 fps
HT ON = 92 fps
Scaling = (92-80)/80= 15%
4 core CPU:
2% HT scaling.
[img]https://techbuyersguru.com/sites/default/files/pictures/TheGamersBench/DualQuadCore/Hitman%20CPU.PNG[/img]
2 core CPU:
HT OFF = 52 fps
HT ON = 65 fps
Scaling = (65-52)/52= 25%
4 core CPU:
-2% HT scaling.
[img]https://techbuyersguru.com/sites/default/files/pictures/TheGamersBench/DualQuadCore/TombRaider%20CPU.PNG[/img]
2 core CPU:
HT OFF = 54 fps
HT ON = 43 fps
Scaling = (43-54)/54 = -20%, i.e. negative scaling.
4 core CPU:
0% HT scaling.
As you can see, unfortunately, there is no mythical 100% scaling or anywhere near that in modern 4 core+ CPUs and games, as the graphs above from your own article show. I wish this was not true as I'd love to think my 4 HT cores were near the performance of real cores, but alas we all have to live with the disappointment that they might give 25% on a good day, usually give no significant gain, and sometimes even give a negative gain altogether :)
@AndySonOfBob: Sorry to hear about your recent troubles, and no gaming definitely is grim. Best of luck. Since you guys are finding this conversation interesting, and it's not just me and RageDemon, I'll keep posting.
[quote="J-Enermax"]hm.. Ashes is just an anomaly. It's a nice tech demo that enthusiast kept bringing up to justify outliner rather than a norm. I have yet to see any recent, newer mainstream games with that type of behavior
Are we looking at 2-3 years more years before the market (or to be precise, the console market) begins to adopt something like Ashes?[/quote]As RageDemon notes, there is really no way to tell. This is more of a Rorschach test, and everyone will have different opinions on where it will go.
My take is the opposite of RageDemon's: I don't think we are going to have others follow in Ashes' footsteps, because they really do not need to, and it's damn hard to make software like they have done.
There is also the case that scaling gets worse and worse, the more cores/threads you have, with diminishing returns. If there is required communication between threads, you will follow Amdahl's Law. If you have something truly parallel, it can work, but in general game code is not that way. Graphics are, because one rectangle of the screen is nearly independent of any other, and with tiled lighting, this is even more true. So parallelism works great with pushing pixels.
Keeping track of player movements and maps, and triggering actions, and game play- not so much. For open world, or something like Ashes where things are mostly independent, it can help. For something like a first person shooter, it's not as likely to work, and much harder.
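To put rough numbers on the diminishing returns - a simple Amdahl's Law sketch, where the parallel fractions are made up purely for illustration:
[code]
# Amdahl's Law: speedup from n threads when only a fraction p of the work
# can run in parallel. The values of p below are invented for illustration.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.8, 0.95):
    print(p, amdahl_speedup(p, 6), amdahl_speedup(p, 12))
# p=0.50:  6 threads -> 1.71x, 12 threads -> 1.85x  (almost nothing gained past 6)
# p=0.80:  6 threads -> 3.00x, 12 threads -> 3.75x
# p=0.95:  6 threads -> 4.80x, 12 threads -> 7.74x  (only heavily parallel code keeps scaling)
[/code]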
The two big game engines of Unreal and Unity do not currently support this level of concurrency, and it's not at all likely they will, because of the massive code base. And the overwhelming complexity that multi-threading brings. Bugs in multi-threaded code are universally acknowledged as the hardest to solve.
In 3D Vision, I see even less thread usage, not keeping up with 2D. Driver maybe, 3 core bug, not enough performance, hard to say, but it's nearly always worse. Same is true for VR usage. The Oculus SDK is better than OpenVR, and neither support multi-threading in any sensible way. I don't care about 2D, so more threads is even less useful to me.
Lastly, on consoles, do they need massive scale? Maybe it can open up some new game types, new multi-player venues or bigger scale, but in general, their target is quite a lot lower, even shooting for 30 fps as their goal.
So for all those reasons, my view is that we are not likely to get great thread use in the next 5 years. Moreover, from my buying perspective, I'm not going to use this computer in 5 years anyway, so it's mostly irrelevant. If I'm right I use this currently best machine for at least 3 years, because 9xxx is not going to turn the dial. If I'm wrong, and somehow the 8 full cores of 9xxx is vastly superior, I'll sell this rig and buy anew. It would be a loss, except that I get to play maxed out for an easy 6 months anyway, so loss is relative.
Still, for each person it will depend upon your prediction.
Regarding the video showing the drop in performance while using HyperThreading, an i7-8700K with it On, then Off:
[url]https://youtu.be/8qkXKmpOWa0[/url]
I'm actually not sure what to think about this. His test seems pretty solid, and it's a good back to back comparison of On vs. Off in those 9 games. For Hitman, Fallout 4, GTA 5, it suggests that HT on is actually a negative. HT off is faster.
There are no scenarios here where HT On was providing better performance. That's probably because none of these games use more than 6 threads, and so 6 real cores is perfectly fine. Further evidence that the i5-8600K is a perfectly great solution for today.
Here is another test of Hitman specifically, with a number of different CPUs.
Their suggestion is that looking at minimum frame rates, that higher thread count CPUs are showing benefit.
https://arstechnica.com/gadgets/2017/07/intel-core-i9-fastest-chip-but-too-darn-expensive/3/
[img]https://cdn.arstechnica.net/wp-content/uploads/sites/3/2017/07/games.019-1440x1080.png[/img]
The graph is mislabeled with i7-7600K, which should be i5-7600K: a 4C/4T chip, versus the 4C/8T i7-7700K. It's notable that the HT enabled chip is quite a bit better here, which is a bit the opposite of the video. However, if Hitman is really using 6 threads, that would be consistent. It's clear that Hitman does not use more than 6 however, as the i9-7900X with 10C/20T is not better on the average, let alone the more important minimums. And that's at least 10 real cores, compared to 4 real, 4 virtual.
At least in this case, the 4 real cores plus 4 virtual cores are acting at least as good as 6 real cores. If a virtual HT core is worth at least 50% of a real core, then an i7-7700K acts like a 6 core chip.
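Or as back-of-the-envelope arithmetic, with 0.5 per hyperthread as an assumed weighting rather than a measured number:
[code]
# Back-of-the-envelope "effective cores" estimate. The 0.5 weighting for a
# hyperthread is an assumption for illustration, not a measured value.
physical, virtual, ht_worth = 4, 4, 0.5
effective_cores = physical + virtual * ht_worth
print(effective_cores)   # 6.0 - an i7-7700K behaving roughly like a 6 core chip
[/code]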
In the Rise of the Tomb Raider case, in the video, it shows as mostly no difference, with a slight nod to the HT Off case, because parts of the benchmark are definitely running faster there. This case I don't like though, because the GPU is at 99% the whole time, but it might be OK if vsync is off and it's free running as fast as it can. That would be consistent with a 4 thread game.
In the Ars Technica case:
[img]https://cdn.arstechnica.net/wp-content/uploads/sites/3/2017/07/games.007-1440x1080.png[/img]
We are seeing that HT On for i7-7700K is a better choice than Off for i5-7600K. For the minimums, where it really helps, not the average where there is no difference.
The video doesn't speak to minimums, which is the biggest weakness, as that is always more interesting than average. Seeing the graph live can give you a rough idea though, and it's better than a single number for an entire run.
Here again, the monster chip 7900X with 10C/20T is not as good as the i7-7700K.
My conclusion here, which if you throw in a $1.25 will buy you a cup of coffee, is that HyperThreaded CPUs are more valuable than they seem, sort of filling in gaps that otherwise cost performance.
I plan to buy an i7-8700K and OC it to at least 4.9GHz. The i5-8600K is still a terrific part, and if money is tighter, I would not hesitate to go that route.
For 3D in particular, I would not go even cheaper however, because we need GHz, and the K part is better than something cheaper like the i5-8400. Because of our need for GHz in 3D, and the lack of thread scaling above 6 threads, I would not go with an AMD part.
And I have to agree with RageDemon that you don't want a 4 thread part anymore. Which I did not believe before digging into this, and this discussion. It's not strictly necessary for Unity and Unreal games, but anything on the Frostbite engine can use 6 threads. The i7-7700K is still a valid choice though, because the HT can legitimately fill in the gaps for 6 thread games.
[quote="RAGEdemon"]@ bo3b, you will struggle to find any game which shows even near double the performance on modern 4 core+HT vs 4 core no HT. You speak of Ashes as an outlier and refuse to accept it as a potential future, but then post results from a game from 2012 running on a weak 2 core chip, using words such as "hypothetical scenareos" to try and illustrate how HT can have near 100% scaling in modern times. This is known as the [url]https://en.wikipedia.org/wiki/Faulty_generalization[/url] fallacy.
Btw, your example graph is 76% scaling, not 100%, but I see you have cherry picked the best graph to support your claim. Please allow me to post the rest of the graphs from the very article you linked, even on this ancient 2 core processor with 2012 games - please pay attention to the y-axis, it does not start at 0 which I think is throwing you off: [/quote]
No, you are confused once again. Or deliberately trolling. And bringing up an irrelevant fallacy.
Where do you get 76% from? The two middle pieces of my original best case scenario are the interesting spot. That's 3220@3.3 w/ HT, and 3770K @3.3 w/o HT. The left element of 3220@3.3 w/o HT is not interesting at all, because that's a 2 thread part for a 4 thread game, so of course it will be bad.
In the center two, I'm seeing
35 minimum for 3220
36 minimum for 3770K
That's (36-35)/35=3% less. Not 100%, but surely 97% is good enough?
You said:
[quote="RAGEdemon"]No mate, that's not how hyper-threading works at all. In game related scenarios, games which make use of extra threads which the CPU can make use of, i.e. 8 thread game on a 4c8t CPU, the game will only run ~33% faster [i]at absolute best[/i], and often far lower than that.
...
Hyperthreading is in no way [i]ever[/i] equivalent to physical cores where the game would indeed run near 100% as fast, everything else being equal and optimised, and the threads being present to take advantage of the extra cores.[/quote]Emphasis mine.
I gave you an exact example where your statement is incorrect. A 2C/4T part is the equal of a 4C/4T part in this exact [i]gaming [/i]scenario. That means a HyperThread can in fact be as good as a real core, and this is a real world example.
This is what we in engineering call an [i]existence proof[/i]. If there is one, there are others.
[i]I never said it was common[/i], I didn't even say it was a very good example. It's an example that disproves your overly broad statement. Using the best example is what you do for an existence proof.
Ashes is good for an [i]existence proof[/i] that it is possible for a game to be massively multi-threaded and use all cores. Extending that into the unknowable future as the future of all games is [i]your[/i] fallacy.
Again with the rhetorical forum gimmicks, you did not use to be a troll, and it's really starting to piss me off.
I never said that it was a generalization, I said that it [i]can happen[/i]. You said that it [i]could not ever happen.[/i] Bringing up the generalization fallacy, when I never said that, is particularly lame.
Relax mate, I am not intentionally being a troll, and apologise if I seem like it. I was genuinely going to send you your $1.25 for tea from here in England as a gesture of friendship, but cannot find your paypal after searching :)
I should have chosen my words better - I think it's foolish for anyone to say something can "never happen"; I did. For that I am sorry - there is always the possibility of some scenario where something can happen.
A better and more pedantic phrasing would have been "It is [b]unlikely[/b], on a [b]modern 4 core system[/b] with today's multi-threaded games, for HT to show any more than ~33% improvement".
For what it's worth: I picked up on your 2 core vs 2 core HT graph and the 76% came from the following calculation:
FC3
2 core w/o HT = 25
2 core w/ HT = 44
(44-25)/25 = 76% scaling. It's a bad comparison however, because FC3 is a "4 thread game" as you call it, and doesn't really tell us much.
Now, please listen...
Those graphs and your calculations purporting to show 2C-HT being close to 4 Cores are not good, because they are being severely GPU limited by that guy's 670 on Ultra.
Here is an actual better version of your FC3 graph comparing your 2c4t 3220 vs your 4c8t 3770k where there is [u]no GPU bottleneck[/u], or at least less of one.
[img]http://www.pcgameshardware.de/screenshots/original/2012/11/FC3-Test-CPUs.png[/img]
3770k = 76fps
3220 = 47fps
As shown by your own game test case, with your own choice of CPUs being compared, in a properly non-bottlenecked scenario, unfortunately HT is not nearly as good as real cores - though I genuinely wish it was.
That's not to say there are no outliers; there probably are rare ones.
[quote="bo3b"]
you did not use to be a troll, and it's really starting to piss me off. [/quote]
I don't want to piss anyone off. This is probably a good point to withdraw if that's the case.
I wish you the best of luck with your 8700K build my friend. An 8 core Coffee Lake, in my humble opinion, won't give hugely increased performance over an 8700K - I don't think you have anything to worry about.
A word of advice when overclocking, if I may:
There is a lot of FUD about max voltages while overclocking. Intel have stated the max Kaby Lake voltage as something like 1.512V before breakdown - I have not done the leg work for the 8700K but it ought to be similar. As long as your cooling is good and your temps stay below 95°C when stress testing, IMO it's perfectly safe to run at that max voltage (taking LLC into account). Be wary of people saying it is overly dangerous to go over even 1.4V. My 7700K has been running at 1.5V for 1.5 years now, and any degradation in lifespan will still mean the CPU will have outlived its usefulness to me.
Leave the cache at ~4.7GHz max - cache speed really doesn't matter with any significance IIRC, but increasing its speed destabilises an otherwise solid overclock.
Don't use the AVX negative offset unless you are actually going to be encoding etc. Games such as The Witcher 3 actually use AVX, though not heavily. By limiting the AVX frequency, you will limit your overclock to that frequency in these games. Running your max OC with AVX at the same frequency will be fine for games that use it. This is the difference between 5.1GHz and 4.7GHz. The only downside is that in the rare instance you want to encode, the system will become unstable - but then you can simply turn off AVX in the encoding software or tone down your OC.
Look here to choose your memory:
[url]https://uk.hardware.info/reviews/8085/10/28-ddr4-memory-kits-comparison-test-the-best-memory-for-coffee-lake-and-ryzen-benchmarks-games[/url] - I can highly recommend the top Trident 3600 CL16 kit as well as the CL15, which is also available...
All the best!
[img]https://image.ibb.co/fMYbP8/chart.png[/img]
[quote="bo3b"]That i5-8600K is an interesting spot, because it's a six core, no hyperthread setup. I've yet to run across a game that uses more than 4 threads, so that would likely be a good value spot for at least a year or two.
You won't likely be able to overclock it as well. The way chip binning usually works is that they try to make as many of the high end parts that they can, shooting for best i7-8700K. Then defects in manufacturing show up, and the parts that are rejected wind up in lower tiers.
So for example, an i5 is an i7 part that failed the hyperthread testing in at least one core, so they turn it off altogether, and sell it as an i5. Same thing is true for lower tiers, like i3, and even the laptop type parts with lower clock speeds. They down-bin the parts to wherever they can pass their tests and sell them there. It's all the same design, not different designs for each type.
That means that the parts you get for something like an i5 already failed some test, and thus will not be the best of the crop.
But as far as being a good value, I think a 6 thread chip today would be a good value proposition.
For Surround, I'm less sure because I've not run it, but in general you are capped by GPU performance in Surround, so the CPU should matter less.
For VR, it definitely will not matter, the vast majority of the VR software is single threaded. The two big game engines of Unreal and Unity do not effectively use multiple cores.
I don't know anything about Dolphin, and will defer to others.[/quote]
Wow, this is very interesting. I had no idea that's how Intel winds up selling CPUs without HT, i.e. an i5-8600k is an 8700k that failed an HT test on one core. I learned something today, thanks for this.
i7 8700k @ 5.1 GHz w/ EK Monoblock | GTX 1080 Ti FE + Full Nickel EK Block | EK SE 420 + EK PE 360 | 16GB G-Skill Trident Z @ 3200 MHz | Samsung 850 Evo | Corsair RM1000x | Asus ROG Swift PG278Q + Alienware AW3418DW | Win10 Pro 1703
[quote="bo3b"][quote="RAGEdemon"]Example 1: Video shows a heavily overclocked i5 8600K at 5GHz against an i7 8700k at stock, mostly 3.7GHz. Apparently, according to bo3b, "it can directly answer whether threads or GHz matters most". How one can claim that an 8700K which is clocked lower to an arbitrary degree is a good comparison for measuring the difference hyper-threading makes, is completely beyond me. This is bad because we do not know to what degree clock speed affects performance, and we certainly cannot gleam from this anything useful about hyper-threading without both CPUs working at the exact same frequency, and certainly not reliable >6 core performance. This is a fallacy known as https://en.wikipedia.org/wiki/False_equivalence [/quote]
I'm well aware of all the debating tricks, including false equivalency and straw man arguments.
You are presently doing a good job at obfuscating the problem with debating tricks, and not so much with actually looking at the data and engaging your brain. I'm not some idiot on the web, I do this for a living, and I know what matters. You [i]know[/i] this. I also know you are typically well informed and willing to do actual research, so it's baffling to me that you are resorting to internet forum gimmicks to try to make people question my results.
To be clear, I'm not questioning your motives, I know you want correct answers as well.
That video is a [i]terrific[/i] test case. Quoting Neil Degrasse-Tyson does not in fact make it bad data, or lack scientific value.
Within the constraints of that test environment, there is absolutely solid data to be gleaned. You can dismiss it because it's not 'properly scientific', but if you adopt that attitude, then nothing will ever be good enough for you, because you cannot possibly control all variables. Real scientists still do experiments, even if they cannot control all variables.
In my judgment, there isn't anything wrong with his test case. He has set the video to 1080p, and disabled vsync. He uses Low settings. 1080p is not optimal, but it absolutely does not invalidate his tests - as long as GPU usage does not cap out. As I mention above, I call out each instance where it's GPU bound, and it's not that often. Some GPU bound tests are fine, as long as it is free-running for maximum fps, which will of course max out the GPU, as long as we have [i]enough[/i] threads. Moreover, it also includes the total CPU % usage, so we can see exactly how many threads are active. Have you actually watched this video?
The reason this is super interesting is because it's literally threads vs. GHz.
i7-8700K (12 threads) at 3.7GHz.
i5-8600K (6 threads) at 5GHz.
The architecture is identical, including RAM, motherboard, video card, SSD, OS. The games are run through identical scenarios, using identical settings. His experimental controls seem as good as they can be. It is literally Threads vs. GHz for these 8 games. Am I missing something?
Maybe we can argue that those 6 HyperThreads aren't 'real' threads, but we both know that HyperThreads are pretty good. That i7 is literally twice as fast as an i5 for some scenarios, because it has twice the threads. And if a 12 thread CPU cannot best a higher clocked 6 thread CPU, then any threads above 6 do not matter.
The question at hand is whether more threads matter. And this test case proves they do not. (For at least these 8 games.) If the 8600K were clocked the same as the 8700K, that would just make the results equal. The test itself aptly demonstrates that GHz trumps threads for these games.
Please take this specific test case, and explain why you think it is invalid. Please, no rhetorical tricks, let's just talk about the data. I value your opinion, otherwise I wouldn't bother to write back.
What [i]exactly[/i] am I missing about this test case that you think makes it invalid?
[url]https://youtu.be/SkTxXrqE5F0?t=1[/url][/quote]
Yeah, I get your argument, but doesn't DX12 change things? Looking at the data you cited, where with BF1 the 8700k at 3.7 GHz had higher FPS than the 8600k at 5.0 GHz ("threads win"), was BF1 running with DX12 enabled?
Because like it or not, both DX12 and Vulkan are the future, and unlike DX11, here thread count absolutely matters.
So sure, you may save $100 (if that) opting for an 8600k over an 8700k, but why shoot yourself in the foot in terms of future-proofing, especially considering that we are due for a console refresh in the next year or two from both Microsoft and Sony, who may well implement DX12 and/or Vulkan and abandon DX11 altogether:
https://www.anandtech.com/show/12547/expanding-directx-12-microsoft-announces-directx-raytracing
So yeah, this argument is valid now, where the majority of titles are still DX11, but I mean, we're talking about a cost saving of $100 for basically an 8700k without HT? Considering the cost outlay of a Coffee Lake build and the high probability of DX12 and Vulkan becoming more common in the next 3-5 years, this doesn't seem very prudent to me.
What also isn't stated is whether or not 8600k's, generally speaking, can attain the same kind of overclock as 8700k. I mean, 8600k is basically an 8700k that failed an HT test on at least one core and is now sold as an 8600k with HT disabled, does it also have a weaker IMC and other deficiencies resulting from being cut on the edge of the silicon wafer etc?
All of this may seem ancillary to the comparison between threads and frequency, but we are ultimately comparing an 8600k to an 8700k here in the end and so these, at least to me, are salient points.
Full disclosure: I'm running an 8700k @ 5.0 GHz (5.1 GHz with -1 AVX offset) at 1.365v, delidded, under a monoblock with roughly 1kW worth of radiator capacity. I love this chip and the peace of mind that I won't have to worry about upgrading again for at least 5 years, while we move away from DX11 towards DX12 and Vulkan, and while significantly faster GPUs mean that a CPU bottleneck may become more commonplace at 3440x1440 120 Hz (AW3418DW; I've given up on 3D Vision, sorry).
i7 8700k @ 5.1 GHz w/ EK Monoblock | GTX 1080 Ti FE + Full Nickel EK Block | EK SE 420 + EK PE 360 | 16GB G-Skill Trident Z @ 3200 MHz | Samsung 850 Evo | Corsair RM1000x | Asus ROG Swift PG278Q + Alienware AW3418DW | Win10 Pro 1703
I don't want to argue mate. All I care about is facts. I really don't care for the whole macho "I'm right; you're wrong" thing - in fact, let's get that drivel out of the way. I'm wrong; I'm the bad guy;- You're right.
With that said, let's talk about why this video isn't great and your above interpretation isn't great as a consequence.
1.
Let's start with the https://en.wikipedia.org/wiki/False_dilemma fallacy. We are not in any way proposing to swap "GHz" for Cores - it's not a choice between the two. My proposition is that as we can now have both 5GHz cores and more cores at that, all wrapped up nicely on Intel's Ring Bus. We are no longer limited to the past CPU architectures where more cores meant vastly slower clocked cores and slower interconnects (Intel Mesh vs Ring Bus). Why not have both if there is no trade-off? In fact, the extra cache would be better even if the extra cores are never used.
2.
No mate, that's not how hyper-threading works at all. In game related scenarios, games which make use of extra threads which the CPU can make use of, i.e. 8 thread game on a 4c8t CPU, the game will only run ~33% faster at absolute best, and often far lower than that.
Sometimes games actually have negative scaling with HT on. To lay things to rest, check out this good video demonstrating how in GTA5 and other games, hyperthreading is negatively impacting game performance, on an 8700k no less; 6c6t vs 6c12t, at the same clocks:
The video is also a good presentation of HT on vs HT off for other games too, where GPU is not saturating to >90% or so.
Hyperthreading is in no way ever equivalent to physical cores where the game would indeed run near 100% as fast, everything else being equal and optimised, and the threads being present to take advantage of the extra cores.
Let me show you a Cinebench graphic highlighting how low virtual core performance is compared to real physical cores:
Notice how:
4 cores = ~500 points
8 cores = ~1000 points
4 cores with HT = only ~660 points, i.e. (660-500)/500=32% performance increase.
Transferring this into the absolute best case scenario as multi-threaded game i.e. Ashes of the Singularity:
Notice how even in Ashes, 8 core CPU is 60% faster than a 4 core CPU but a 4c8t CPU is only 15% faster?
The simple fact is that one cannot substitute HT cores for real cores and project a performance difference with any kind of accuracy.
3.
I'm sorry but this is wrong for multiple reasons.
My high school physics teacher was an awesome lady who I have always held in high regard. She once said that a good way to imagine the results of an experiment is to take things to the extreme. Let me show you why this is wrong by using your own words:
"If a 12 thread 1GHz CPU cannot best a 5GHz 6 thread CPU, then any threads above 6 do not matter.
Can you see why this statement makes no sense?
Yes. As you might realise, we simply do not know how large an effect the lower clock speed would have on performance. For this to be any kind of real experiment that one might glean results from, both CPUs must be running at the same frequency as a critical experimental control - in fact, the entire setup needs to be as identical as possible, and eliminate all bottlenecks. The only difference can be that one has 6 cores, and the other has 12 cores, or if we are testing what difference HT makes, then one CPU has HT on, and the other CPU has HT off.
A note on hyperthreading itself:
Usually the OS has to juggle threads onto a CPU. HT simply presents more logical cores to the OS, so the OS dumps its threads onto the CPU and the CPU then does the juggling, more efficiently. It does this by running one thread's work while other threads are stalled, usually because they are waiting on memory retrieval etc. It's just a trick to make the CPU more efficient, and can never be equivalent to extra physical cores. This also means that the reported CPU % usage needs to be taken with a huge grain of salt.
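To make that last point concrete, here is a tiny sketch (purely illustrative; it assumes the third-party psutil package) showing that the OS schedules onto logical processors rather than physical cores, which is why HT muddies the reported CPU %:
[code]
# Illustrative only. psutil is a third-party package (pip install psutil).
import os
import psutil

logical = os.cpu_count()                    # what the OS and games "see"
physical = psutil.cpu_count(logical=False)  # actual physical cores

print("Logical processors:", logical)
print("Physical cores:", physical)
# On a 6c12t 8700K this prints 12 and 6, so "50% CPU usage" can already mean
# every physical core has work scheduled on it.
[/code]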
Otherwise, "that's cool man, whatever you say" :)
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Are we looking at 2-3 more years before the market (or to be precise, the console market) begins to adopt something like Ashes?
8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64
https://www.gamersnexus.net/guides/2963-intel-12k-marketing-blunder-pcie-lane-scaling-benchmarks
The difference is almost insignificant.
8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64
IMO, the next "generational leap" that you asked about is an 8 core which is at least as good for gaming (once overclocked and cooled properly) as its counterparts - a 4-core 7700K and a 6 core 8700k, and hopefully cost effective (8700k is £52 per core being significantly cheaper than the 7700k at £72 per core - they are both pretty much the same price overall). This would be the projected 8c16t 9900K.
15 years ago, almost all games were single core. 10 years ago, they were mostly dual core. With the advent of 8 core XB1 and PS4, over the past 5 years, games have started to use 4+ cores. The smart money says that 5 years from now, the threads will double again. How long do you intend to keep your CPU?
It's unfortunate that Ryzen isn't as good for gaming; nevertheless, an apples to apples comparison between the 6 core 12 thread 2600X and the 8 core 16 thread 2700X at the same clock can be quite enlightening. For most games today, the difference is marginal, but it's there (quick scaling sketch after the numbers below): https://www.gamersnexus.net/hwreviews/3288-amd-r5-2600-2600x-review-stream-benchmarks-gaming-blender/page-3
Assassin's Creed: Origins
8C16T 2700x@4.2GHz = 111.8
6C12T 2600x@4.2GHz = 103.0
Watch Dogs 2:
8C16T 2700x@4.2GHz = 111.8
6C12T 2600x@4.2GHz = 105.0
Project Cars:
8C16T 2700x@4.2GHz = 113.8
6C12T 2600x@4.2GHz = 111.0
Ashes:
8C16T 2700x@4.2GHz = 50.7
6C12T 2600x@4.2GHz = 44.0
GTA5:
8C16T 2700x@4.2GHz = 122.0
6C12T 2600x@4.2GHz = 117.3
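As promised above, here is the scaling calculation for those numbers as a small Python sketch (illustrative only; averages as quoted from the GamersNexus review):
[code]
# 6c12t 2600X vs 8c16t 2700X at the same 4.2GHz, average fps as quoted above.
results = {
    "Assassin's Creed: Origins": (103.0, 111.8),
    "Watch Dogs 2": (105.0, 111.8),
    "Project Cars": (111.0, 113.8),
    "Ashes": (44.0, 50.7),
    "GTA5": (117.3, 122.0),
}

for game, (six_core, eight_core) in results.items():
    gain = (eight_core - six_core) / six_core * 100
    print(f"{game}: +{gain:.1f}%")
# Roughly +2.5% to +15%: marginal in most of today's titles, largest in Ashes.
[/code]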
Whether one sees value here, or with a potential future, and then whether one wishes to wait for the 8 core big brother of the 8700K or not is a personal choice :)
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
But why the 9900K though? I thought the natural progression would be the 9700K.
Btw, how come none of you are running 79xx or 89xx this gen?
8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64
The 8700K is the first 6 core to have neither of these weaknesses, and nor should the expected 8 core 9900K.
Waiting for it might be worthwhile, though moving from an 8700K to a 9900K as an upgrade, the difference will be marginal - not likely worth the money, because the older second hand system will have significantly depreciated in value at that point.
It's similar to buying last year's Mercedes Benz at full price when the new model is about to come out, and then trying to sell the old model to 'upgrade' to the new one. You'll end up paying a premium on both, but also taking a huge hit when selling the old model, because no-one wants last year's car... but it happens!
Whatever makes you feel comfortable, man, go for it!
If I had an 8700K, I'd wait for a 12/16 core CPU on the next fabrication node, perhaps even 7nm for an upgrade to be worthwhile, but that's me.
I hope this thread helps others too down the line...
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
DAMNNNN.. I thought I made a substantial leap with i5/4th gen to i7/8th gen with pretty significant price:performance.
So I guess the 9th gen counterpart of the i7-8700K (i7-9700K) wouldn't be a good upgrade since it's still 6 core? Dude, that i9-9900K will cost you a pretty penny and with Silicon lottery, it becomes exorbitant. So are you going to upgrade to i9-9900K from your i7-7700K?
Btw, has the i9 line ever been good to OC? How come I never hear any enthusiast/overclocker attempt to OC that line? I thought it was primarily for servers, like the old Xeons (which are a poor fit for gaming).
8700K 5.0Ghz OC (Silicon Lottery Edition)
Noctua NH-15 cooler
Asus Maximus X Hero
16 GB Corsair Vengeance LPX RAM DDR4 3000
1TB Samsung PM961 OEM M.2 NVMe
MSI Gaming X Trio 1080Ti SLI
Corsair 1000RMi PSU
Cougar Conquer Case
Triple Screens Acer Predator 3D Vision XB272
3D Vision 2 Glasses
Win 10 Pro x64
Your understanding here is incorrect. The way that hyperthreading works is by utilizing underused resources because of stalls in the CPU pipeline. It's essentially a way to keep a core working, when it otherwise would be waiting on things like memory accesses. One memory access for example is 100s of instructions for a core, all lost time, unless you just happen to have something ready to go on the corresponding hyperthread.
It's not a real thread in the sense that it cannot replace a full core, for most scenarios. It's a way to use otherwise wasted clock cycles.
If code is super well written, and performance optimized, it's not going to help much if at all. If a program has been performance optimized to keep the pipeline filled, then there is no 'slop' that can be recovered, and the hyperthread will add nothing. However... we know that most code in the world is terrible, and very rarely well optimized. Which is why in the real world, we actually get some pretty good value out of hyperthreading.
You are absolutely incorrect that it cannot scale to 100%. It can, and it does, depending upon the scenario.
In a hypothetical case, based simply on the design, if a core were getting 50% stalls for some reason, running at exactly half its rated speed - and a second process/thread happens to not conflict in any way with the first thread's resources - then you could get 100% scaling. Hypothetically.
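As a back-of-envelope sketch of that hypothetical (a toy model with assumed stall fractions, not a benchmark):
[code]
# Toy model: a core spends stall_fraction of its cycles waiting (e.g. on memory),
# and the sibling hardware thread fills fill_efficiency of those stalled cycles.
def ht_scaling(stall_fraction, fill_efficiency):
    busy = 1.0 - stall_fraction                    # useful cycles without HT
    recovered = stall_fraction * fill_efficiency   # stalled cycles the sibling uses
    return recovered / busy                        # gain relative to one thread

print(ht_scaling(0.5, 1.0))   # 1.0 -> +100% scaling, the hypothetical above
print(ht_scaling(0.3, 0.8))   # ~0.34 -> +34%, closer to typical real-world HT gains
[/code]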
Now, here is an actual example of this happening.
https://techbuyersguru.com/dual-core-quad-core-and-hyperthreading-benchmark-analysis?page=2
In this graph, we are looking at Far Cry 3. The game parameters are not ideal, but they are the same between each test. And same CPU frequency at 3.3GHz.
The 3220 chip is a 2C/4T chip. 2 real cores, 4 threads.
The 3770K is a 4C/8T chip. 4 real cores, 8 threads.
The 3220 is an older and weaker chip, and yet it is providing almost exactly the same performance as the better chip with HT turned off. So, 2C/4T is working as well as 4C/4T.
If there is one example, clearly there will be more.
Mostly you are right, HT does not scale super well in general, and especially not in games. Although I'm trying to point out that it is highly dependent on the scenario. Averages are OK, but tell you nothing about specific scenarios.
And it is fairly clear that the 4 thread point is the sweet spot for best scaling. As you add more and more threads and HyperThreads, the value of hyperthreading is a lot less.
Edit: RageDemon addresses this example a few posts further down, and it seems fairly clear that this is a bad example. The author of the original post shows a GPU graph that is not running at 100%, but for a different game. He does not show one for this test case, so it's entirely possible it is GPU bound for this example.
Here is a better example of high HT scaling. In the comparison of i3 (2C/4T) to i5 (4C/4T) for this video, we see the 2C/4T i3 at roughly 85% of a 4C/4T i5.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
Even with my 7700k, I'll likely wait till a 12 core on a 7nm process.
Listen, don't worry about Intel "generations"; they are only marketing speak made up by Intel to try to sell more CPUs - they don't actually mean anything. A Coffee Lake is a Kaby Lake is a Skylake. There is no real difference. The platforms too are very similar - a Z370 is a Z270 is a Z170 with minor differences.
In fact, you would struggle to see any performance difference between a Skylake @ 5GHz and a Coffee Lake @ 5GHz.
Also don't worry about the i9/i7 moniker. In the past, a 6 core would have been a server grade CPU, which was a poor choice for gaming unless heavily overclocked (I actually had a 6C12T Xeon X5660 @ 4.4GHz prior to my 7700K). The 6 core 8700K was the first chip to break the status quo - to be good for gaming while actually being cheaper per core and about the same price overall as a 7700K. The 9900K is expected to follow the same trend of performance and price - not the old style i9's fancy pricing and lack of gaming performance.
@ bo3b, you will struggle to find any game which shows even near double the performance on a modern 4 core + HT vs 4 core without HT. You speak of Ashes as an outlier and refuse to accept it as a potential future, but then post results from a game from 2012 running on a weak 2 core chip, using words such as "hypothetical scenarios" to try and illustrate how HT can have near 100% scaling in modern times. This is known as the https://en.wikipedia.org/wiki/Faulty_generalization fallacy.
Hypothetically, J-Enermax could become the president of the world tomorrow, but in reality it won't happen. (I actually hope it does, that would be great to see!)
Btw, your example graph is 76% scaling, not 100%, but I see you have cherry picked the best graph to support your claim. Please allow me to post the rest of the graphs from the very article you linked, even on this ancient 2 core processor with 2012 games - please pay attention to the y-axis, it does not start at 0 which I think is throwing you off:
2 core CPU:
HT OFF = 80 fps
HT ON = 92 fps
Scaling = (92-80)/80= 15%
4 core CPU:
2% HT scaling.
2 core CPU:
HT OFF = 52 fps
HT ON = 65 fps
Scaling = (65-52)/52= 25%
4 core CPU:
-2% HT scaling.
2 core CPU:
HT OFF = 54 fps
HT ON = 43 fps
Scaling = (43-54)/54 = -20%, i.e. HT ON is actually slower here.
4 core CPU:
0% HT scaling.
As you can see, unfortunately, there is no mythical 100% scaling or anywhere near that in modern 4 core+ CPUs and games, as the graphs above from your own article show. I wish this was not true as I'd love to think my 4 HT cores were near the performance of real cores, but alas we all have to live with the disappointment that they might give 25% on a good day, usually give no significant gain, and sometimes even give a negative gain altogether :)
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
As RageDemon notes, there is really no way to tell. This is more of a Rorschach test, and everyone will have different opinions on where it will go.
My take is the opposite of RageDemon's, I don't think we are going to have others follow in the Ashes footsteps, because they really do not need to, and it's damn hard to make software like they have done.
There is also the case that scaling gets worse and worse, the more cores/threads you have, with diminishing returns. If there is required communication between threads, you will follow Amdahl's Law. If you have something truly parallel, it can work, but in general game code is not that way. Graphics are, because one rectangle of the screen is nearly independent of any other, and with tiled lighting, this is even more true. So parallelism works great with pushing pixels.
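A rough sketch of Amdahl's Law for illustration (the 60% parallel fraction is just an assumed example, not a measurement):
[code]
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n), where p is the parallel fraction.
def amdahl_speedup(parallel_fraction, n_threads):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_threads)

for threads in (2, 4, 6, 8, 12, 16):
    print(threads, round(amdahl_speedup(0.6, threads), 2))
# Climbs quickly at first (1.43x at 2 threads) then flattens towards the
# 1 / (1 - 0.6) = 2.5x ceiling - diminishing returns from every extra thread.
[/code]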
Keeping track of player movements and maps, and triggering actions, and game play- not so much. For open world, or something like Ashes where things are mostly independent, it can help. For something like a first person shooter, it's not as likely to work, and much harder.
The two big game engines of Unreal and Unity do not currently support this level of concurrency, and it's not at all likely they will, because of the massive code base, and the overwhelming complexity that multi-threading brings. Bugs in multi-threaded code are universally acknowledged as the hardest to solve.
In 3D Vision, I see even less thread usage, not keeping up with 2D. Driver maybe, 3 core bug, not enough performance, hard to say, but it's nearly always worse. Same is true for VR usage. The Oculus SDK is better than OpenVR, and neither support multi-threading in any sensible way. I don't care about 2D, so more threads is even less useful to me.
Lastly, on consoles, do they need massive scale? Maybe it can open up some new game types, new multi-player venues or bigger scale, but in general, their target is quite a lot lower, even shooting for 30 fps as their goal.
So for all those reasons, my view is that we are not likely to get great thread use in the next 5 years. Moreover, from my buying perspective, I'm not going to use this computer in 5 years anyway, so it's mostly irrelevant. If I'm right I use this currently best machine for at least 3 years, because 9xxx is not going to turn the dial. If I'm wrong, and somehow the 8 full cores of 9xxx is vastly superior, I'll sell this rig and buy anew. It would be a loss, except that I get to play maxed out for an easy 6 months anyway, so loss is relative.
Still, for each person it will depend upon your prediction.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
https://youtu.be/8qkXKmpOWa0
I'm actually not sure what to think about this. His test seems pretty solid, and it's a good back to back comparison of On vs. Off in those 9 games. For Hitman, Fallout 4 and GTA 5, it suggests that HT on is actually a negative. HT off is faster.
There are no scenarios here where HT On was providing better performance. That's probably because none of these games use more than 6 threads, and so 6 real cores is perfectly fine. Further evidence that the i5-8600K is a perfectly great solution for today.
Here is another test of Hitman specifically, with a number of different CPUs.
Their suggestion is that looking at minimum frame rates, that higher thread count CPUs are showing benefit.
https://arstechnica.com/gadgets/2017/07/intel-core-i9-fastest-chip-but-too-darn-expensive/3/
The graph is mislabelled with i7-7600K, which should be i5-7600K: a 4C/4T chip, versus the 4C/8T i7-7700K. It's notable that the HT enabled chip is quite a bit better here, which is somewhat the opposite of the video. However, if Hitman is really using 6 threads, that would be consistent. It's clear that Hitman does not use more than 6 however, as the i9-7900X with 10C/20T is not better on the average, let alone the more important minimums. And that's at least 10 real cores, compared to 4 real, 4 virtual.
At least in this case, those added 4 virtual cores are acting at least as good as 6 real cores. If a virtual HT core is worth at least 50% of a real core, then an i7-7700K acts like a 6 core chip.
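That back-of-envelope looks like this (illustrative only; the 50% figure is the assumption stated above, not a measurement):
[code]
# If each HT sibling is worth some fraction of a real core, the chip behaves
# like this many "effective cores":
def effective_cores(physical, ht_siblings, ht_worth):
    return physical + ht_siblings * ht_worth

print(effective_cores(4, 4, 0.5))   # i7-7700K: 4 + 4*0.5 = 6.0 effective cores
[/code]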
In the Rise of the Tomb Raider case, in the video, it shows as mostly no difference, with a slight nod to the HT Off case, because parts of the benchmark are definitely running faster there. This case I don't like though, because the GPU is at 99% the whole time, but it might be OK if vsync is off and it's free running as fast as it can. That would be consistent with a 4 thread game.
In the Ars Technica case:
We are seeing that HT On for i7-7700K is a better choice than Off for i5-7600K. For the minimums, where it really helps, not the average where there is no difference.
The video doesn't speak to minimums, which is the biggest weakness, as that is always more interesting than average. Seeing the graph live can give you a rough idea though, and it's better than a single number for an entire run.
Here again, the monster chip 7900X with 10C/20T is not as good as the i7-7700K.
My conclusion here, which if you throw in a $1.25 will buy you a cup of coffee, is that HyperThreaded CPUs are more valuable than they seem, sort of filling in gaps that otherwise cost performance.
I plan to buy an i7-8700K and OC it to at least 4.9GHz. The i5-8600K is still a terrific part, and if money is tighter, I would not hesitate to go that route.
For 3D in particular, I would not go even cheaper however, because we need GHz, and the K part is better than something cheaper like the i5-8400. Because of our need for GHz in 3D, and the lack of thread scaling above 6 threads, I would not go with an AMD part.
And I have to agree with RageDemon that you don't want a 4 thread part anymore. Which I did not believe before digging into this, and this discussion. It's not strictly necessary for Unity and Unreal games, but anything FrostBite engine can use 6 threads. The i7-7700K is still a valid choice though, because the HT can legitimately fill in the gaps for 6 thread games.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
No, you are confused once again. Or deliberately trolling. And bringing up an irrelevant fallacy.
Where do you get 76% from? The two middle pieces of my original best case scenario are the interesting spot. That's 3220@3.3 w/ HT, and 3770K @3.3 w/o HT. The left element of 3220@3.3 w/o HT is not interesting at all, because that's a 2 thread part for a 4 thread game, so of course it will be bad.
In the center two, I'm seeing
35 minimum for 3220
36 minimum for 3770K
That's (36-35)/35=3% less. Not 100%, but surely 97% is good enough?
You said:
Emphasis mine.
I gave you an exact example where your statement is incorrect. A 2C/4T part is the equal of a 4C/4T part in this exact gaming scenario. That means a HyperThread can in fact be as good as a real core, and this is a real world example.
This is what we in engineering call an existence proof. If there is one, there are others.
I never said it was common, I didn't even say it was a very good example. It's an example that disproves your overly broad statement. Using the best example is what you do for an existence proof.
Ashes is good for an existence proof that it is possible for a game to be massively multi-threaded and use all cores. Extending that into the unknowable future as the future of all games is your fallacy.
Again with the rhetorical forum gimmicks, you did not use to be a troll, and it's really starting to piss me off.
I never said that it was a generalization, I said that it can happen. You said that it could not ever happen. Bringing up the generalization fallacy, when I never said that, is particularly lame.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
https://www.3dmark.com/compare/fs/14520125/fs/11807761#