[quote="coffeeonteacup"]
I've read that thread and I'm talking about 3D. My i5 3570K wasn't single-thread bottlenecked in Witcher (at least not badly); all cores were pegged close to 100%. Those older i5s are definitely not holding up well in some of the newer games. Witcher 3 definitely saw a 30% boost when I switched to the 2600K. I tested them for hours. I didn't "upgrade", I just swapped in the 2600K from another PC and couldn't believe how much smoother Witcher was. So I put the 3570K back in just to compare them. I didn't imagine anything: a 30% boost on the 2600K, and much smoother. IPC is about the same in both, the 3570K a little faster. I'm sure there are places that are more single-thread bottlenecked, like the Novigrad square. Sadly I didn't run through it in my testing; I went from the smaller square on the left by the docks and then up to the upper town, skipping the main square. I can try later to see what fps I get in the Novigrad main square, but overall the 2600K was so much smoother in Witcher that it was hard to believe. My route in Novigrad was actually where the biggest difference in minimum fps showed up: from the 40s to barely dropping below 60.
Data also heavily suggested that Fallout 4 was badly single-thread bottlenecked, but I got a 5fps boost in minimum fps. Sadly I didn't try any other games because benchmarking takes so long, but I stand by what I claimed. I think masterotaku will get more than a 30% boost in Witcher (many other games too, not all) if he upgrades to a 5GHz 7700K with fast DDR4 from a 4670K at 4.3GHz. I bet it's a 50% difference in Witcher and Fallout 4, for example.[/quote]
I think you will find that the difference is due to the IPC (single-threaded performance) improvements in the later generations rather than to multi-threaded performance. I say that because I too have extensively benchmarked TW3, both for the CPU usage bug and for my own upgrade path.
I say this because I am coming:
From 6 core 12 thread Xeon x5660 @ OC to 4.4GHz (~same generation core as a 2600K)
To 4 core 8 thread 7700K @ OC 5.1GHz
The difference at Novigrad square (arguably the most CPU-intensive location in the game) was going from 40fps to 58fps (a 45% improvement) with FEWER cores.
I should also point out that I went from DDR3 1600MHz (equivalent to DDR4 2333MHz) to DDR4 3600MHz, which would have also impacted performance.
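For raw peak bandwidth, which is only one part of that picture (the "equivalent" comparison above is presumably about real-world performance rather than bandwidth alone), the dual-channel numbers work out roughly as follows. A quick sketch:
[code]
# Peak theoretical dual-channel bandwidth: transfer rate (MT/s) * 8 bytes * 2 channels.
def dual_channel_gb_s(mt_per_s):
    return mt_per_s * 8 * 2 / 1000

for kit in (1600, 2333, 3600):
    print(f"DDR-{kit}: {dual_channel_gb_s(kit):.1f} GB/s peak")
# DDR-1600: 25.6 GB/s, DDR-2333: 37.3 GB/s, DDR-3600: 57.6 GB/s
[/code]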
For completeness, you can go there (or to any other heavily CPU-dependent location where GPU usage is <90% and FPS is <60), alt-tab out, and disable the 4 virtual cores 1,3,5,7 via the Task Manager affinity setting. You will note that even though the game is now only using the 4 real cores 0,2,4,6, performance stays about the same.
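If you would rather script that affinity test than click through Task Manager each run, a minimal sketch is below. It assumes the Python psutil package, and the process name is just a placeholder.
[code]
# Minimal sketch: pin a game to the even-numbered logical processors, i.e. one
# thread per physical core on a 4C/8T CPU, mirroring the Task Manager test above.
import psutil

GAME_EXE = "witcher3.exe"       # placeholder process name - adjust for your game
PHYSICAL_CORES = [0, 2, 4, 6]   # even logical CPUs = the 4 real cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == GAME_EXE:
        proc.cpu_affinity(PHYSICAL_CORES)   # restrict scheduling to these CPUs
        print(f"Pinned PID {proc.pid} to {proc.cpu_affinity()}")
[/code]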
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
[quote="RAGEdemon"]
I say this because I am coming:
From 6 core 12 thread Xeon x5660 @ OC to 4.4GHz (~same generation core as a 2600K)
To 4 core 8 thread 7700K @ OC 5.1GHz[/quote]
Man, isn't the X5660 (Westmere) the same generation as the i7 980X/990X (Gulftown), with the newer 2600K (Sandy Bridge) roughly 15-20% better in single-core performance?
On the topic, I am using an X5650 @ 4.4GHz paired with a single 1080 @ 2000MHz.
Playing Fallout 4 at 1080p I am getting nearly the same FPS on Low as on Ultra settings.
No doubt, this CPU is weak af.
Soon after the Ryzen release I hope to figure out what the best CPU for 3D Vision would be.
Ryzen 1700X 3.9GHz | Asrock X370 Taichi | 16GB G.Skill
GTX 1080 Ti SLI | 850W EVGA P2 | Win7x64
Asus VG278HR | Panasonic TX-58EX750B 4K Active 3D
[quote="RAGEdemon"][quote="coffeeonteacup"]
I've read that thread and I'm talking about 3D. My i5 3570K wasn't single-thread bottlenecked in Witcher (at least not badly); all cores were pegged close to 100%. Those older i5s are definitely not holding up well in some of the newer games. Witcher 3 definitely saw a 30% boost when I switched to the 2600K. I tested them for hours. I didn't "upgrade", I just swapped in the 2600K from another PC and couldn't believe how much smoother Witcher was. So I put the 3570K back in just to compare them. I didn't imagine anything: a 30% boost on the 2600K, and much smoother. IPC is about the same in both, the 3570K a little faster. I'm sure there are places that are more single-thread bottlenecked, like the Novigrad square. Sadly I didn't run through it in my testing; I went from the smaller square on the left by the docks and then up to the upper town, skipping the main square. I can try later to see what fps I get in the Novigrad main square, but overall the 2600K was so much smoother in Witcher that it was hard to believe. My route in Novigrad was actually where the biggest difference in minimum fps showed up: from the 40s to barely dropping below 60.
Data also heavily suggested that Fallout 4 was badly single-thread bottlenecked, but I got a 5fps boost in minimum fps. Sadly I didn't try any other games because benchmarking takes so long, but I stand by what I claimed. I think masterotaku will get more than a 30% boost in Witcher (many other games too, not all) if he upgrades to a 5GHz 7700K with fast DDR4 from a 4670K at 4.3GHz. I bet it's a 50% difference in Witcher and Fallout 4, for example.[/quote]
I think you will find that the difference is due to the IPC (single-threaded performance) improvements in the later generations rather than to multi-threaded performance. I say that because I too have extensively benchmarked TW3, both for the CPU usage bug and for my own upgrade path.
I say this because I am coming:
From 6 core 12 thread Xeon x5660 @ OC to 4.4GHz (~same generation core as a 2600K)
To 4 core 8 thread 7700K @ OC 5.1GHz
The difference at Novigrad square (arguably the most CPU-intensive location in the game) was going from 40fps to 58fps (a 45% improvement) with FEWER cores.
I should also point out that I went from DDR3 1600MHz (equivalent to DDR4 2333MHz) to DDR4 3600MHz, which would have also impacted performance.
For completeness, you can go there (or to any other heavily CPU-dependent location where GPU usage is <90% and FPS is <60), alt-tab out, and disable the 4 virtual cores 1,3,5,7 via the Task Manager affinity setting. You will note that even though the game is now only using the 4 real cores 0,2,4,6, performance stays about the same.
[/quote]
Still, we are talking about going from a 4670K@4.3GHz to a 7700K@5GHz. I don't remember exactly how much faster Kaby Lake's single-thread performance is compared to Haswell's, but it's around 10 percent IIRC. When you add the clock difference, that right there is a ~30% difference. Going from 1600MHz RAM to, say, 3200MHz or faster DDR4 will have pretty big benefits too.
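A quick back-of-the-envelope check of that claim (the ~10% IPC figure is a recollection, not a measured number):
[code]
# Combined single-thread scaling from IPC gain plus clock difference.
haswell_clock = 4.3   # GHz, the 4670K being discussed
kaby_clock = 5.0      # GHz, an overclocked 7700K
ipc_gain = 1.10       # assumed Kaby Lake vs Haswell single-thread IPC advantage

combined = ipc_gain * (kaby_clock / haswell_clock)
print(f"~{(combined - 1) * 100:.0f}% combined single-thread uplift")
# -> roughly 28-30%, before any benefit from faster RAM
[/code]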
It's a 50% difference for sure in single-thread bottlenecked situations too. Do you think it's not? Note that I never claimed hyper-threading was the answer to single-thread performance. I said games like Witcher also benefit from hyper-threading. I think I wrote it badly because I hate writing on my phone and was doing just that. Of course the faster single-thread performance also helps massively in the situations you are describing, I don't doubt it for a second. But so does hyper-threading on these 4-core CPUs in many modern games. And yes, I'm still talking about 3D gaming.
The reason I mentioned how much the 2600K boosted my fps is that I felt it was somewhat relatable: my 3570K@4.5GHz (with faster RAM) is very similar in performance to a 4670K@4.3GHz. If I got a 30% boost with a 2600K, without upgrading RAM, in most situations in Witcher, you can be damn sure that the 7700K upgrade will see a greater benefit. Not only does the single-core performance increase, but the clock frequency is also much higher and you have the benefit of much faster RAM (compared to masterotaku's specs).
[quote="mihabolil"][quote="RAGEdemon"]
I say this because I am coming:
From 6 core 12 thread Xeon x5660 @ OC to 4.4GHz (~same generation core as a 2600K)
To 4 core 8 thread 7700K @ OC 5.1GHz[/quote]
Man, isn't the X5660 (Westmere) the same generation as the i7 980X/990X (Gulftown), with the newer 2600K (Sandy Bridge) roughly 15-20% better in single-core performance?
On the topic, I am using an X5650 @ 4.4GHz paired with a single 1080 @ 2000MHz.
Playing Fallout 4 at 1080p I am getting nearly the same FPS on Low as on Ultra settings.
No doubt, this CPU is weak af.
Soon after the Ryzen release I hope to figure out what the best CPU for 3D Vision would be.[/quote]
Yeah, it's ~1 generation off. (I should maybe have clarified in my post above that "~" means "approximately".)
For what it's worth, I have done the analysis already and purchased the system in my sig below. Basically, you will need a 5.5GHz Ryzen to be on par with a 5GHz 7700K, and a 7700K can easily hit 5GHz. As much as I would like AMD to come out on top, I highly doubt Ryzen can do 5GHz, let alone 5.5GHz.
The internet seems to be going crazy for the 4 extra cores. It's strange that people just don't understand that those extra cores will never be utilised unless they are doing rendering or encoding.
We have a choice between gaming/normal household/professional workloads at ~20% increased performance, or encoding/rendering at ~70% increased performance. I chose the former.
I had been keeping a very close eye on Zen for the last year. I was so sure of the result that, after crunching the numbers, I purchased my setup last month following the Ryzen name reveal. I knew that I wouldn't even have to wait for the Ryzen release.
Intel have been optimising the same basic architecture for a long time now, and they are running out of headroom. Ryzen is new and can be optimised much further in later revisions. I sincerely hope that AMD will do a far better job of catching up to Intel in both instructions per clock AND clock speed.
Until then, and knowing that 3D Vision suffers from a severe CPU single-threaded bottlenecking bug, the 7700K is the clear choice for us.
If you want to wait, that's great; I admire your patience. I would only ask that you not buy into the core-count hype train. Keep in mind that those extra cores will be useless for the vast majority of tasks, tasks in which IPC and clock speed have much greater value.
When it comes to gaming, and especially 3D Vision gaming due to the CPU core limit bug, IPC and clock speed are king. A core count higher than 4 is, unfortunately, a distant concern, as much as I would like to think otherwise.
Even if the 3D Vision CPU core limit bug is fixed, the vast majority of games, especially AAA games, are designed for the PS4 and XBOne (and then ported to the PC). These consoles only give games access to 7 very weak cores, barely comparable to virtual cores; let's call them '3 real cores'. This means that most modern games (except a very small handful such as BF1), even over the next 5 years (until the next generation of consoles comes out), will be optimized for and limited to roughly '3 cores' (6 'virtual' cores). So getting anything above a 4-core HT CPU for gaming over the next 5 years is going to be useless.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
[quote="RAGEdemon"]Until then, and knowing that 3D Vision suffers from severe CPU single threaded bottlenecking bug, 7700K is the clear choice for us.[/quote]Heard it's easier for 7600K to hit 5GHz than the 7700K, unless you disable HT, which make sense. But 7700K have a little more cache which also help in some scenarios so it's hard to say which one is really better. Let's say both are an excellent choice [b][color="orange"][u]for gaming[/u][/color][/b]. Of course 7700K will do better in WinRAR, encoding, rendering, etc.
This year Intel will release Coffee Lake, claiming they will be 15% more powerful than Kaby Lake. I wish that to be pure single core performance increase. I would buy a 6c/12t CPU like that (if it can reach 5GHz) so hard right now... It would be more or less the 50% increase that I want, but with even more cores.
I just want to play Dragon's Dogma with constant 60fps in 3D :(.
Thanks for the responses guys; this is one of the best forums out there. I truly appreciate all the work you guys do.
[quote="RAGEdemon"][quote="coffeeonteacup"][quote="masterotaku"]Very important. I have a 4670K at 4.3GHz and I would need around +50% single core performance to be happy. Which doesn't exist yet. I don't know if DDR4 RAM at 4000MHz would improve single core performance over the 25-30% that a 5GHz 7700K would give me.
I get between 37-45fps in Novigrad at 1080p because of my CPU.[/quote]
You should see a much greater boost than 30% if you upgrade to a 7700K. I know you said single core, but games like Witcher benefit a lot from hyper-threading. Heck, I got a 30% boost when I switched from an Ivy Bridge i5 at 4.5GHz to a Sandy Bridge i7 OC'd to the same frequency. I don't drop below 60 in Novigrad, although I have a couple of settings lowered one notch: grass distance and NPC count from Ultra to High, IIRC. HairWorks off, everything else Ultra. GTX 1080 GPU too. I get little framedrops here and there (never below 50) but they're GPU related; they vanish if I drop the 1080p res or lower other settings further.[/quote]
masterotaku is correct. I believe you are mistaken, mate. Are you sure you are talking about 3D Vision and not 2D?
Go to the central Novigrad trading square. You will not have 60FPS in 3D Vision, and there is little chance that the drop in FPS will be due to the GPU. You will note that your FPS will be more like 50 on a 7700K with a GTX 1080, and your GPU will be well below 90% usage when you lower resolution, graphics settings, etc.
The game is being CPU bottlenecked due to a huge 3D Vision driver bug. Because of this, hyper-threading won't matter as much as it does in 2D with recent games.
More details here:
https://forums.geforce.com/default/topic/966422/3d-vision/3d-vision-cpu-bottelneck-gathering-information-thread-/
Unlike a GPU, reading CPU usage is tricky and needs a lot of background knowledge. Core usage and CPU usage averages alone are not any kind of indicator of how well or badly the CPU is being utilised.
Generally, I should also mention that, as the more experienced people here have noted, memory speed plays a significant role in game performance nowadays. Be very careful about which benchmarks you consume, as most of them have the GPU saturated during testing, meaning that even with a 50GHz CPU and 50GHz memory they wouldn't show any difference.
As someone with an engineering background (like many others on this forum), it's usually quite appalling to see the newbish mistakes these so-called professional reviewers and benchmarkers make.
Just recently, I was disgusted to see the majority of 'professional' review sites benchmark the new 7XXXK series of CPUs against older generations in GPU-limited scenarios while expecting FPS differences. Kyle Bennett, the owner of HardOCP, and I had a good chuckle about the stupidity of most other review sites via an email exchange, which cheered me up a little :([/quote]Excellent info, my friend. It appears that splurging on a 7700K and high-frequency DDR4 wouldn't help out as much as I would like for 3D. I'm actually okay with playing at a locked 30 FPS as long as the frametimes are even. They weren't even until I capped it with RTSS; the in-game framerate capper isn't very good in The Witcher 3.
Then again, if these dips are only associated with Novigrad square and the rest of the game plays at a locked 60, then that's a different story.
I'm assuming this is why I can't hold 60 in Mad Max either. Is there any hope at all of this driver bug being fixed?
[quote="masterotaku"]This year Intel will release Coffee Lake, claiming they will be 15% more powerful than Kaby Lake. I wish that to be pure single core performance increase. I would buy a 6c/12t CPU like that (if it can reach 5GHz) so hard right now... It would be more or less the 50% increase that I want, but with even more cores.
I just want to play Dragon's Dogma with constant 60fps in 3D :(.[/quote]Join the locked 30 fps with a 33.3 ms frametime club :)
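For reference, the frametime for a given cap is just 1000 / fps, which is where the 33.3 ms figure comes from:
[code]
# Frametime in milliseconds for common caps; useful when setting an RTSS limit.
for fps in (30, 60, 120):
    print(f"{fps} fps cap -> {1000 / fps:.2f} ms per frame")
# 30 fps -> 33.33 ms, 60 fps -> 16.67 ms, 120 fps -> 8.33 ms
[/code]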
[quote="masterotaku"]This year Intel will release Coffee Lake, claiming they will be 15% more powerful than Kaby Lake. I wish that to be pure single core performance increase. I would buy a 6c/12t CPU like that (if it can reach 5GHz) so hard right now... It would be more or less the 50% increase that I want, but with even more cores.
I just want to play Dragon's Dogma with constant 60fps in 3D :(.[/quote]
As far as I am aware, mate, both the upcoming Coffee Lake and Cannon Lake will be mobile and tablet parts only, not high-performance desktop parts. They will bring the same ~10% performance increase as Skylake to Kaby Lake, i.e. 0% on IPC but a few hundred MHz on the clock, and there will not be any high-performance desktop parts such as a 6600K/6700K or 7600K/7700K.
More info here:
http://wccftech.com/intel-14nm-coffee-lake-10nm-cannonlake-2018/
https://www.pcgamesn.com/intel/intel-14nm-coffee-lake-release-date
For the foreseeable future, it looks like the best desktop performance chip for low-thread-count gaming will be an OC'd 7700K. Maybe with the advent of Ryzen, Intel will reconsider its future plans.
If you fellas want to wait, that's great. Patience is a virtue! But it seems very unlikely that we will get any gaming-grade chips better than Kaby Lake for quite some time, and if one does come out, waiting might just have been a waste of time, since during that time you could have been playing most games at a locked 60FPS.
Of course, none of this would even be an issue if nVidia fixed their driver bug :)
@tygeezy: With a 7700K at 5.1GHz and GTX 1080 SLI, I can confirm that it is a constant 60fps except for a very few areas such as central Novigrad and, to a greater extent, the city areas in the Blood and Wine expansion.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
[quote="RAGEdemon"]
@tygeezy: With a 7700K at 5GHz and GTX 1080 SLi, I can confirm that it is 60fps constant except from a very few areas such as central novigrad and to some extent more so in the blood and wine expansion in the city areas.
[/quote]
I am considering the same CPU/mobo as yours; however, I wonder if you feel any drawback from PCIe x8 in SLI, and what bridge you are using (nVidia HB or just a standard ribbon).
Ryzen 1700X 3.9GHz | Asrock X370 Taichi | 16GB G.Skill
GTX 1080 Ti SLI | 850W EVGA P2 | Win7x64
Asus VG278HR | Panasonic TX-58EX750B 4K Active 3D
There is no real x8 PCIe 3.0 limitation, and for the foreseeable future, mainstream CPUs will only offer x8 lanes per card for SLI, because the northbridge/PCIe controller is now integrated into the CPU itself. Going forward it will likely always be 16 lanes total from the CPU to the graphics slots, so each SLI card will only get x8.
Before 2011, there was a separate northbridge on the chipset which allowed x16/x16 @ PCIe 2.0.
[img]https://www.extremetech.com/wp-content/uploads/2017/01/Z270-Diagram.png[/img]
In practice there is no difference at all. I have PCIe bandwidth usage (%) displayed as one of the stats on my custom LCD, which shows that the PCIe bandwidth at x8 on each card never even reaches 20% utilisation with both GPUs maxed at 99% usage. I play at 1280x800 DSR'd to 2560x1600 in 3D Vision, which is practically equivalent to playing at 4K in 2D.
[img]https://s21.postimg.org/cbhzk3syf/IMG_20160915_012529.jpg[/img]
https://forums.geforce.com/default/topic/832496/3d-vision/3d-vision-cpu-core-bottleneck/post/4975938/#4975938
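For anyone wanting to sample something similar to that PCIe readout without a custom LCD, a rough sketch using NVML is below; it assumes the pynvml package and compares the measured traffic against the theoretical per-direction ceiling of a Gen3 x8 link.
[code]
# Rough sketch: sample per-GPU PCIe traffic via NVML and compare it to the
# theoretical ceiling of a PCIe 3.0 x8 link (~7.88 GB/s per direction).
import pynvml

GEN3_X8_KBS = 8e9 * 8 * (128 / 130) / 8 / 1000   # 8 GT/s * 8 lanes, 128b/130b, in KB/s

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    tx = pynvml.nvmlDeviceGetPcieThroughput(h, pynvml.NVML_PCIE_UTIL_TX_BYTES)  # KB/s
    rx = pynvml.nvmlDeviceGetPcieThroughput(h, pynvml.NVML_PCIE_UTIL_RX_BYTES)  # KB/s
    print(f"GPU {i}: TX {100 * tx / GEN3_X8_KBS:.1f}%  RX {100 * rx / GEN3_X8_KBS:.1f}% of a Gen3 x8 link")
pynvml.nvmlShutdown()
[/code]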
I have used a double ribbon bridge in the past, but currently use a proper HB bridge.
[quote="RAGEdemon"][quote="masterotaku"]
Of course, none of this would even be an issue if nVidia fixed their driver bug :)
@tygeezy: With a 7700K at 5.1GHz and GTX 1080 SLI, I can confirm that it is a constant 60fps except for a very few areas such as central Novigrad and, to a greater extent, the city areas in the Blood and Wine expansion.
[/quote]If you turn off one of your 1080s, how does it perform then? I'm down for a new platform upgrade as this i7 860 is long in the tooth, but I'm more hesitant to add another 1070: more power and heat, and not every game is compatible. Also, my monitor is 1080p, so I would only really stand to gain in 3D... where it looks like the CPU upgrade gives me the biggest boost. Two 1070s for 1080p 2D is ludicrous.
Also, how do you know that it's a driver bug that is causing the CPU issues in 3D Vision? Is this something that has been brought up with the software team at NVIDIA?
Also, do you have any frame pacing issues in SLI? That was a big thing in years past, but I heard that's improved. You seem like the right person to ask since you have an SLI setup and really know your hardware and software.
Based on my testing I didn't notice any difference in using 1 or 2 SLI bridges (ribbons).
This is ofc on 980Ti... Is there a reason to use 2 SLI bridges if you are not going 3x or 4x SLI?
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
@helifax
There was speculation that there was no difference between the £40 HB bridge and two £4 ribbon cables. This seemed to be confirmed because the SLI bridge warning message disappeared if two ribbon bridges were installed on just two cards side by side. Testing has shown otherwise, however: the HB bridge does apparently increase performance, whereas a dual ribbon bridge simply acts as a single ribbon bridge, even though it makes the warning message go away.
[img]http://images.techhive.com/images/article/2016/06/bridge_comarison_rainbow_six_siege_4k-100667993-large.png[/img]
More here:
http://www.pcworld.com/article/3087524/hardware/tested-the-payoff-in-buying-nvidias-40-sli-hb-bridge.html
@tygeezy
Those are interesting questions. I'll take them one at a time.
[b][color="green"]1.[/color][/b]
It depends on what resolution you play at. I play at DSR 2560x1600 on a 1280x800 projector. When 3D Vision is enabled, it is literally equivalent to processing double the number of pixels.
So quick math:
2560x1600 = 4,096,000 pixels
With 3D Vision, we can double this:
4,096,000 pixels x2 = [color="green"]8,192,000 pixels[/color]
4K Ultra HD is a resolution of 3840 pixels × 2160 lines = 8,294,400 pixels.
This means that me playing on my 'lowly' 1280x800 projector in 3D Vision is the equivalent of playing in 2D at 4K, taking DSR into account.
For this, I do indeed need 1080 SLI: most modern games such as The Witcher 3, Deus Ex: Mankind Divided, Rise of the Tomb Raider, etc. max out both cards at 99%, but do give me a mostly constant 60FPS.
For a single card, I would get 30fps.
The difference is that you probably will not be using DSR, but there will be other heavy factors, such as whether you use anti-aliasing and at what level.
If no antialiasing is used then you will be using 1920x1080 = 2,073,600 pixels.
In 3D vision, this translates to 2 x 2,073,600 = 4,147,200 pixels.
This is equivalent to a resolution of 2560x1600.
So, in theory, your single 1070 will give you ~50-60fps, not taking into account whether you want anti-aliasing enabled. With a decent form of anti-aliasing such as MSAA, you are looking at the equivalent of processing 5 to 6 million pixels, which is roughly a resolution of 3440x1440.
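The same doubling arithmetic in a few lines, for anyone who wants to plug in their own resolution (a sketch only; it ignores anti-aliasing cost and assumes a straight 2x for stereo):
[code]
# 3D Vision renders every frame twice, so the per-frame pixel load is width * height * 2.
def pixels(width, height, stereo=False):
    return width * height * (2 if stereo else 1)

print(pixels(2560, 1600, stereo=True))   # 8,192,000 - DSR 2560x1600 in 3D Vision
print(pixels(3840, 2160))                # 8,294,400 - 4K in 2D, an almost identical load
print(pixels(1920, 1080, stereo=True))   # 4,147,200 - 1080p in 3D Vision
print(pixels(2560, 1600))                # 4,096,000 - roughly 2560x1600 in 2D
[/code]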
[b][color="green"]Ok, enough calculations. What does this actually mean?[/color][/b]
Well, it means that all you have to do is go onto various GTX 1070 review websites and look at the benchmarking resolutions of 2560x1600 (2560x1440 as an approximate) and 3440x1440:
[img]http://techgage.com/wp-content/uploads/2016/06/NVIDIA-GeForce-GTX-1070-The-Witcher-3-Wild-Hunt-2560x1440.png[/img]
and
[img]http://techgage.com/wp-content/uploads/2016/06/NVIDIA-GeForce-GTX-1070-The-Witcher-3-Wild-Hunt-3440x1440.png[/img]
More here:
http://techgage.com/article/nvidia-geforce-gtx-1070-review-a-look-at-1440p-4k-ultra-wide-gaming/4/
So bottom line:
With a 7700K at 5GHz, even at 1080p 3D Vision and no DSR, you will most definitely need 2x 1070 in SLi to try and maintain 60FPS in 3D vision. Otherwise, you will be limited to 45-50 average fps and even worse minimums.
Even in 2D, you can enable 4X DSR with SLi and have amazing visual quality while you game, so it's not quite ludicrous. Caution though, only ever use 4x DSR and nothing below that. Also set the DSR blurring to 0%.
[b][color="green"]2.[/color][/b]
Regarding the 3D Vision bug, check out this thread:
https://forums.geforce.com/default/topic/966422/3d-vision/3d-vision-cpu-bottelneck-gathering-information-thread-/
Basically, it has been confirmed as a bug by nVidia and they are working on a fix for us.
[b][color="green"]3.[/color][/b]
Frame pacing issues: not noticeable by me any more. They implemented a hardware fix for this back in the 6XX days and it has been a huge improvement ever since. There are of course frame pacing issues if the frame rate drops below 60FPS, but that is a problem even on a single card. This is why a constant 60FPS is so important, especially as there is no Adaptive VSync, G-Sync or FreeSync in 3D Vision. There is an SLI setting called 'VSync smooth AFR behaviour' which apparently helps frame pacing tremendously, but unfortunately it also cuts the FPS in half, if you ever need it.
Hard numbers via FCAT graph below shows smooth sailing:
[img]https://www.guru3d.com/index.php?ct=articles&action=file&id=22415[/img]
https://www.guru3d.com/articles_pages/geforce_gtx_1080_2_way_sli_review,10.html
[b][color="green"]4.[/color][/b]
I see a lot of people complain about SLI compatibility. I have seldom encountered problems, as the vast majority of games have scaled great with SLI, and especially in 3D Vision. I think the complainers 1. expect SLI to work great on release, and 2. don't want to "mess around" with NVIDIA Inspector. I usually get to games 6 months or more after release, at which point a good SLI flag has likely been found, patches have improved performance, fixed bugs and added content, and, most importantly, a 3D Vision fix has been released. At this point, I don't have nearly as many "problems" as the anti-SLI crowd claim. Of course, there will always be engines which don't support SLI, but with the advent of DirectX 12, which supports multi-GPU at a fundamental level, it should be much less of an issue going forward.
An experienced developer and long-time 3D Vision fan did extensive 3D Vision SLI scaling tests a few years ago, and his conclusion was that 3D Vision SLI scaling was even better than 2D scaling. Scaling has only gotten better since; I usually see both my cards maxed at 99% unless I reach a CPU-limited scenario (central Novigrad square, for example).
[img]http://www.volnapc.com/uploads/3/0/9/1/30918989/8262424_orig.png[/img]
His insightful article is here:
http://www.volnapc.com/all-posts/3d-and-sli-performance-tested
Thanks RAGEdemon!
Last I checked, they said the new HB bridge is only for the new 1000 series?! Does it affect the 900 series?
Sure, I can buy a new bridge, that is not the problem, but I think it is not needed :( I can't find anything on what the new bridge does for the 900 series :( Hence my question.
A good video on this topic:
https://www.youtube.com/watch?time_continue=10&v=mWcsaociTjE
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
I think you will find that the difference is due to the IPC (single threaded performance) improvements with the later generations rather than multi-threaded performance. I say that because I too have extensively benchmarked TW3 both for the CPU usage bug problem and my upgrade path.
I say this because I am coming:
From 6 core 12 thread Xeon x5660 @ OC to 4.4GHz (~same generation core as a 2600K)
To 4 core 8 thread 7700K @ OC 5.1GHz
The difference at novigrad square (arguably the most intensive location within the game), was going from 40fps to 58fps (45% improvement) with LESS cores.
I should also point out that I went from DDR3 1600MHz (equivalent to DDR4 2333MHz) to DDR4 3600MHz, which would have also impacted performance.
For completeness, you can go to this (or any other heavily CPU dependent location where the GPU is <90% usage and FPS is < 60), alt-tab out and disable the 4 virtual cores 1,3,5,7 from the task manager affinity setting. You will note that even though the game is now only using the 4 real cores 0,2,4,8, the game performance is still about the same.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Man, isn't X5660 (Westmere) same generation as i7 980X/990X (Gulftown) and the newer 2600K (Sandy Bridge) is roughly 15-20% better on single core performance?
On the topic, I am using X5650 @ 4.4GHz paired with single 1080 @ 2000MHz.
Playing Fallout 4 on 1080p I am getting nearly the same FPS on Low as on Ultra settings.
No doubt, this cpu is weak af.
Soon after the Rysen release I hope to figure what would be the best CPU for 3DVision.
Ryzen 1700X 3.9GHz | Asrock X370 Taichi | 16GB G.Skill
GTX 1080 Ti SLI | 850W EVGA P2 | Win7x64
Asus VG278HR | Panasonic TX-58EX750B 4K Active 3D
Still, we are talking about going from 4670k@4.3ghz to 7700k@5ghz. I dont remember exactly how much faster kaby lake single thread performance is compared to haswell, its around 10 percent iirc. When you add the clock difference, that right there is a 30% difference. Going from 1600mhz ram to say 3200mhz or faster ddr4 will have pretty big benefits too.
Its a 50% difference for sure in single thread bottlenecking situations too. Do you think its not? Note that i never claimed hyper threading was the answer to single thread performance. I said games like witcher also benefit from hypertreading. I think i wrote it badly because i hate writing on my phone and was doing just that. Of course the faster single thread performance helps massively too on the situations you are describing. I dont doubt it for a second. But so does hyperthreading on these 4 core cpus with many modern games. And yes im still talking about 3d gaming.
The reason i told how much 2600k boosted my fps because i felt it was somwhat relatable. my 3570k@4.5ghz(faster ram) is very similar in performance compared 4670k@4.3ghz. If i got 30% boost with a 2600k without upgrading ram in most situations in the witcher, you can be damn sure that the 7700k upgrade will see greater benefit. Not only does the single core performance increase, but the clock frequency is also much higher and you have the benefit of a much faster ram(compared to masterotaku's specs).
Yeah, it's ~1 generation. (I should maybe have clarified in my post above that "~" means "Approximation" in technical fields).
For what it's worth, I have done the analysis already, and purchased the system in my sig below. Basically, you will need a 5.5GHz Ryzen to be on par with a 5GHz 7700K. 7700K can easily hit 5GHz. As much as I would like AMD to come out on top, I highly doubt RyZen can do 5GHz; let alone 5.5Ghz
The internet seems to be going crazy for the 4 extra cores. It's strange that people just don't understand that those extra cores will never be utilised unless they are doing rendering or encoding.
We have a choice between gaming/normal home/professional workloads at ~20% increased performance, or encoding/rendering at ~70% increased performance. I chose the former.
I had been keeping a very close eye on Zen for the last year. I was so sure of the result that, after crunching the numbers, I purchased my setup last month following the Ryzen name reveal. I knew that I wouldn't even have to wait for the Ryzen release.
Intel have been optimising the same basic architecture for a long time now, and they are running out of headroom. Ryzen is new and can be optimised much further in later revisions. I sincerely hope that they will do a far better job of catching up to Intel in both instructions per clock AND clock speed.
Until then, and knowing that 3D Vision suffers from severe CPU single threaded bottlenecking bug, 7700K is the clear choice for us.
If you want to wait, that's great; I admire your patience. I would only ask that you not buy into the core-count hype train - keep in mind that those extra cores will be useless for the vast majority of tasks - tasks in which IPC and clock speed have much greater value.
When it comes to gaming, and especially 3D Vision gaming due to the CPU core limit bug, IPC and clock speed are king. A core count higher than 4 is, unfortunately, a distant concern, as much as I would like to think otherwise.
Even if the 3D Vision CPU core limit bug is fixed, the vast majority of games, especially AAA games, are designed for the PS4 and XBOne (and then ported to the PC). On those consoles, games only have access to 7 very weak cores, each barely comparable to a virtual core - call it '3 real cores' (6 'virtual' cores). This means that nearly all modern games (except a very small handful such as BF1), even in the next 5 years (until the next generation of consoles comes out), will be optimised for and limited to roughly that many cores. So getting anything above a 4-core HT CPU for gaming over the next 5 years is going to be useless.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
3D Vision must live! NVIDIA, don't let us down!
I just want to play Dragon's Dogma with constant 60fps in 3D :(.
CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: MSI GeForce RTX 2080Ti Gaming X Trio
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com
Excellent info, my friend. It appears that splurging for a 7700K and high-frequency DDR4 wouldn't help out as much as I would like for 3D. I'm actually okay with playing at a locked 30 FPS as long as the frametimes are even. They weren't even until I capped it with RTSS; the in-game framerate capper isn't very good for The Witcher 3.
Then again, if these dips are only associated with Novigrad square and the rest of the game plays at a locked 60, then that's a different story.
I'm assuming this is why I can't hold 60 in Mad Max either. Is there any hope at all of this driver bug being fixed?
As far as I am aware, mate, both the upcoming Coffee Lake and Cannon Lake will be mobile and tablet parts only, not high-performance desktop parts. They will bring the same ~10% performance increase as Skylake to Kaby Lake, i.e. 0% in IPC but a few hundred MHz on the clock - but there will not be any high-performance desktop parts such as a 6600K/6700K or 7600K/7700K.
More info here:
http://wccftech.com/intel-14nm-coffee-lake-10nm-cannonlake-2018/
https://www.pcgamesn.com/intel/intel-14nm-coffee-lake-release-date
For the foreseeable future, it looks like the best desktop performance chip for low thread count gaming will be an OC 7700k. Maybe with the advent of Ryzen, Intel will reconsider its future plans.
If you fellas want to wait, that's great. Patience is a virtue! But it seems very unlikely that we will get any gaming-grade chips better than Kaby Lake for quite some time, and if one does come out, waiting might just have been a waste of time, as during that time you could have been playing most games at a locked 60FPS.
Of course, none of this would even be an issue if nVidia fixed their driver bug :)
@tygeezy: With a 7700K at 5.1GHz and GTX 1080 SLI, I can confirm that it is a constant 60fps except in a very few areas such as central Novigrad and, even more so, the city areas in the Blood and Wine expansion.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
I am considering the same CPU/mobo as yours; however, I wonder if you feel any drawback from the PCIe x8 in SLI, and what bridge you are using (nVidia HB or just a standard ribbon).
Ryzen 1700X 3.9GHz | Asrock X370 Taichi | 16GB G.Skill
GTX 1080 Ti SLI | 850W EVGA P2 | Win7x64
Asus VG278HR | Panasonic TX-58EX750B 4K Active 3D
Before 2011, there was a separate northbridge on the chipset, which allowed x16/x16 at PCIe 2.0.
There is no difference at all. I have % PCIe bandwidth usage displayed as one of my stats on my custom LCD, which shows that the PCIe BW at x8 on each card is never even higher than 20% utilisation with both GPUs maxed to 99% usage. I play at 720p DSR to 1600p in 3D Vision, which is exactly equivalent to playing at 4K in 2D.
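To put that ~20% reading into rough numbers (PCIe 3.0 carries roughly 0.985 GB/s per lane after encoding overhead, so this is only a ballpark):
[code]
# Rough PCIe headroom check for an x8 link per card on PCIe 3.0.
lanes_per_card = 8
per_lane_gbps  = 0.985                      # GB/s per lane after 128b/130b encoding
x8_bandwidth   = lanes_per_card * per_lane_gbps   # ~7.9 GB/s per card

observed_utilisation = 0.20                 # the ~20% peak quoted above
used = x8_bandwidth * observed_utilisation  # ~1.6 GB/s actually consumed

print(f"x8 link: {x8_bandwidth:.1f} GB/s, peak observed use: {used:.1f} GB/s")
[/code]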
https://forums.geforce.com/default/topic/832496/3d-vision/3d-vision-cpu-core-bottleneck/post/4975938/#4975938
I have used a double ribbon bridge in the past, but currently use a proper HB bridge.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
This is ofc on 980Ti... Is there a reason to use 2 SLI bridges if you are not going 3x or 4x SLI?
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)
There was speculation that there was no difference between the £40 HB bridge and 2 ribbon cables at around £4. That seemed to be confirmed, because the SLI bridge warning message disappears if 2 ribbon bridges are installed across just 2 cards sitting side by side. Testing has shown otherwise, however: the HB bridge does apparently increase performance, whereas a dual ribbon bridge simply acts as a single ribbon bridge, even though it makes the warning message go away.
More here:
http://www.pcworld.com/article/3087524/hardware/tested-the-payoff-in-buying-nvidias-40-sli-hb-bridge.html
@tygeezy
Those are interesting questions. I'll take them one at a time.
1.
It depends on what resolution you play at. I play at DSR 2560x1600 on a 1280x800 projector. When 3D Vision is enabled, it is literally equivalent to processing double the number of pixels.
So quick math:
2560x1600 = 4,096,000 pixels
With 3D Vision, we can double this:
4,096,000 pixels x2 = 8,192,000 pixels
4K Ultra HD is a resolution of 3840 pixels × 2160 lines = 8,294,400 pixels.
This means that me playing on my 'lowly' 1280x800 projector in 3D Vision is the equivalent of playing in 2D at 4K, taking DSR into account.
For this, I do indeed need 1080 SLI - most modern games such as The Witcher 3, Deus Ex: Mankind Divided, Rise of the Tomb Raider, etc. max out both cards at 99%, but do give me a mostly constant 60FPS.
For a single card, I would get 30fps.
The difference is that you probably will not be doing DSR, but there will be other heavy factors, such as whether you use anti-aliasing and at what level.
If no anti-aliasing is used, then you will be rendering 1920x1080 = 2,073,600 pixels.
In 3D Vision, this translates to 2 x 2,073,600 = 4,147,200 pixels.
This is equivalent to a resolution of 2560x1600.
So, in theory, your single 1070 will give you ~50-60fps, not taking into account whether you want anti-aliasing enabled. With a decent form of anti-aliasing such as MSAA, you are looking at the equivalent of processing 5,000,000 to 6,000,000 pixels, which is roughly equivalent to a resolution of 3440x1440.
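The same arithmetic in one small snippet, purely as a recap of the numbers above:
[code]
# 3D Vision renders every frame twice, so the per-frame pixel count doubles.
def pixels(width, height):
    return width * height

print(2 * pixels(2560, 1600))   # 8,192,000 -> DSR 1600p in 3D Vision
print(pixels(3840, 2160))       # 8,294,400 -> 4K UHD in 2D (about the same load)

print(2 * pixels(1920, 1080))   # 4,147,200 -> native 1080p in 3D Vision
print(pixels(2560, 1600))       # 4,096,000 -> 2560x1600 in 2D (about the same load)
[/code]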
Ok, enough calculations. What does this actually mean?
Well, it means that all you have to do is go onto various GTX 1070 review websites and look at the benchmarks for the resolutions 2560x1600 (2560x1440 is a close approximation) and 3440x1440.
More here:
http://techgage.com/article/nvidia-geforce-gtx-1070-review-a-look-at-1440p-4k-ultra-wide-gaming/4/
So bottom line:
With a 7700K at 5GHz, even at 1080p 3D Vision and no DSR, you will most definitely need 2x 1070 in SLi to try and maintain 60FPS in 3D vision. Otherwise, you will be limited to 45-50 average fps and even worse minimums.
Even in 2D, you can enable 4x DSR with SLI and have amazing visual quality while you game, so an SLI setup isn't as ludicrous as it might sound. One caution though: only ever use 4x DSR and nothing below it - 4x is a clean 2x2 integer scale, so it avoids interpolation blur - and set the DSR smoothness to 0%.
2.
Regarding the 3D Vision bug, check out this thread:
https://forums.geforce.com/default/topic/966422/3d-vision/3d-vision-cpu-bottelneck-gathering-information-thread-/
Basically, it has been confirmed as a bug by nVidia and they are working on a fix for us.
3.
Frame pacing issues - not noticeable by me any more. They implemented a hardware fix for this back in the 6XX days and it has been a huge improvement ever since. There are, of course, frame pacing issues if the frame rate drops below 60FPS, but that is a problem even on a single card. This is why a constant 60FPS is so important, especially as there is no Adaptive VSync, G-Sync or FreeSync in 3D Vision. There is an SLI setting called VSync smooth AFR behaviour, which apparently helps frame pacing tremendously but unfortunately also cuts the FPS in half, if you ever need it.
Hard numbers via the FCAT graphs below show smooth sailing:
https://www.guru3d.com/articles_pages/geforce_gtx_1080_2_way_sli_review,10.html
4.
I see a lot of people complain about SLI compatibility. I have seldom encountered problems, as the vast majority of games have scaled great with SLI, and especially with 3D Vision. I think the complainers 1. expect SLI to work great on release, and 2. don't want to "mess around" with NVInspector. I usually get to games 6 months or more after release, at which point a good SLI flag has likely been found, patches have increased performance and fixed bugs / added content, and most importantly, a 3D Vision fix has been released. At that point, I don't have nearly as many "problems" as the anti-SLI crowd claim. Of course, there will always be engines which don't support SLI, but with the advent of DirectX 12, which supports multi-GPU at a fundamental level, it should be much less of an issue going forward.
An experienced developer and long-time 3D Vision fan did extensive 3D Vision SLI scaling tests a few years ago, and his conclusion was that 3D Vision SLI scaling was even better than 2D scaling. Scaling has only got better since, as I usually see both my cards maxed at 99% unless I reach a CPU-limited scenario (central Novigrad square, for example).
His insightful article is here:
http://www.volnapc.com/all-posts/3d-and-sli-performance-tested
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Last I checked, they said the new HB bridge is only for the new x1000 generation?! Does it affect the x900 generation?
Sure, I can buy a new bridge, that is not the problem, but I think it is not needed :( I can't find anything on what the new bridge does for the x900 series :( Hence my question.
A good video on this topic:
https://www.youtube.com/watch?time_continue=10&v=mWcsaociTjE
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com
(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)