Upgrade CPU: which one is the best cost/benefit for 3D Vision?
[quote="J0hnnieW4ker"] Anyway, end up buying an i5 8600K :D[/quote] 8400 vs 8600K, buying the latter - you will not regret your decision. In life, one tends to regret the things they did not do, rather than the things they did but for some reason didn't work out. Generally speaking: 1. Re: the bottleneck will always be the CPU because if there is ever a GPU bottleneck, one can always reduce resolution to compensate. 2. Re: your FPS preferences - always aim for 60 fps - never anything under 60fps, even by one frame. This is because anything below VSync (120Hz display VSync = 60 fps), produces a stuttery mess because the GPU is unable to send well timed frames. Frame Times are very important, and is the whole basis of GSync and FreeSync. for gaming, even 30FPS might be deemed acceptable, if frame times did not vary. If you know the game is mostly going to be a certain frame rate, set max frames to be below that frame rate using NVprofile Inspector - this will give you GSync like smoothness even at low frame rates (down to the set frame rate in NVprofile inspector). 3. I only have 1 game that is on your list: The Witcher 3's most demanding area is novigrad central square. With an Xeon x5660 @ 4.4GHz with 1600MHz DDR3 memory (somewhat similar to your i7 860 setup), I would get ~40FPS here. With 5GHz 7700K with 3600MHz DDR4 memory, I can just about manage 60FPS.
J0hnnieW4ker said:
Anyway, end up buying an i5 8600K :D


8400 vs 8600K, buying the latter - you will not regret your decision.

In life, one tends to regret the things they did not do, rather than the things they did but for some reason didn't work out.

Generally speaking:
1. Re: the bottleneck - it will always be the CPU, because if there is ever a GPU bottleneck, one can always reduce the resolution to compensate.

2. Re: your FPS preferences - always aim for 60 fps, never anything under 60 fps, even by one frame. Anything below VSync (120Hz display VSync = 60 fps per eye) produces a stuttery mess because the GPU is unable to deliver well-timed frames. Frame times are very important - they are the whole basis of G-Sync and FreeSync. For gaming, even 30 fps might be deemed acceptable if the frame times did not vary.

If you know the game is mostly going to run at a certain frame rate, set the max frame rate to just below it using NVIDIA Profile Inspector - this will give you G-Sync-like smoothness even at low frame rates (down to the frame rate set in the profile).
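
As a minimal sketch of the arithmetic behind this (illustrative only - the ~45 fps game average below is a hypothetical example, not a figure from this thread):

```python
# Frame-time arithmetic for a 120Hz 3D display (3D Vision alternates eyes,
# so VSync works out to 60 fps per eye) and a hypothetical game that mostly
# runs around 45 fps.
refresh_hz = 120
vsync_fps = refresh_hz // 2              # 60 fps per eye
vsync_budget_ms = 1000 / vsync_fps       # ~16.7 ms per frame to hold VSync

typical_game_fps = 45                    # hypothetical average for the game
cap_fps = typical_game_fps - 2           # cap slightly below the typical rate
cap_frame_time_ms = 1000 / cap_fps       # even, repeatable frame pacing at the cap

print(f"VSync budget: {vsync_budget_ms:.1f} ms/frame at {vsync_fps} fps")
print(f"Capped at {cap_fps} fps -> steady {cap_frame_time_ms:.2f} ms frame times")
```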

3. I only have 1 game that is on your list: The Witcher 3's most demanding area is Novigrad's central square. With a Xeon X5660 @ 4.4GHz with 1600MHz DDR3 memory (somewhat similar to your i7 860 setup), I would get ~40 FPS here. With a 5GHz 7700K with 3600MHz DDR4 memory, I can just about manage 60 FPS.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#31
Posted 11/24/2017 09:54 PM   
[quote="RAGEdemon"][quote="J0hnnieW4ker"] Anyway, end up buying an i5 8600K :D[/quote] 8400 vs 8600K, buying the latter - you will not regret your decision. In life, one tends to regret the things they did not do, rather than the things they did but for some reason didn't work out. [/quote] Yep! That's what I thought! Even if I pay a little bit more now, I won't regret it in the future! Its been too long already I've been waiting for this upgrade, so might as well get the processor I was aiming for all along. I was going to wait until next year, but this Rift issues I am having and the black Friday triggered the purchase :D If the Rift does not work on the MSI Z370 SLI Plus motherbord + Inateck, then I know 100% there is something wrong with it. Just need to find someone around here that also have a Rift to help me to test the accessories and hopefully find which one is the defective causing the problems. Thank you for helping and the recommendations!
RAGEdemon said:
J0hnnieW4ker said:
Anyway, end up buying an i5 8600K :D


8400 vs 8600K, buying the latter - you will not regret your decision.

In life, one tends to regret the things they did not do, rather than the things they did but for some reason didn't work out.



Yep! That's what I thought! Even if I pay a little bit more now, I won't regret it in the future!
It's been too long already that I've been waiting for this upgrade, so I might as well get the processor I was aiming for all along. I was going to wait until next year, but these Rift issues I am having and Black Friday triggered the purchase :D
If the Rift does not work on the MSI Z370 SLI Plus motherboard + Inateck, then I know 100% there is something wrong with it. I just need to find someone around here who also has a Rift to help me test the accessories and hopefully find which one is defective and causing the problems.

Thank you for helping and the recommendations!

EVGA GTX 1070 FTW
Motherboard MSI Z370 SLI PLUS
Processor i5-8600K @ 4.2 | Cooler SilverStone AR02
Corsair Vengeance 8GB 3000Mhz | Windows 10 Pro
SSD 240gb Kingston UV400 | 2x HDs 1TB RAID0 | 2x HD 2TB RAID1
TV LG Cinema 3D 49lb6200 | ACER EDID override | Oculus Rift CV1
Steam: http://steamcommunity.com/id/J0hnnieW4lker
Screenshots: http://phereo.com/583b3a2f8884282d5d000007

#32
Posted 11/26/2017 12:53 PM   
Always glad to help mate.

Re: the Inateck, I would only use it for other accessories. Your mobo ought to come with other USB 3.0/3.1 ports. Make sure to let Windows 10 install its own drivers for them, and use them for all 3 USB connections: the 2x sensors and the 1x headset. This will remove the Inateck card from the equation.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#33
Posted 11/26/2017 11:16 PM   
I was also doing some research to see if it was Black Friday time to do an upgrade. Didn't pull the trigger, because nothing seemed to be a particularly good deal, and my current gear is OK for now. My i7-6700K/GTX 980 machine is perfectly good, but I can't turn up SuperSampling as much as I'd like.


Here is the spot I looked at in most depth, AnandTech Bench. This is i5-8400 vs. 7700K.

https://www.anandtech.com/bench/product/1826?vs=2024

There are two ways to read the results,
1) With a 7700K, it's possible to do better, because you can OC to a level that is better.
2) They are about the same, certainly within 10%, even with OC.

In particular, the single thread performance is notably better on 7700K. Not by a stunning amount, but enough that it might matter for 3D Vision. I'd expect the 8700K to lower this gap, maybe even close it. (They don't have 8700K in their data yet).

The OC on 8xxx is better than I expected, and there is that nebulous cache and maybe RAM speed factor that might explain the differences.

The 8xxx series chips are a lot better than I expected. If I were doing a machine, today, I would go with the 8700K.


It's looking more and more like the real gating factor, the bottleneck, in current single thread performance is actually not clock speed, it's either cache or RAM speed.

Here's one for the i5-8400 2.8GHz vs. i5-6600K 3.5GHz. You'd think that the clock speed would crush the tests, but in fact it's the other way around: the 8400 sweeps the map, including single thread.

https://www.anandtech.com/bench/product/2024?vs=1544


We have been attuned to the idea that RAM speed didn't matter, but as chips got faster it looks like the bottleneck has moved to a new location. Maybe. Same with cache. In my particular experience I think larger cache has an outsized impact on single thread performance.

An interesting thing I had not run across before is the idea of the 'uncore', which is all the parts of the CPU package that are not the cores themselves - so including the cache and the memory controller.

Anandtech have the data for uncore overclocking which I think is really interesting.

For this example, we have a stock 4.0GHz i7-6700K vs. an OC to 4.8GHz:
https://www.anandtech.com/bench/product/1542?vs=1550

For single-threaded performance, it's a nice bump. Hard to say exactly how that translates to our problem without detailed testing, but this is our normal approach for better performance. Single-threaded Cinebench goes from 183 to 206. Whether that really amounts to 10% or so in practice is not clear, but we can agree it's better. Gaming is a wash - nothing notable in 2D, but certainly not worse.


Here is the difference in an i5-8400 vs. i7-8700K with uncore OC to 4.4GHz.

https://www.anandtech.com/bench/product/2024?vs=2047

Again, Cinebench single-threaded goes from 167 to 200. Even taking into account that how this maps to 3D Vision is unclear, it's a big bump. The score is still less than the OC'd i7-6700K above, but only a few % different. Gaming is mostly a wash, some better some worse. 1080p is actually noticeably better with the 8400; 4K definitely leans toward the 8700K.


All in all, it's not clear. You win some, you lose some, and it's not clear how Cinebench single-threaded maps to our problem. But if single-thread is our most common bottleneck, it will have an outsized influence - we might even see a 10% bump in Cinebench give us 50% better frame rates.
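
For reference, the percentage gains behind those Cinebench numbers (a quick sketch only; how they translate to 3D Vision frame rates is, as said, an open question):

```python
# Quick check of the single-threaded Cinebench gains quoted above.
def pct_gain(before: float, after: float) -> float:
    """Percentage improvement from 'before' to 'after'."""
    return (after / before - 1) * 100

# i7-6700K stock 4.0GHz -> 4.8GHz OC (with uncore OC)
print(f"6700K OC:      {pct_gain(183, 206):.1f}%")   # ~12.6%
# i5-8400 stock -> i7-8700K with 4.4GHz uncore OC
print(f"8400 -> 8700K: {pct_gain(167, 200):.1f}%")   # ~19.8%
```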

For single thread only, 7700K with OC seems to be the clear single threaded winner with the limited data we have. But, the OC is better than expected with 8700K, and that extra cache and faster RAM and ability to OC uncore makes it most likely a dead heat.

So....... long story short, it's not very likely to make much difference either way. Given that there is not much price difference, it seems pretty clear that going with the latest is not a bad choice, and it's probably not worth the hassle to try to eke out a few % with 7xxx gen. YMMV....

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#34
Posted 11/27/2017 03:34 AM   
bo3b, I agree with your solid conclusion that "upgrading" from 6700K would be a complete waste of your time and resources mate. It would be more of a side-grade.

As you said, if building a system right now it would be best to buy the 8700K over the 7700K.

IMHO, you won't need to upgrade your system for unfixed 3D Vision if you are running a 6700k, for the better part of the next decade.

When games normalise to having much better multi-core support, i.e. double the number of cores gives about 1.5x to 2x the performance, then I (and imho you) ought to start thinking about upgrading - not before then. Anything less than that is IMO a waste of time. This might take a few years.

There are rumours that next year will see 8 core CPUs using the same mesh as 7700k/8700K. 5 years from now, I see myself upgrading if there's a good 5GHz+ OC CPU with >7700K IPC @ 12 cores.

BTW, regarding memory and cache speed - I multiplier-OC'd my old X5660, which meant that the memory, uncore, and bus frequencies stayed exactly the same; only the core CPU frequency varied. 3D Vision performance in all tested games scaled with the CPU core clock, which suggests that the core frequency is the biggest/only culprit. The caveat is that the uncore OC did not vary the cache frequency on the X5660 - cache frequency OC as a separate setting from core OC is a relatively recent thing, IIRC.

When we do the benchmark comparison, I'll do some tests on the cache frequency's impact on 3D vision performance. It might bring interesting results.

What's your all-core OC on the 6700K?

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#35
Posted 11/27/2017 08:19 AM   
[quote="RAGEdemon"]When we do the benchmark comparison, I'll do some tests on the cache frequency's impact on 3D vision performance. It might bring interesting results. What's your all-core OC on the 6700K?[/quote]I'll be really curious if you find any difference in our 3D Vision problem. Certainly there are no other benchmarkers who can give us this info. Running the fast RAM is also an interesting recent tweak. I have not OCed the 6700K, because it's in a laptop that I use for my main coding. For stuff I'm doing recently, there is no particular reason to eke out the next 400MHz (from 4.2Ghz top), and the cooling system does't have a lot of headroom. My other computer is aging, something like a 4790K, and might be the target. That one is only used for co-op in VR, so hard to justify. [quote]This is true only for 3D Vision, at this present point in time, but not on the whole. It is also up for investigation - Metaholic has ordered a 8700K system which he intends to clock to 5GHz, and we are intending to compare 7700K vs 8700K at 5GHz to see what kind of difference there is. We don't want to bias our results by going in with a strong opinion, but we are confident that 3D Vision bottleneck will likely show that 4HT cores on 7700k vs. 6HT cores on 8700k, all @5GHz are all pretty equal in performance, not taking cache into account.[/quote]This will be super interesting, please do report back. Historically, another possible way to optimize is to disable HyperThreading, which has allowed people to gain higher OC. Especially with 6 cores on a 8700K, that is a viable path, and might give good results. That 8700K with HT off, would be similar to the 8400, but with quite a bit more cache. Cache frequency may not matter, but I'm fairly sure that cache size does. It's an interesting test case if you guys might want to take a look. Might be worth considering in advance what we think is the best benchmark for these. It seems like the Cinebench single-thread is not bad as a rough approximation, and it's easy and free. For our purposes though, what 3D games are the hardest to run? GTA5 comes to mind as being particular bottlenecked. Witcher3 downtown is another. [quote]This is not discounting the potential that the 2 more cores might actually make a difference to the 3D Vision bottleneck in games where they make heavy use of >4 cores in 2D. Right now, a safe bet is that an 8700k @ 5 Ghz is at least as good for 3D Vision as a 7700K @ 5GHz, but comes with 50% more cores and cache, and actually works out cheaper than 7700k per core: 7700k = £274/4= £68.50 per core. 8700k = £394/6 £65.60 per core.[/quote]This is my conclusion as well. It's probably more likely to get a good OC on 7700K, but it's a crapshoot, and you might as well get the latest, since there aren't really any drawbacks. (... at least until we see your and Metalholic test results) [quote]1. Although 3D vision bottlenecked gaming is one of the main activities on my system, I do a lot of other things too: professional CAD/rendering, VR, CMode gaming, 2D gaming, encoding, etc. I believe that over the next 10 years, a power user would make decent use of the 2 extra cores.[/quote]I'm the other way around. I respect your opinion, but I'm of the mind that they've been wildly touting the cores for 10 years, and will continue to do so, and the average user will see no improvement. If you do multi-threaded work, it's clearly worth it. Time is money, and the big core counts save time. But it will be workload dependent. 
In my case, I need really fast storage, because compile time matters to me. I run a 1T 950 Pro Raid-0. This matters much more to my work than core count. [quote]Already consoles use 8 cores (granted they are weak individually) - but the threading potential that Mantle/Vulcan and DX12 have presented is very real and will absolutely be taken advantage of going forward. It would be unwise to invest in a CPU which uses less than these going forward (i.e. i5 wouldn't be a good decision for the future speaking generally, not specifically for 3D Vision bottlenecked gaming).[/quote]This is a good point for core count, and it probably makes sense to stay at the 6 to 8 thread level. Going lower today to 4 threads is probably not a good value. However, I will point out that the consoles still have one incredibly bad characteristic which makes more cores less compelling- their frame rate target is nearly always 30fps. Once they hit that target, they stop development and tuning. We can easily be in scenarios where we should be getting better results, but don't, because their bar is too low. [quote]Also, indeed, the software to take advantage of high core count in gaming was never really there - only recently have we had mantle and DX12 break through some of the limits. Indeed there is a good chance Intel will release 8 core CPUs with similar IPC and clocks to SkyL/Kaby/Coffee, and then 12 cores, and 16 cores within the next 10 years. Whether or not they will be able to compete with the kaby/coffee interlink which does an amazing job for lower threaded applications remains to be seen. Certainly, AMD is going to do exactly this... :)[/quote]Given that we are stuck at 5GHz, I think you are right. Intel and AMD are going to at least copy/paste so they having something to show. It's not going to help us, but there isn't really anything on the horizon. With the possible exception of another process shrink (last one?) On the other hand, they did bump single threaded performance with Coffee, so maybe we can eke some 10-20% gains. I'm contrarian on DX12 though. I think it's mostly a red herring that Microsoft threw up to avoid having anybody migrate from DirectX to Mantle, or Vulkan. The problem here is that it requires the programmers to do even more work, which nobody wants to spend the time and money doing. Couple that with their pathetic 30fps goal, and I will be genuinely surprised if DX12 gets any traction in the next 5 years. There will be some rare one-offs like Ashes, but there were also rare one-offs for DX10. BTW, good discussion, thanks for taking the time.
RAGEdemon said:
When we do the benchmark comparison, I'll do some tests on the cache frequency's impact on 3D Vision performance. It might bring interesting results.

What's your all-core OC on the 6700K?
I'll be really curious if you find any difference in our 3D Vision problem. Certainly there are no other benchmarkers who can give us this info. Running the fast RAM is also an interesting recent tweak.

I have not OC'd the 6700K, because it's in a laptop that I use for my main coding. For the stuff I'm doing recently, there is no particular reason to eke out the next 400MHz (from the 4.2GHz top), and the cooling system doesn't have a lot of headroom. My other computer is aging, something like a 4790K, and might be the target. That one is only used for co-op in VR, so it's hard to justify.


This is true only for 3D Vision, at this present point in time, but not on the whole. It is also up for investigation - Metaholic has ordered a 8700K system which he intends to clock to 5GHz, and we are intending to compare 7700K vs 8700K at 5GHz to see what kind of difference there is. We don't want to bias our results by going in with a strong opinion, but we are confident that 3D Vision bottleneck will likely show that 4HT cores on 7700k vs. 6HT cores on 8700k, all @5GHz are all pretty equal in performance, not taking cache into account.
This will be super interesting, please do report back.

Historically, another possible way to optimize is to disable HyperThreading, which has allowed people to gain higher OC. Especially with 6 cores on a 8700K, that is a viable path, and might give good results. That 8700K with HT off, would be similar to the 8400, but with quite a bit more cache. Cache frequency may not matter, but I'm fairly sure that cache size does. It's an interesting test case if you guys might want to take a look.

Might be worth considering in advance what we think is the best benchmark for these. It seems like Cinebench single-thread is not bad as a rough approximation, and it's easy and free. For our purposes though, what 3D games are the hardest to run? GTA5 comes to mind as being particularly bottlenecked. Witcher 3 downtown is another.


This is not discounting the potential that the 2 more cores might actually make a difference to the 3D Vision bottleneck in games where they make heavy use of >4 cores in 2D. Right now, a safe bet is that an 8700k @ 5 Ghz is at least as good for 3D Vision as a 7700K @ 5GHz, but comes with 50% more cores and cache, and actually works out cheaper than 7700k per core:

7700K = £274/4 = £68.50 per core.
8700K = £394/6 = £65.67 per core.
This is my conclusion as well. It's probably more likely to get a good OC on 7700K, but it's a crapshoot, and you might as well get the latest, since there aren't really any drawbacks. (... at least until we see your and Metalholic test results)
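
For what it's worth, a quick check of the per-core figures quoted above (UK prices as given in the quote; purely illustrative):

```python
# Per-core price check for the two CPUs quoted above.
cpus = {"7700K": (274.0, 4), "8700K": (394.0, 6)}
for name, (price_gbp, cores) in cpus.items():
    print(f"{name}: £{price_gbp / cores:.2f} per core")
# 7700K: £68.50 per core; 8700K: £65.67 per core
```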


1. Although 3D vision bottlenecked gaming is one of the main activities on my system, I do a lot of other things too: professional CAD/rendering, VR, CMode gaming, 2D gaming, encoding, etc. I believe that over the next 10 years, a power user would make decent use of the 2 extra cores.
I'm the other way around. I respect your opinion, but I'm of the mind that they've been wildly touting the cores for 10 years, and will continue to do so, and the average user will see no improvement.

If you do multi-threaded work, it's clearly worth it. Time is money, and the big core counts save time. But it will be workload dependent. In my case, I need really fast storage, because compile time matters to me. I run a 1T 950 Pro Raid-0. This matters much more to my work than core count.


Already consoles use 8 cores (granted, they are weak individually) - but the threading potential that Mantle/Vulkan and DX12 have presented is very real and will absolutely be taken advantage of going forward. It would be unwise to invest in a CPU with fewer cores than these going forward (i.e. an i5 wouldn't be a good decision for the future, speaking generally, not specifically for 3D Vision bottlenecked gaming).
This is a good point for core count, and it probably makes sense to stay at the 6 to 8 thread level. Going lower today to 4 threads is probably not a good value.

However, I will point out that the consoles still have one incredibly bad characteristic which makes more cores less compelling- their frame rate target is nearly always 30fps. Once they hit that target, they stop development and tuning. We can easily be in scenarios where we should be getting better results, but don't, because their bar is too low.


Also, indeed, the software to take advantage of high core counts in gaming was never really there - only recently have we had Mantle and DX12 break through some of the limits.

Indeed there is a good chance Intel will release 8-core CPUs with similar IPC and clocks to Skylake/Kaby/Coffee, and then 12 cores, and 16 cores within the next 10 years. Whether or not they will be able to compete with the Kaby/Coffee interconnect, which does an amazing job for lower-threaded applications, remains to be seen. Certainly, AMD is going to do exactly this... :)
Given that we are stuck at 5GHz, I think you are right. Intel and AMD are going to at least copy/paste so they have something to show. It's not going to help us, but there isn't really anything on the horizon, with the possible exception of another process shrink (the last one?). On the other hand, they did bump single-threaded performance with Coffee, so maybe we can eke out some 10-20% gains.

I'm contrarian on DX12 though. I think it's mostly a red herring that Microsoft threw up to avoid having anybody migrate from DirectX to Mantle, or Vulkan. The problem here is that it requires the programmers to do even more work, which nobody wants to spend the time and money doing. Couple that with their pathetic 30fps goal, and I will be genuinely surprised if DX12 gets any traction in the next 5 years. There will be some rare one-offs like Ashes, but there were also rare one-offs for DX10.


BTW, good discussion, thanks for taking the time.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#36
Posted 11/27/2017 05:29 PM   
I only have to say that I tested with an AORUS GTX 1080 Ti, and the graphics card was in every situation the cause of the bottleneck in my rig, even though my CPU is not precisely recent, and despite the famous CPU bottleneck created when playing with 3D Vision. Once I play at 2K resolution + 3D, the 1080 Ti is good, but not enough.

By the way, I returned the AORUS 1080 Ti because I cannot tolerate that big noise when playing any game, especially when you have to pay so much money for something that is ridiculously overpriced. At least NVIDIA should give us a good product, and given the BIIIIIIIIG heat that this card introduces into the case, it should be sold only with good watercooling systems, not just noisy fans.

- Windows 7 64bits (SSD OCZ-Vertez2 128Gb)
- "ASUS P6X58D-E" motherboard
- "MSI GTX 660 TI"
- "Intel Xeon X5670" @4000MHz CPU (20.0[12-25]x200MHz)
- RAM 16 Gb DDR3 1600
- "Dell S2716DG" monitor (2560x1440 @144Hz)
- "Corsair Carbide 600C" case
- Labrador dog (cinnamon edition)

#37
Posted 11/27/2017 06:32 PM   
[quote="J0hnnieW4ker"][quote="bo3b"][quote="J0hnnieW4ker"] I went to a friends house and the same thing happened, error messages on the sensors and stuck during the pairing of the contollers. [/quote] This is the good test. You tried it on another computer, completely unrelated to yours. And it failed. That demonstrates that it is the Rift. Let Oculus Support know this, and be clear with them, don't add a lot of extra tests. This one test proves the Rift is faulty. BTW, that Inatek board is bogus. Oculus recommended it for a long time, but it causes a lot of problems. Read the comments section: https://www.oculus.com/blog/oculus-roomscale-extra-equipment/ At every boot it resets power management, which sounds like it might be your problem.[/quote] Unfortunately, I dont have any friend here that has a Rift. I could probably count on my fingers the number of people that have a Rift here in South America. Oculus support believe that the issue is my Touch controllers, but I am not too sure about that. Anyway I purchased another one through a friend and hopefully, I can find the defected component by swapping them. Oculus support won't send a replacement to my country, so I still need to figure out how I would get it replaced and avoid tax fees when the replacement returns as if I use a different address in US to receive and sent to me I might have to pay the tax fees, which are huuuuuge here.[/quote] This subject is not related to this section of the forum, but as I mentioned that I was having problems with my Rift, just wanted to share with you guys what I found out. So basically as I mentioned my sensors started showing error messages of poor tracking - wireless sync timed out and my touch controllers stopped working, when trying to pair the controllers again I get stuck on the third step of the pairing with the message "finalizing controller" and error messages on the sensors starts to show again. This problem doesn't seem to happen if I only use an xbox controller and I could play just fine with the xbox controller. Now after doing so many tests and tested with another Rift I found the cause of the problems. Replaced my sensors and touch and the problem persisted, so after replacing the headset the problem was gone, therefore the issue is the headset as I was suspecting from the beginning. That confirmed my suspicions as when I found out that the Rift headset has a wireless connection that communicates with the sensors and touch controllers, the "poor tracking message - wireless sync timed out" kind of make more sense. Probably something wrong with this wireless connection on my headset! Anyway, I am organizing a replacement with Oculus support, let's see how it goes. BTW, could not find anybody with the same issues as me when googling about this, so I must be the lucky one :(
J0hnnieW4ker said:
bo3b said:
J0hnnieW4ker said: I went to a friend's house and the same thing happened: error messages on the sensors, and it got stuck during the pairing of the controllers.

This is the good test. You tried it on another computer, completely unrelated to yours. And it failed. That demonstrates that it is the Rift. Let Oculus Support know this, and be clear with them, don't add a lot of extra tests. This one test proves the Rift is faulty.


BTW, that Inateck board is bogus. Oculus recommended it for a long time, but it causes a lot of problems. Read the comments section: https://www.oculus.com/blog/oculus-roomscale-extra-equipment/

At every boot it resets power management, which sounds like it might be your problem.


Unfortunately, I don't have any friend here that has a Rift. I could probably count on my fingers the number of people that have a Rift here in South America. Oculus support believes that the issue is my Touch controllers, but I am not too sure about that. Anyway, I purchased another one through a friend, and hopefully I can find the defective component by swapping them.

Oculus support won't send a replacement to my country, so I still need to figure out how I would get it replaced and avoid tax fees when the replacement returns. If I use a different address in the US to receive it and then have it sent to me, I might have to pay the tax fees, which are huuuuuge here.


This subject is not related to this section of the forum, but as I mentioned that I was having problems with my Rift, I just wanted to share with you guys what I found out.
So basically, as I mentioned, my sensors started showing error messages of poor tracking - wireless sync timed out - and my Touch controllers stopped working. When trying to pair the controllers again, I get stuck on the third step of the pairing with the message "finalizing controller", and the error messages on the sensors start to show again. This problem doesn't seem to happen if I only use an Xbox controller, and I could play just fine with the Xbox controller.

Now, after doing so many tests and testing with another Rift, I found the cause of the problems.
I replaced my sensors and Touch controllers and the problem persisted; after replacing the headset, the problem was gone. Therefore the issue is the headset, as I was suspecting from the beginning. It confirmed my suspicions: once I found out that the Rift headset has a wireless connection that communicates with the sensors and Touch controllers, the "poor tracking - wireless sync timed out" message made more sense. Probably something is wrong with this wireless connection on my headset! Anyway, I am organizing a replacement with Oculus support; let's see how it goes.

BTW, I could not find anybody with the same issues as mine when googling about this, so I must be the lucky one :(

EVGA GTX 1070 FTW
Motherboard MSI Z370 SLI PLUS
Processor i5-8600K @ 4.2 | Cooler SilverStone AR02
Corsair Vengeance 8GB 3000Mhz | Windows 10 Pro
SSD 240gb Kingston UV400 | 2x HDs 1TB RAID0 | 2x HD 2TB RAID1
TV LG Cinema 3D 49lb6200 | ACER EDID override | Oculus Rift CV1
Steam: http://steamcommunity.com/id/J0hnnieW4lker
Screenshots: http://phereo.com/583b3a2f8884282d5d000007

#38
Posted 12/01/2017 05:04 PM   
Potentially sounds like some of the LEDs which the sensors use to track head movement are damaged on your headset...

[Image: Oculus Rift teardown showing the IR LED array - https://roadtovrlive-5ea0.kxcdn.com/wp-content/uploads/2016/04/oculu-rift-teardown-1-681x511.jpg]

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#39
Posted 12/01/2017 08:53 PM   
[quote="RAGEdemon"]Potentially sounds like some of the LED's which the sensors use to track head movement are damaged on your headset... [img]https://roadtovrlive-5ea0.kxcdn.com/wp-content/uploads/2016/04/oculu-rift-teardown-1-681x511.jpg[/img][/quote] Nope, I've checked with a camera too During the setup of the touch controllers that gets stuck the sensors doesn't even need to find the headset, only the controllers. Pretty sure this is an issue with the wireless component on the headset. The headset works with the xbox controller, but as soon as I try to pair the touch controllers I get these error messages. I think the wireless component of the headset must be damaged and it can somehow work with the xbox controller as the connection might either not be necessary or it can still barely work somehow. But once added the touch controllers it cannot handle it. Dont know for sure, but I think should be something like that. The Oculus support knows nothing, they asked me to send the touch controllers for a replacement, luckly I work with IT and I have a pretty damn good gut feeling and experience with computer issues, so I've done all possibles tests because I was sure that it could not be both touch controllers damaged. check out this article: https://uploadvr.com/oculus-touch-controllers-communicate-directly-with-the-headsets-no-usb-dongles-required/ DK2 had a sync cable and it seems cv1 use a wireless connection, so there must be data that my headset is unable to send/receive, therefore the reason for the poor tracking wireless sync time out messages all over.
RAGEdemon said:
Potentially sounds like some of the LEDs which the sensors use to track head movement are damaged on your headset...

[Image: Oculus Rift teardown photo]


Nope, I've checked with a camera too.
During the setup of the Touch controllers that gets stuck, the sensors don't even need to find the headset, only the controllers. Pretty sure this is an issue with the wireless component on the headset.
The headset works with the Xbox controller, but as soon as I try to pair the Touch controllers I get these error messages. I think the wireless component of the headset must be damaged; it can somehow still work with the Xbox controller because that connection might either not be necessary or can still barely work somehow, but once the Touch controllers are added it cannot handle it. I don't know for sure, but I think it should be something like that.
Oculus support knows nothing; they asked me to send the Touch controllers in for a replacement. Luckily I work in IT and have a pretty damn good gut feeling and experience with computer issues, so I've done all possible tests, because I was sure that it could not be both Touch controllers that were damaged.

Check out this article:
https://uploadvr.com/oculus-touch-controllers-communicate-directly-with-the-headsets-no-usb-dongles-required/

The DK2 had a sync cable and it seems the CV1 uses a wireless connection, so there must be data that my headset is unable to send/receive - hence the "poor tracking - wireless sync timed out" messages all over.

EVGA GTX 1070 FTW
Motherboard MSI Z370 SLI PLUS
Processor i5-8600K @ 4.2 | Cooler SilverStone AR02
Corsair Vengeance 8GB 3000Mhz | Windows 10 Pro
SSD 240gb Kingston UV400 | 2x HDs 1TB RAID0 | 2x HD 2TB RAID1
TV LG Cinema 3D 49lb6200 | ACER EDID override | Oculus Rift CV1
Steam: http://steamcommunity.com/id/J0hnnieW4lker
Screenshots: http://phereo.com/583b3a2f8884282d5d000007

#40
Posted 12/01/2017 09:14 PM   
That's good to know. It looks like the sensors detect the Touch position, while the wireless pairing connection is used for communicating button presses, vibration, battery level, etc.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#41
Posted 12/02/2017 02:32 AM   
[quote="bo3b"]Here's one for i5-8400 2.8G vs. i5-6600K 3.5G. You'd think that the clock speed would crush the tests, but in fact it's the other way around, the 8400 sweeps the map, including single thread. [url]https://www.anandtech.com/bench/product/2024?vs=1544[/url] [/quote] This is a pretty good compare as I was able to get slightly better results than the i5-6600K@3,5GHz with my system. And my hardware is nearly identical to J0hnnieW4ker's hardware. So 20fps more in the best case for the i5-8400? I guess that is not the performance-jump he is expecting from a newer system. @J0hnnieW4ker: Don't go for the i5-8400. Take something faster.
bo3b said:
Here's one for the i5-8400 2.8GHz vs. i5-6600K 3.5GHz. You'd think that the clock speed would crush the tests, but in fact it's the other way around: the 8400 sweeps the map, including single thread.

https://www.anandtech.com/bench/product/2024?vs=1544


This is a pretty good comparison, as I was able to get slightly better results than the i5-6600K @ 3.5GHz with my system. And my hardware is nearly identical to J0hnnieW4ker's hardware.
So 20 fps more in the best case for the i5-8400? I guess that is not the performance jump he is expecting from a newer system.

@J0hnnieW4ker: Don't go for the i5-8400. Take something faster.

Desktop-PC

i7 870 @ 3.8GHz + MSI GTX1070 Gaming X + 16GB RAM + Win10 64Bit Home + AW2310+3D-Vision

#42
Posted 12/02/2017 05:21 PM   