How important is CPU performance for 3d vision?
RAGEdemon said:I would wager that he has accidentally used different game settings such as MSAA etc on one of the systems.

If you look at the card memory usage, it's 5.7GB vs 1.7GB, which suggests a huge mess-up somewhere in the game settings, perhaps even resolution.

If he has missed the DX11 vs DX12, and has had only 1 day to make this video, I don't blame him.

In the end, as both GPU's are well below 90%, it shouldn't have significant impact on CPU based FPS results.
Is this game really that well threaded? Or is there some bug that is causing that much usage across all of the AMD chip's threads? The i7 is at 90%+ on all threads at all times.

I'm just curious whether Ryzen prevails over the i7 in, let's say, three years when it comes to gaming.

#76
Posted 03/04/2017 12:06 AM   
To be fair, all games have been optimised for Intel CPUs for the last 3-5 years.
This would explain why Ryzen's synthetic benchmarks are higher than Intel's whilst its gaming performance is lower:

https://www.kitguru.net/components/cpu/matthew-wilson/amd-explains-why-ryzen-doesnt-seem-to-keep-up-in-1080p-gaming/

Probably why AMD announced at GDC that it is working with Bethesda on every single title in their library.

http://www.pcgamer.com/bethesda-partners-with-amd-to-optimize-games-for-ryzen-and-vega/m

Saying that though, the Xbox / PS4 / Wii U have AMD chips in them, so shouldn't games be optimised for AMD already?

To be fair again, who buys a £400 CPU to game at 1080p?
It might be faster at that resolution, but you are well past the point of diminishing returns.
Maybe for the pro gamers with 200Hz 1080p TN monitors, but I drink from the ultrawide 4K G-Sync IPS panel cup, and 1080p gaming is something I left behind over 5 years ago.
Those benchmarks don't concern me.

If you took a game from years ago and ran it at 8K, as RAGEdemon suggests, I bet the gap would be closer than for games developed in the last 5 years.
Game developers probably still used AMD CPUs to develop their games back then.

The 4K benchmarks have 1 fps between them.
Yes, I understand how it works: the graphics card is at 100% load and bottlenecks before the CPU does. But I can't remember the last time I played a game where my card wasn't at 100% load (not through lack of trying, anyway), apart from a few 3D ones, and that's down to Nvidia being inept.
If your graphics card isn't near 100% load you are doing it wrong! If you don't drive a fast car fast, what's the point? You should have bought a slower one.


Basically the point is moot.

Disabling hyperthreading is supposed to give a 5% fps boost in games, so there are obviously still some kinks to be ironed out.
To be honest, this shows how badly optimised games must be for Ryzen. Most games probably cap at 8 cores, and I bet the game is taking the first 8 logical threads instead (i.e. 4 cores plus their 4 hyperthreaded siblings, instead of 8 physical cores).
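If that theory is right, it's testable: on Linux you can pin the game to one logical CPU per physical core and compare FPS. A rough sketch below - the even/odd sibling layout is an assumption (verify it in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list), and `physical_cpu_mask` / `pin_to_physical_cores` are names I made up for illustration:

```python
import os

def physical_cpu_mask(allowed=None):
    """Pick one logical CPU per physical core, assuming SMT siblings are
    enumerated as adjacent pairs (0/1, 2/3, ...) - verify the layout in
    /sys/devices/system/cpu/cpu*/topology/thread_siblings_list first."""
    if allowed is None:
        allowed = os.sched_getaffinity(0)   # Linux-only: CPUs we may run on
    evens = {c for c in allowed if c % 2 == 0}
    return evens or set(allowed)            # fall back if the guess fits nothing

def pin_to_physical_cores(pid=0):
    """Restrict a process (pid 0 = ourselves) to one thread per core."""
    mask = physical_cpu_mask()
    os.sched_setaffinity(pid, mask)         # Linux-only syscall wrapper
    return mask
```

On an 8c/16t Ryzen with adjacent siblings this would give the game 8 physical cores instead of 4 cores plus their hyperthreads.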

The Intel 7700K is still the fastest for gaming, for now at least. Especially for the 3D community trying to work around Nvidia's shitty 3-core 3D Vision bug etc.
It's just so tempting to have twice the crunching power for music production and other apps whilst only losing 1 fps in 4K gaming.

Are the Ryzen chips HDCP 2.2 compliant?

#77
Posted 03/04/2017 12:06 AM   
GibsonRed said:To be fair here all games have been optimised for intel CPU's for the last 3-5 years.
......
Are the Ryzen chips HDCP 2.2 compliant?
I for one still game at 1080p, and I'm far from the only one. It's useful to test at low resolutions/settings to see how the CPU is actually doing: you may not be CPU-bottlenecked now, but with a future GPU you might be.

For you, Ryzen seems like a good bet because of the other things you do with your computer, and the fact that you play at a high resolution. I like that you can be on the Ryzen platform and simply swap in a new CPU later if their next processor is much better. You don't really get that assurance with Intel.

#78
Posted 03/04/2017 12:19 AM   
Yeah, it is tempting. Think I'll get one anyway.
If anything, to keep them going! :D

#79
Posted 03/04/2017 12:24 AM   
A big part of optimizing for Intel/AMD is just a compile-time switch, so games optimized for an AMD console can still favor Intel on their PC version. I wonder if Unity or Unreal let you actually compile the engine itself, so this could be tested?
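To make that concrete, here is a toy model (pure illustration, made-up names) of the CPU dispatcher a compiler can bake into a binary: several code paths ship in the executable and one is picked at startup. A dispatcher that keys on the CPUID vendor string rather than the feature bits - the behaviour Agner Fog documented for Intel's compiler - demotes an AVX2-capable AMD chip to the generic path:

```python
def pick_code_path(vendor: str, has_avx2: bool) -> str:
    """Toy model of a CPU dispatcher baked into a binary. A fair
    dispatcher would key on the feature bit alone; a vendor-keyed one
    sends non-Intel chips down the generic path regardless."""
    if has_avx2 and vendor == "GenuineIntel":
        return "avx2_path"       # vendor-gated fast path
    return "generic_path"        # everyone else, AVX2-capable or not

# Two CPUs, both with AVX2, but only one gets the fast path:
print(pick_code_path("GenuineIntel", True))   # avx2_path
print(pick_code_path("AuthenticAMD", True))   # generic_path
```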



Anyway, I know that for enthusiasts like those here, clock speed (and therefore Intel) is still king... but I wonder how things would measure up with a bunch of background processes. Imagine someone with voice chat, a web browser, Steam, a virus scanner, maybe Rainmeter, etc. running in the background (as well as Win10 itself doing whatever it pleases). With 8 cores you've got a lot more headroom before they start to compete with your game.

Still probably not worth buying into until they fix the memory issues though.

#80
Posted 03/04/2017 12:28 AM   
~Poke~ said:......Still probably not worth buying into until they fix the memory issues though.


Indeed, I'd say I'm two months out from a new build. I just bought us a new puppy and a PS4 because they had them on sale and I love the Uncharted series (it was bundled). So I know my lady would complain if I then dropped 600+ or so on a new mobo, RAM, tower and CPU.

I'll be waiting to see if these bugs get sorted out and whether Intel blinks on prices. I'm also interested to see what their 6-core lineup does.

#81
Posted 03/04/2017 12:57 AM   
RAGEdemon said:......In the end, as both GPU's are well below 90%, it shouldn't have significant impact on CPU based FPS results.


The GPUs were below 90%, but there are graphics settings that influence CPU usage as well; how do we know that's not the case here?

Intel i7 8086K
Gigabyte GTX 1080Ti Aorus Extreme
DDR4 2x8gb 3200mhz Cl14
TV LG OLED65E6V
Avegant Glyph
Windows 10 64bits

#82
Posted 03/04/2017 05:32 AM   
tygeezy said:Is this game really that well threaded? Or is there some bug that is causing that much usage across all of the AMD chip's threads? The i7 is at 90%+ on all threads at all times.

I'm just curious whether Ryzen prevails over the i7 in, let's say, three years when it comes to gaming.


That's the big question mate. There are 3 avenues here.

1. People think that games will become more multithreaded. While I believe that sentiment is generally true, games such as BF1 and Watch Dogs 2 are already extremely well multithreaded. On computerbase.de, you can see that they scale extremely well on 8-core Intel CPUs, which unfortunately does not translate to Ryzen. I don't think games becoming more multithreaded will solve all of Ryzen's issues.
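How far "more multithreaded" can go is bounded by Amdahl's law: the serial slice of each frame caps the speedup no matter how many cores you add. A quick sketch with an illustrative, made-up 70% parallel fraction:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: upper bound on speedup when only part of the
    per-frame work can be spread across cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical game where 70% of frame time parallelises:
for n in (4, 8, 16):
    print(n, "cores:", round(amdahl_speedup(0.70, n), 2))
# 4 cores: 2.11x, 8 cores: 2.58x, 16 cores: 2.91x
```

Going from 8 to 16 threads only buys ~13% here, which is one reason core counts that look great in synthetics rarely show up as FPS.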

2. Ryzen has hardware-side issues which might be patched out, such as high memory and cache latency and compatibility problems with high-speed memory. On the other hand, it might be plagued by them for the full life cycle and possibly beyond.

3. Games are generally compiled and optimised for Intel processors, as some have already stated; and Intel isn't looking to do AMD any favours. Check out how Intel purposefully sabotaged AMD CPU performance:

http://www.agner.org/optimize/blog/read.php?i=49

Games can, then, be optimised for the AMD platform - potentially.


So, in this regard, taking all 3 into account, Ryzen has the potential to improve to within -10% to 160% of a 7700K with games starting development right now and designed for Ryzen.

It wouldn't make sense for developers to optimise code for current and past games however, so we might only see a small handful of very popular and still active titles being actively worked on.

It also heavily depends on how many people buy Ryzen for gaming right now. If no one buys it (and from the look of things, the general review consensus is to stay away from Ryzen for gaming), then developers have no incentive to even optimise for it into the future.

The reality will be somewhere between the two extremes.

IMO, can it happen? Absolutely, and I really hope it does.
Will it happen? IMHO, chances are slim for the current generation, but Zen 2+ has a bright future if they can catch up.

If you want my personal opinion on the here and now for a top-end gaming CPU for the next 5 years, however: I already made my bets after crunching the numbers back in December 2016, and they can be seen in my sig below.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#83
Posted 03/04/2017 06:40 AM   
Intel Compiler FTW!

I want to see AMD making their OWN compiler :))

I am an AMD CPU fan (all my CPUs from 1995 to 2010 were AMD)! But software-wise... the Intel Compiler is the best for the PC (and obviously for Intel CPUs). Without the Intel Compiler we wouldn't have got the 3D Vision OpenGL wrapper, btw ;) as the Microsoft compiler is very general-purpose and they removed inline ASM support on the x64 platform.

But even on the x86 platform, profiling and comparing the same code built with the Visual Studio compiler vs the Intel Compiler shows clearly how the Intel Compiler generates "better" code :)

One thing people tend to forget... the gaming industry is not driven by PCs but by consoles. On all current consoles (both PS4 and Xbox One) the CPU is an 8-core / 8-thread AMD part, paired with (compared to the PC) some shitty GPUs.

For example the PS4 GPU has 1.84 Teraflops:

http://www.techradar.com/news/gaming/consoles/ps4-vs-xbox-720-which-is-better-1127315/2


VS

the Nvidia GPUs, which range from 3.9 TFLOPS all the way up to 9.0 TFLOPS:

https://www.vrfocus.com/2016/05/nvidia-geforce-gtx-1080-1070-980-ti-980-970-comparison-guide/
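Those peak figures are just arithmetic: shader count x clock x 2 FLOPs per cycle (one fused multiply-add). Using the public specs (1152 shaders @ 800MHz for the PS4; 2560 CUDA cores @ ~1733MHz boost for the GTX 1080), the quoted numbers fall out:

```python
def tflops(shaders: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Peak FP32 throughput in TFLOPS: shaders x clock x FLOPs/cycle (FMA = 2)."""
    return shaders * clock_ghz * flops_per_cycle / 1000.0

print(round(tflops(1152, 0.8), 2))    # PS4 GPU  -> 1.84
print(round(tflops(2560, 1.733), 2))  # GTX 1080 -> 8.87, roughly the "up to 9" above
```

Peak TFLOPS says nothing about real game throughput, of course, but it shows where the headline numbers come from.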


Putting all of this into the bigger picture, it seems that AMD has some "compiler" story for the consoles but not for the PC.

So, this is the "fishy" part.

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

#84
Posted 03/04/2017 11:30 AM   
RAGEdemon said:......If you want to know my personal opinion on the here and now for a top end gaming CPU side of the system for the next 5 years however, I already made my bets after crunching the numbers back in December 16, and they can be seen in my sig below.


I suppose you have delidded the 7700K for that 5.1GHz; what cooler do you use?

Ryzen 1700X 3.9GHz | Asrock X370 Taichi | 16GB G.Skill
GTX 1080 Ti SLI | 850W EVGA P2 | Win7x64
Asus VG278HR | Panasonic TX-58EX750B 4K Active 3D

#85
Posted 03/04/2017 12:21 PM   
Delid using the razor method + liquid metal (Grizzly Kryonaut) + an NZXT Kraken X62 liquid cooler with a custom profile (not using the buggy Kraken software).

I lost the silicon lottery and got a mediocre-to-bad chip. I'm running at 1.49V for gaming stable (higher than 1.4V is not recommended, though 1.51V is the max Intel allows). I'm using LLC at its highest level, which pushes the voltage to 1.56V under heavy load. It is not fully 24-hour Prime stable nor AVX stable at 5.1GHz, but at 5.0GHz it could be if I wanted. For gaming, 5.1 is solid even with games using AVX.

There is a real possibility that the chip will degrade going into the future, but I'm taking the risk - tried to make the best of a bad situation.

More than anything else, it all comes down to luck in the silicon lottery.

What's your voltage on the Xeon 5650 at 4.4GHz?
I was running a Xeon 5660 @ 1.44V at 4.4GHz Prime stable for years. I don't think your Vcore will be as high as mine; just letting you know that 1.44V was perfectly safe on my chip, so you might be able to get a few hundred more MHz out of yours, for what it's worth.


#86
Posted 03/04/2017 02:26 PM   
LLC on, HT off, 1.4375V in BIOS for 4.4GHz, rock solid for 2 years now.
I could do 4.5 with ~1.48V; however my mobo can't handle the power draw when testing stability.
Usually after 15 minutes in LinX the PC will restart itself, or the CPU will revert the clock to stock, and to get it working again I have to power off and disconnect the PSU from the wall.
I have good temps with a custom watercooling loop - around 65-70°C in LinX/Prime95 - but 4.4 appears to be the maximum for my setup.

I feel this CPU is just not enough, because in Fallout 4 and Dying Light, for example, I am not getting any FPS boost by minimising the graphics settings.

What bothers me is Windows 7 support: according to the spec descriptions, Z270 would work under Win 7 only with a 6th-gen i7.

AMD, on the other hand, claims Windows 7 support for Ryzen.


#87
Posted 03/04/2017 07:27 PM   
That's a great OC! Looks like we both maxed that chip at the same reasonable voltage. I had similar issues @ 1.475 for 4.5 with the Asus P6T power choking. But unlike the 7700K, neither burn tests nor gaming were stable @ 4.5.

FWIW, I had the 7700K z270 system running Win7 with ASUS provided drivers on CD without 'real' problems, except not being able to use the iGPU of course, which is a non-issue.

So far, Win10 has also been behaving itself, and a lot of the more active users use it for 3D gaming, including helifax and pirateguybrush. bo3b hates Win10, and his word carries a lot of weight on these boards too. I had to upgrade to Win10 for reasons not related to the 7700K, but I do indeed see the annoyances with it that he mentions.

My numbers showed a 50%-70% improvement depending on game, going from Westmere Xeon @4.4 @ DDR3 1600 --> 7700K 5.1 @ DDR4 3600, and these figures are indeed what I see in games.

Rise of the Tomb Raider maxed does get to ~55FPS due to the CPU in places such as the soviet installation, but most modern games I have tried drive the 1080 SLi system up to ~99% usage @ 1600p to give 60FPS locked in 3D Vision.

For 3D Vision, it has worked really well, as the CPU bottleneck bug is much less of a problem than on the slower CPU system. It's as if 3D Vision wastes a fixed number of CPU cycles rather than a fixed percentage of CPU cycles, if that makes sense.
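To put rough numbers on that idea: if the 3D Vision path adds a roughly fixed per-frame CPU cost, a slower CPU can get pushed over the ~16.7 ms budget for a locked 60 FPS while a faster CPU keeps headroom. This is only an illustrative sketch — the 4 ms overhead and the per-frame game times are assumed numbers, not measurements:

```python
# Illustrative sketch (assumed numbers, not measurements) of why a fixed
# per-frame CPU cost from 3D Vision can break a 60 FPS lock on a slower
# CPU while a faster CPU keeps headroom.

FRAME_BUDGET_MS = 1000.0 / 60.0  # ~16.7 ms per frame for locked 60 FPS

def cpu_frame_time(game_ms, overhead_ms):
    """Total CPU time per frame: game work plus a fixed 3D Vision cost."""
    return game_ms + overhead_ms

def holds_60fps(game_ms, overhead_ms):
    """True if the CPU side fits inside the 60 FPS frame budget."""
    return cpu_frame_time(game_ms, overhead_ms) <= FRAME_BUDGET_MS

OVERHEAD_MS = 4.0  # hypothetical fixed per-frame cost of the 3D Vision path

# Slow CPU needs 14 ms of game work per frame, fast CPU needs 8 ms.
print(holds_60fps(14.0, OVERHEAD_MS))  # False: 18 ms > 16.7 ms budget
print(holds_60fps(8.0, OVERHEAD_MS))   # True: 12 ms leaves headroom
```

If instead the overhead scaled as a percentage of frame time, both systems would lose the same fraction of FPS, which doesn't match the "much less of a problem on the fast CPU" experience described above.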

I am happy with the upgrade; however, I was hoping Ryzen would do better than the numbers I used for my number crunching in December. It's a shame the real numbers from reviews were a lot worse than I had anticipated - I take no pleasure from my so-called "correct choice"; in fact, quite the opposite, sadly.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#88
Posted 03/04/2017 07:51 PM   
mihabolil said:

What bothers me is the Win 7 support: according to the description, Z270 works under Win 7 only with a 6th gen i7.

AMD, on the other hand, claims support for Win 7 on Ryzen.



The 7700K runs just fine in Windows 7 with Z170...

If you want a Z270 board, however, I think it's a bit more difficult.

I'm ishiki, forum screwed up my name.

9900K @5.0 GHZ, 16GBDDR4@4233MHZ, 2080 Ti

#89
Posted 03/05/2017 03:22 AM   
Apparently, Joker wasn't honest in his benchmarks. Sad. I hope this is just some kind of simple misunderstanding.

https://www.youtube.com/watch?v=VWarC_Nygew

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#90
Posted 03/06/2017 05:32 AM   