680 SLI or 780

what would be better for a 680 user

get another 680
get rid of 6x and invest in 780
You read conflicting things about threading performance.

I read a couple of interviews with the technical director of 4A Games (Metro) recently. The guy sounded extremely knowledgeable (by which I mean most of what he said went over my head, lol). He was adamant that, from the perspective of his engine, it was more important to have more cores than powerful single-core performance. He'd much rather be working with lots of parallel cores even if it meant they were weak.

That situation describes the new consoles, by the way. As I understand it, both of them will have 8 cores, and both will use CPUs whose single-thread performance is dirt-poor by our standards. On paper, it certainly seems like devs will need to push multicore performance if they want to get great performance out of the next consoles.

I guess we won't know until 'true' next-gen games come out - games that run on engines specifically designed for next-gen, rather than games like BF4 that straddle both worlds. But on paper, it certainly seems like a mid-strength Core i7 would be a much better bet next year than a top-notch Core i5, since only the former will match (and probably exceed) all the specs of the consoles' CPUs.

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

#31
Posted 11/13/2013 12:47 AM   
I think I can answer your previous question about why there aren't drop-in CPUs to improve performance.

There actually are some boards like that: if you want to run Xeons, you can get multi-CPU, multi-core motherboards. The Xeon alone will cost you something like $1500, so it's not for the faint of heart.

But the real story is that your multiple cores ARE multiple CPUs, in all respects. Your 4-core Ivy Bridge is in fact 4 CPUs on a single chip.

The real problem is on the software end. Writing multi-threaded code is hard, and some problems just don't lend themselves to being parallelized. In general, in games there has to be a Master Control Program, if you will, that keeps things organized and in sync. If things aren't in sync, you can wind up with explosions happening before you set them off, teammates dying when they were just patched up, all sorts of chaos.

The older game engines we are presently playing - earlier Frostbite, Unity, Unreal, the engines nearly all games are written with - are all essentially single-threaded. They farm out some work like sound, music, and loading to other threads/cores/CPUs, but they still have an MCP that is single-threaded.
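
To make that concrete, here's a bare-bones sketch (not real engine code, just an illustration) of the MCP pattern: one thread runs the whole simulation in order, and only side work - here a made-up LoadAsset stand-in - gets handed to another thread.

```cpp
// Minimal sketch, not code from any real engine: a single-threaded "MCP"
// game loop that farms out side work and only blocks when the result is
// actually needed.
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Hypothetical stand-in for streaming an asset from disk.
std::string LoadAsset(const std::string& name) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return name + " loaded";
}

int main() {
    // Side work runs on another thread while the main loop keeps going.
    auto level = std::async(std::launch::async, LoadAsset, std::string("level01"));

    for (int frame = 0; frame < 5; ++frame) {
        // All sync-critical gameplay logic stays on this one thread,
        // so events can never arrive "out of order".
        std::cout << "frame " << frame << ": input -> simulate -> render\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    std::cout << level.get() << "\n";  // join the background work at the end
    return 0;
}
```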

Now, somebody as good as the 4A Games devs can get around most of that by being a lot more clever about the software and using notifications between the different pieces. I think they might be on the leading edge of game engine development, now that it's clear Moore's Law has been busted and we are stuck with multiple cores as the scaling path (which we could have done in 1985).

You also see this in the new Frostbite engine. It's much more multi-threaded, and will use up to 6 threads/cores/CPUs effectively. It doesn't seem to be as good as Metro in that there appears to be a 6-thread cap, which shows up as bottlenecks in benchmarks.
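
And here's roughly what the more parallel, job-style approach looks like - again just a sketch, not anything from 4A or DICE: independent chunks of frame work get spread over however many hardware threads exist, then joined before the frame is handed off. Simulate() is a dummy workload.

```cpp
// Rough sketch of a job-style frame update: independent work is split
// across all available hardware threads and joined before the frame ends.
#include <future>
#include <iostream>
#include <thread>
#include <vector>

// Dummy per-chunk workload, purely for illustration.
long Simulate(int chunk) {
    long acc = 0;
    for (int i = 0; i < 1000000; ++i) acc += (i ^ chunk);
    return acc;
}

int main() {
    unsigned jobs = std::thread::hardware_concurrency();
    if (jobs == 0) jobs = 4;  // hardware_concurrency() may report 0 if unknown

    std::vector<std::future<long>> results;
    for (unsigned c = 0; c < jobs; ++c)
        results.push_back(std::async(std::launch::async, Simulate, int(c)));

    long total = 0;
    for (auto& r : results) total += r.get();  // barrier: all jobs finished
    std::cout << "ran " << jobs << " jobs, checksum " << total << "\n";
    return 0;
}
```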


I think you are right that the new consoles are probably going to push this to the fore, as the only way to get effective improvements.

I just bought a 4670K Haswell, with only 4 threads/cores/CPUs, and that's right for me, because I buy for last gen, not the future. For you, when you update your Ivy Bridge, you will probably want to go at least i7 to match the expected software improvement from the consoles.

edit: right CPU duh

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#32
Posted 11/13/2013 05:27 AM   
Actually, I too bought a 4770k recently, because I'm a tech shopaholic. I just haven't upgraded my sig yet :D (I also took helifax's advice and got a good motherboard, like you did, I believe)

Seems I got a pretty decent one too, as I could take it to 4.7GHz without any real effort, and with temperatures still really quite low. From what I've read, that's quite unusual. It probably could go higher with more BIOS tweaks, though I just put it at 4.6 with a moderate voltage and left it at that.

But surely you meant to say 8 threads, no? Or did you mean to write 4570k?

Thanks for the great answer, by the way. So, basically, they potentially could release motherboards with two sockets (like my Mac Pro at work has), but they just don't bother, because 4 physical cores and/or 8 threads is more than enough in the current software climate, yeah? I wonder if that sort of thing might become practical in a few years' time.

ImageVolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

#33
Posted 11/13/2013 09:56 AM   
Sorry, I sometimes lose it with all the numbers. I meant i5-4670K. I came __> <__ this close to getting an i7-4770K, but decided it was overkill for what I need, and that HT can sometimes cost you some overclock. If the consoles pan out and we get effective use of 8 threads, I can always swap to an i7.

I also need to update my sig. :->

Yeah, they could do multiple CPUs, but realistic workloads barely tap even 12-thread Sandy Bridge chips, with the possible exception of encoding video. For servers, it's a different story, and that's why server machines tend to be multi-chip.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#34
Posted 11/13/2013 12:28 PM   
I'm not a tech guy, but from what I understand the 'problem' lies within the software, exactly as bo3b stated - we're just not there yet.

In the Win7 I'm using, even the HT threads are recognised as separate CPUs, so I have twelve 3930K processors in my Device Manager. Same goes for 64-bit, which seems to have been around forever, but we're only recently getting native support for it.
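
For reference, that twelve is also what any program sees when it asks the OS; a tiny sketch (standard C++, nothing Windows-specific):

```cpp
// Tiny sketch: ask the runtime how many logical processors the OS exposes.
// On a 6-core 3930K with Hyper-Threading on this reports 12 - the same
// twelve "CPUs" that show up in Device Manager.
#include <iostream>
#include <thread>

int main() {
    unsigned n = std::thread::hardware_concurrency();  // may be 0 if unknown
    std::cout << n << " logical processors visible to the OS\n";
    return 0;
}
```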

Since we've talked a bit about BF4, I think that Frostbite 3 is a mess. It was supposed to be FB 2.5, as they stated like 6 months ago, until EA started to push their marketing BS and suddenly converted it into 3.0, because 2.5 sounds lame. They also put Mantle support in there, and the engine had to run on current-gen consoles, so I bet it's a complete mess rushed out just to meet the deadline (which might explain the sheer amount of technical malfunctions users are experiencing on their forums).

From what I've seen, UE4 will be the first real next-gen game engine (it still sounds silly to me to use the term next-gen, since PCs have been on that level for a while now).

http://www.youtube.com/watch?v=-VANuJCM29E

(not much technical info there, but the effects are pretty amazing) All we're getting now is just silly, rushed and totally unoptimized :/

Unfortunately the game industry lives on the consoles, so I'm really glad they finally decided to make a leap, and we'll be able to utilize our PCs the way they're meant to be used.

Acer H5360 / BenQ XL2420T + 3D Vision 2 Kit - EVGA GTX 980TI 6GB - i7-3930K@4.0GHz - DX79SI- 16GB RAM@2133 - Win10x64 Home - HTC VIVE

#35
Posted 11/13/2013 12:38 PM   
We can see the benefits (and downsides) of massive parallelism with our GPUs: a whole bunch of 'processing units' that can be configured to do all sorts of graphics processing in parallel. The idea that not all game engines (and software/game titles) are created equal gets brought home by the seemingly constant driver+profile updates - all of which are attempts to get the most performance by changing how the driver allocates all that available parallelism, which may or may not be available across multiple generations of GPU product.

Much of the same optimization can be (and, for some titles, is) done with game release patches; however, I don't believe it's on the same scale (or can be) as the GPU driver+profiles.

When you have hundreds to thousands of units of parallelism to work with, coding for a GPU with 300 'processing units' that could scale to 10x that amount (and everywhere in between) would seem to be an easier task than scaling across a much smaller 2x (4 cores or threads), 3x (6), 4x (8) to 6x (12), over a baseline of two cores or threads.

If it were coded for 12-thread hardware, the 2-thread hardware would likely suffer performance losses. If coded for only 2 threads, the 4-12 thread hardware would be underused. What developers seem to do is pick what they feel is a 'happy medium' for what their target market might have (the Steam 2013 hardware+software survey seems to indicate 2-4 threads) and call it good.
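
As an illustration of that 'happy medium' (my own made-up numbers, not from any shipped engine): size the worker pool from what the machine reports, but clamp it to the range the engine was actually tuned for.

```cpp
// Illustration of the "happy medium": use what the machine has, but clamp
// the worker count to a 2-6 range the engine was (hypothetically) tuned for.
#include <algorithm>
#include <iostream>
#include <thread>

int main() {
    unsigned hw = std::thread::hardware_concurrency();  // 0 if unknown
    if (hw == 0) hw = 2;
    unsigned workers = std::clamp(hw, 2u, 6u);  // made-up tuning range
    std::cout << "spawning " << workers << " worker threads\n";
    return 0;
}
```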

I'd agree with others that, with both consoles sporting 8 cores (not sure if they are all available to developers on either platform), we might start seeing PC games making use of more cores/threads.

However, no matter how many cores/threads they optimize a game for, the speed of the game/application will always be governed by its slowest thread. Be it encrypting information for online services, waiting on player input or doing something else, without careful fine-tuning/compiling of the game/application for the hardware it's run on, there will always be something taking longer than they hoped/planned and slowing the entire game/application down.
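
That 'governed by its slowest thread' point is basically Amdahl's law. A quick back-of-the-envelope sketch, with made-up fractions rather than numbers measured from any game:

```cpp
// Back-of-the-envelope Amdahl's law: if a fraction p of the frame is
// parallelised over n cores, speedup = 1 / ((1 - p) + p / n).
// The fractions below are illustrative, not measured from any game.
#include <cstdio>

double Speedup(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double fractions[] = {0.50, 0.80, 0.95};
    for (double p : fractions)
        std::printf("%2.0f%% parallel: 4 cores -> %.2fx, 8 cores -> %.2fx\n",
                    p * 100.0, Speedup(p, 4), Speedup(p, 8));
    // Even at 95% parallel, 8 weak cores top out near 5.9x - the serial
    // 5% caps the whole frame.
    return 0;
}
```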

This is one of the major reasons why many console releases are pretty amazing for the hardware they are running on. I'm hopeful that, since the 'next generation' has better hardware to work with, this will in turn make PC ports better.

(I still may jump the gun and upgrade my CPU, my H100I needs something more of a challenge to keep cool)

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

#36
Posted 11/14/2013 12:43 AM   
tehace said: BF4 is somewhat important as I'm going to be playing this game a lot. I would probably go R290X since it's in AMD's pocket atm, but the lack of 3D in AMD is a no-go for me since I do enjoy playing other games in 3D as well. That being said, I don't care for 3D in BF4, nor do I care about 4K gaming atm.

I'm just worried a bit since more and more EA games will use Frostbite 3, and I would love to be able to play ME4 in 3D :)

Yes, HT off didn't improve my frames (maybe slightly, but it was due to the fact that I cranked the CPU up to 4.5).

I suppose I'm going to wait a month or so and see if the 680 will go down in price. If so I'm gonna grab one; if not I'm going to wait, as I can play BF4 locked at a constant 60fps for now and there are no other games so far that will require an upgrade. I suppose it would be wise to wait until Q1 2014 and see what will happen once the real next-gen games start to come out.
That seems like a good approach to take. Doesn't hurt to wait until you at least see what 8xx cards can do, and how the Mantle thing plays out with BF4.

With your current single 680, you are limited to lower resolutions, but it sounds like you are fine with that for now.


The single thing you can do to increase your frame rates would be to overclock your CPU further. You've got essentially the best CPU you can buy right now, but I'd recommend ditching the Scythe and going with a Kraken x60 if it fits. That will probably give you a bit more headroom and maybe get to 5.0GHz. The Scythe is a good air cooler, but it's no match for closed loop water cooling.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#37
Posted 11/14/2013 02:01 AM   
Volnaiskra said: I think that can sometimes be said for things like buying lots of RAM, a top-end motherboard, or perhaps a hyperthreaded CPU. People often buy those without needing to. But I've never heard of a video card that was so over-powerful that its power went to waste. Even if you don't use it to its full potential today, you certainly will tomorrow.

In the case of BF4, with its crazy CPU bottleneck, you're right, 680 SLI is an excellent choice, while anything higher is probably overkill. But most games don't have a CPU bottleneck - certainly not at that exact point (where 780SLI becomes the same as 680 SLI) anyway. That's kind of a fluke, I think. And many don't have SLI scaling as good as BF4.
Actually, when I started looking into this in depth, I found that more often than not, I was CPU limited. The caveat there is that I play at 1280x720, because that's all projectors can do in 3D today.

I started down this path with GTA4, because I could not get it to run smoothly. I upgraded to a single GTX 580, then to GTX 580 SLI, because frame rates were bad. I went to x64 with 8GB of RAM. Finally I realized that my wimpy 860 was holding me back, as GTA is pretty much single-threaded.

In more forward-looking games like BF4 and Metro:LL we are starting to see some value in multi-core systems. Games before that really need a single fast core. If you can allow or force a single core to turbo up, you are better off. Turning off HT, closed-loop water cooling, and even disabling two cores allowed me to overclock far enough to make a dramatic improvement in GTA4.


So, my real point is that you need to get what is right for your setup and what you play. I'm sorely tempted by the new 780 cards, but the reality is that there is only a single game where I have to turn the settings down one notch - Metro:LL. And I need to do that anyway, because the lights are slightly busted in 3D at Ultra. That's with GTX 580 SLI.

So, while I agree in principle that you should just buy up for convenience and simplicity, the component that makes the most difference may not be what you think.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#38
Posted 11/14/2013 02:19 AM   
I'm not surprised GTA is CPU limited. The graphics aren't particularly groundbreaking, yet the number of vehicles and NPCs it keeps track of - with all the resulting procedural animations, AI, audio and physics - is staggering, and very impressive. A multiplayer game like BF4 is always going to tax the CPU too.

I'm sure there are plenty of games that are GPU bound though. Like you say, it depends on what you're after. I'd love to play with full SSAA in Tomb Raider or Metro, or with a silky smooth 120fps in Crysis 3 with high AA, but my Titans won't let me.

I guess my main point is that we're in a very unusual time: the cusp of a new console cycle, something that only happens once or twice a decade. Ordinarily I'd wholeheartedly agree that you should buy for the present and immediate future. But in our case, the near-though-not-quite-immediate future is going to render any card bought this year mediocre. Which is why I don't think conservative upgrades make much sense at the moment, compared to either buying big or saving and waiting.

Then again, I wasn't around during the last console cycle change 7 years ago, so I don't know how these things usually pan out. Perhaps I'm overestimating how quickly games will improve.

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

#39
Posted 11/14/2013 07:23 AM   
Volnaiskra said: I'd love to play with full ssaa in tomb Raider or metro, or with a silky smooth 120fps in Crysis 3 with high aa, but my Titans won't let me.


I'm a projector user so I'm stuck at 1280x720 in 3D, but this comment caught my eye as I play both these games. (I also play exclusively with a 360 gamepad controller, if that makes any playability difference.)

We read articles+comments all the time saying that high-end graphics cards are overkill @1080P, let alone in SLI or CFX. At 720P I use 'high' instead of 'very high' in Metro 2033 to remove the annoying DOF and a fairly large FPS droop in the first 1/3 of the Metro 2033 benchmark; TR2013 looks/plays silky smooth with everything but "TressFX Hair" turned on (I turn that off more because I don't like the look of it than for the performance hit); and in Crysis 3 everything is cranked up and it looks+plays smoothly.

I was a bit surprised at your comment, as I'd thought 780 or Titan SLI was overkill @1080P, let alone way overkill @720P. I'll take this as a testament that the current gaming hardware/software press still does not have a clue as to how real gamers play, and maybe the 8xx or 9xx series might finally get us stable 120FPS @1080P with everything turned on+up.

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

#40
Posted 11/14/2013 01:49 PM   
Good read, guys. I also think that 4K gaming is further away than NVIDIA tries to convince us it is, unless somebody is happy with 30fps. If you can't max Crysis at 1080p, how many frames would it get on a 4K display (four times the pixels), let alone on 3x4K in Surround?

On second thought, it's been said that the Infiltrator demo from UE4 was running on a single GTX 680, so I think the games we're seeing now are just very poorly optimized, or not optimized at all. It's like Android devices: they keep pushing hardware updates to something like 4 cores at 2GHz because it's not smooth, when the real reason is that the system has become clumsy due to constant additions and no optimization.

Acer H5360 / BenQ XL2420T + 3D Vision 2 Kit - EVGA GTX 980TI 6GB - i7-3930K@4.0GHz - DX79SI- 16GB RAM@2133 - Win10x64 Home - HTC VIVE

#41
Posted 11/14/2013 05:50 PM   
Look, don't get me wrong: my system lets me run the most demanding games nicely, and is probably overkill for the rest. So I'm not really complaining. I'm not that much of a princess, I promise.

But you know that silky smooth feeling you get when you play an old game? Where it's not only a steady 60fps, but there are no distracting dips during intense moments, and not a trace of microstuttering or inside-the-second variation (and your GPU is also running whisper quiet)? My holy grail is to open up something like the latest Crysis or Metro and get that feeling. But, despite essentially hitting the hardware ceiling with two Titans and my new 4770K, that holy grail is a way off.

MLL runs great, but does still dip below 60x2fps. And that's without SSAA. Adding SSAA makes the image noticeably nicer, but hurts the framerate. Running the benchmark with max everything and SSAA gives me 58fps in 2D, with microstuttering thrown in.

I haven't played Tomb Raider's Shantytown since I went SLI, but I doubt I could get a straight 120 with SSAA there.

I guess the reliance on SSAA in some of these DX11 games doesn't help, since it looks fantastic but is such an expensive, brute-force method.

I haven't played much of Hitman: Absolution, so I don't know how representative its benchmark is. But its benchmark is murder, giving me about 45-60 in 2D, and much worse in 3D... even with some settings turned down. If that's how the game runs, I'll have to choose between 3D or lowering various settings.

And even if a vanilla game runs well, there might still be desirable additions like mods, injectors, ENB, etc. Skyrim with lots of mods installed looks 3 times better than vanilla, but it's also 3 times more taxing.

On the upside, Cryostasis finally works acceptably, lol. If you don't know that game, it's an old and dog-ugly game that has the worst optimisation I've ever seen. I load it up after every upgrade for a laugh, to see if I can finally run it without it being a stuttering mess. It took 5 years and 4 upgrades, but now I finally can, lol.



I think review sites have gotten better since TechReport helped usher in the era of inside-the-second monitoring. Though most still seem to consider a variable framerate that floats somewhere around 40 to be quite attractive. Some only measure average framerates, even though it's the minimum ones that actually catch your attention when you're playing. And of course they only measure 2D performance in the first place.

I have to assume that mainstream gamers find those charts on all the game sites accurate and relevant, but personally, I can only use them as a rough guide. Kind of like when I'm surprised that a game with "6-8 hours worth of content" took me 20 hours, and then I realised it's because I played it on hard and also spent a lot of time sightseeing.




I totally agree with tehace. Who on earth is going to be playing Battlefield 4 and Watch Dogs on a 4K monitor anytime soon? A choppy 30fps might be good enough for a casual gamer who just wants to play something on his old PC, but the sort of person who will invest in a 4K system probably isn't going to be happy with that.

I guess Surround gamers willingly accept lowish framerates today, but Surround gaming profoundly changes the experience, so it's worth it for them.

But 4K? What will it offer? Extra size (but not as impressive as a projector), less noticeable jaggies (which could have been achieved with AA anyway), and the ability to see extra detail in textures that isn't actually there, since textures are never high-res enough at the best of times. Even on a crazy "Battlebox" PC, I reckon 4K would actually represent a downgraded experience in many new games (at least in 3D).

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

#42
Posted 11/14/2013 11:58 PM   
By the way, mbloof, I find that gamepads are much more forgiving of framerates than mice. Whip around 180 degrees with a mouse, and the framerate will likely dip significantly in that moment as the GPU has to render practically a whole new scene in a split second. But turn around 180 degrees with a gamepad, and the movement will be slower, steady and gradual, letting the GPU kind of 'stream' it in.

I find good quality motion blur, like in Witcher 2, to be an extremely welcome improvement when I'm playing with a mouse... and almost totally unnecessary when playing with a gamepad.

PS - Have you seen TressFX since the update they did a few weeks into release? It seemed to greatly improve the realism of the physics, and on the whole I think her hair looks pretty amazing. Though it may have some issues with SLI - not sure.

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

#43
Posted 11/15/2013 12:06 AM   
@Volnaiskra: I kinda figured as much, which is why I mentioned it. While I can be caught 'sightseeing', grabbing the camera stick and doing a quick 360 when the mood strikes me just to see what the numbers in the top-left corner of the screen do (or don't!), gamepads can be much more forgiving of camera movement.

I've played with TressFX on and off and prefer it off. I try to turn off motion blur and DOF effects when given the chance; it's just my personal preference - I don't like them, as I think they break the immersion.

I'm forever hopeful that when/if there's a 1080P 120Hz 3D projector I can afford, NVIDIA+Intel/AMD will have the hardware to drive it, and NVIDIA or _someone_ will have the drivers/patches/fixes to play games in S3D on it.

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

#44
Posted 11/15/2013 01:27 AM   
Hehe, everything's so personal and subjective. I find *lack* of motion blur to be unrealistic and immersion-breaking. The lack of it in id's game Rage turned me off so much that I quit the game about a year ago and still haven't returned. It even prompted me to write an annoyed blog post about it (http://www.volnaiskra.com/2012/03/why-more-games-should-support-motion.html) :D

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

#45
Posted 11/15/2013 02:30 AM   