In early unofficial tests the 980 Ti is about 2-5% slower than the Titan X, and the OC version is up to 10% faster. More or less.
In 3DMark Fire Strike Performance the Titan X scored 17396 vs 16978 for the 980 Ti. The OC version scored 20021.
More about it here:
http://videocardz.com/55566/nvidia-geforce-gtx-980-ti-performance-benchmarks
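Just for reference, here is the arithmetic those Fire Strike scores imply, as a quick Python sketch (the three scores are the ones quoted above, nothing else is assumed):

[code]
# Percentage gaps implied by the quoted Fire Strike scores.
titan_x  = 17396   # Titan X, stock
ti_stock = 16978   # 980 Ti, stock (leaked result)
ti_oc    = 20021   # 980 Ti, overclocked (leaked result)

print(f"980 Ti stock vs Titan X: {100 * (ti_stock / titan_x - 1):+.1f}%")  # ~-2.4%
print(f"980 Ti OC vs Titan X:    {100 * (ti_oc / titan_x - 1):+.1f}%")     # ~+15.1%
[/code]

So by these particular scores the stock card trails by roughly 2.5%, and that one OC result lands about 15% ahead of the stock Titan X.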
Doesn't seem likely to hold up. It makes no sense that a video card on the same architecture with 10% more cores, clocked at the same speed, would perform the same.
It's an interesting thought exercise, though: what price would make you switch? At $800, no way, go for the top dog. What about $700? Hmm... How far apart in performance, really? How about $600?
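To put a number on that: the published core counts are 3072 (Titan X) and 2816 (980 Ti), and the reference clocks are roughly the same on both, so a back-of-envelope throughput comparison looks like this (a sketch only; real games rarely scale linearly with shader count):

[code]
# Back-of-envelope FP32 throughput at matched clocks.
# Core counts are the published specs; 1000 MHz is the shared reference base clock.
CLOCK_MHZ = 1000

def gflops(cores, clock_mhz=CLOCK_MHZ):
    return 2 * cores * clock_mhz / 1000   # 2 FLOPs per core per cycle (FMA)

titan_x   = gflops(3072)   # ~6144 GFLOPS
gtx_980ti = gflops(2816)   # ~5632 GFLOPS
print(f"Titan X theoretical advantage: {100 * (titan_x / gtx_980ti - 1):.1f}%")  # ~9.1%
[/code]

So on paper the bigger chip should be roughly 9% ahead at the same clock, which is why a ~2% gap in a single synthetic benchmark looks fishy for a leak.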
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
[quote=""]In early unofficial tests 980ti is about 2-5% slower then Titan X and OC version is up to 10% faster. More or less.
In 3D Mark FireStrike Performance Titan X got 17396 vs 16978 on 980ti. OC version got 20021.
More about it in here:
http://videocardz.com/55566/nvidia-geforce-gtx-980-ti-performance-benchmarks[/quote]
This is not true; my stock Titan X is 10 to 12% faster than the leaked benchmarks on the Guru3D site,
and the specs show this, with 10% fewer CUDA cores on the 980 Ti.
[quote=""][quote=""]In early unofficial tests 980ti is about 2-5% slower then Titan X and OC version is up to 10% faster. More or less.
In 3D Mark FireStrike Performance Titan X got 17396 vs 16978 on 980ti. OC version got 20021.
More about it in here:
http://videocardz.com/55566/nvidia-geforce-gtx-980-ti-performance-benchmarks[/quote]
This is not true my stock titan x is 10 to 12 % faster then the leak benchmarks on Guru3D site.
and specs show this with 10 % less cuda cores on the 980Ti.[/quote]
Sometimes "unofficial" means: "we have no idea what we writing about but we might be really close after all".
I am only hoping that KINGPIN won`t be more expensive then Titan X and that will end up as a win for me. Would be so happy that i have cancel my pre-order.
I can't help but wonder if waiting for Pascal may be more prudent, provided Nvidia doesn't gimp the Pascal GPUs at launch. The die shrink alone will bring a compelling performance boost.
SLI will be a thing of the past; instead, NVLink will scale the workload between multiple GPUs.
The new NVLink architecture, coupled with a large amount of GPU memory, could be a game changer. Nvidia will increase the GPU workload, significantly reducing the inherent overhead of passing information back and forth to the CPU.
https://www.youtube.com/watch?v=RBf8FLS6q8E
[quote=""]
Yes, the lesson here, far from what Nvidia wants it to be (they want you to pay them $1k every year for a new architecture), is to skip generations. If Nvidia is going to completely drop performance improvements for the previous architecture in an attempt to compel us to upgrade to the new architecture every year (e.g. Nvidia seems to have completely stopped improving Kepler performance in recent driver releases, while Maxwell simply gets faster and faster, with the GTX 970 now officially faster than the 780 Ti), then to me that motivates me not to upgrade at all.
[/quote]
It's like this:
- Nvidia blatantly cheats and lies: they are not giving the Kepler (and Fermi) architectures the driver changes they [b]ALREADY HAVE[/b] done, ready and waiting. Remember BF3? Suddenly the GTX 570 scored 80% better results than the GTX 470. I'd like to hear Nvidia claim that was because the optimizations were architecture-specific. :/
A game ships, it runs poorly, Nvidia looks into it, sees where the performance is wasted, and does what it can to improve it. Look here:
http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019
- The improvements are not Maxwell-exclusive. No. They are easily transferable to Fermi and Kepler cards. Otherwise, you'd see massive improvements for Maxwell in older games.
- No one can prove that this is the case, so no one can sue Nvidia for unfair business practices.
- Meanwhile, in the last two big games (Project Cars and The Witcher 3), Kepler cards' performance is artificially lowered. People already bought 9xx cards because of the reviews and benchmarks.
- After the damage is done and the money unfairly taken by Nvidia (I call buying a GTX 960 as a card faster than a GTX 770 being robbed), not many reviews and tests will update their scores for anyone visiting those pages in the future. 90% of people who wanted a new card for The Witcher 3 or Project Cars already fell into the trap. Many 970s and 960s were sold. Many people will now think it's wise to upgrade from a GTX 780 to a GTX 970, or from a GTX 760 to a GTX 960. And it's not.
- They even have the nerve to say "we have FOUND the issue with Keplers". Really? Do they think we're idiots?
Nvidia is no different from the worst political scum walking this planet.
"We've found an issue with Kepler cards..." - and I thought there was no sliding further below "we admit to making a mistake with that 970. We're sorry... that we released any specs at all. We officially never posted the details, so we're good. Next time we'll try to hide the specs even deeper than what we're doing with laptop GPUs. We apologize for a non-perfect business decision, and we won't make the mistake of sharing specs again."
Yeah. Instead of saying "sorry" to the people who bought 970s, they only released a statement to investors saying "we're sorry we got caught cheating. We'll be better cheaters next time, no worries, our clients are idiots, just relax and watch us master the art of cheating."
D-Man11:
Sorry to break your dreams, man, but that whole NVLink thing is not for normal PCs. It's for servers, for compute. You won't see it in PCs used for gaming, because it's another bus. It would require Intel to implement it in every new motherboard, and last time I checked, Nvidia and Intel were not on the best of terms, which ended with Nvidia no longer producing chipsets for Intel motherboards.
We really need a GAMING-oriented bus in PCs, since PCI-E is like using a van on a race track. You can move a lot of data, but the latency and other issues make important improvements impossible, for example GPU-aided physics. Because you need to wait for the data to travel between the GPU, the CPU and RAM, and back, we have PCs with GPUs 5x more powerful than the PS4's and yet no better physics than on the consoles, and when we do get some minor improvements (The Witcher 3 and HairWorks, for example), even Titan Xs experience significant framerate drops.
I'm just afraid NVLink is not what we (gamers) are waiting for.
I'd like to see a revolution soon, for example doing away with CPUs and system RAM entirely. Imagine this: HBM cards could have 16 or even 32 GB of memory in 2016. How about adding an ARM CPU to the GPU die and a USB port on the graphics card, and letting the game run entirely on the card, with the CPU and RAM completely switched off?
This would require so much effort that it's not possible in the near term, I'm afraid, since everyone ignores PC gaming, and more to the point, no one understands the potential.
We would need a new CPU architecture designed from scratch for games and implemented into the GPU. We would need new motherboard and PC architecture standards. We would need a totally new way of working in OSes, APIs, drivers and game engine code.
It could still be seen as a wise decision, especially with a good 10-20 years of VR ahead of us eating every bit of speed it can get (a 16K x 16K display is roughly the point where VR looks similar to real life in terms of resolution and FOV). Physics calculations could be 1000x better than what we have now, and increasing that by another 10x would still bring serious advantages and progress, especially in VR, where immersion breaks so easily even today, with not-so-rich graphics, for example when you pick up an object with your hand or throw it. Virtual worlds are already easy to create on today's hardware, in terms of successfully tricking our brains that this is some other world, but interacting with them will still break the immersion in 2020 and 2030 if the industry won't take drastic measures.
I don't have a link, but I once saw an article that showed the numbers. The latency of PCI-E blew my mind. It's really, really bad.
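Just to give a feel for it, here is a rough back-of-envelope sketch; the ~10 microsecond per-transfer overhead and ~12 GB/s usable bandwidth are generic ballpark assumptions for PCIe 3.0 x16, not measurements from that article:

[code]
# Rough cost of ping-ponging small physics results between CPU and GPU.
# Assumed (order of magnitude only): ~10 us fixed overhead per transfer,
# ~12 GB/s usable bandwidth on PCIe 3.0 x16.
OVERHEAD_US   = 10.0
BANDWIDTH_GBS = 12.0
FRAME_US      = 1e6 / 60   # ~16667 us budget per frame at 60 fps

def round_trip_us(size_bytes):
    one_way = OVERHEAD_US + size_bytes / (BANDWIDTH_GBS * 1e9) * 1e6
    return 2 * one_way

# A solver that bounces 10 KB of results back to the CPU every iteration
# pays almost pure latency, not bandwidth.
iterations = 100
cost_us = iterations * round_trip_us(10 * 1024)
print(f"{iterations} dependent round trips: ~{cost_us / 1000:.1f} ms "
      f"({100 * cost_us / FRAME_US:.0f}% of a 60 fps frame)")
[/code]

That is the latency doing the damage: the payload itself is almost free.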
Either my (probably stupid) idea, or we'd need to give CPUs some dedicated physics core(s). And that, again, would require Intel or AMD taking high-end gaming seriously.
In the meantime, Intel has stopped caring about progress for gaming (Skylake seems to be a huge disappointment to those comparing a CPU OC'ed on air vs. a CPU OC'ed on air) and shows no process on its roadmap optimized for high single-thread performance (same as GloFo and Samsung, BTW). AMD has slept for a year in GPUs and has gone crazy positioning the 390X as a Titan X competitor (a new name has already leaked - it will be called Radeon Fury), plus they shot the PC gaming industry in the foot by co-releasing the crAPU-equipped consoles, which cannot outperform 6-year-old PCs even now, at the start of their life cycle.
And then there's Nvidia, so focused on PC gaming that:
- It sells $499-range cards for $1000, and what should be called a GTX 870 or GTX 970 is being called a 980 Ti and sold not for $399, but for $799.
- It totally pulls the plug on 3D; even the only 3D Vision certified game of 2015, GTA V, has issues that slow performance by 75 percent.
- It ends the "look how focused we are on PC gaming" show by releasing some Android console wannabe called Shield. And they even work on some AndroidWorks, something nobody needs when Vulkan is already done, and something that needs 10x more manpower than 3D Vision support.
Yeah. PC gaming is dying. People don't buy as many graphics cards as they used to. Surely it's just a matter of fashion and people not needing PCs. It's even confirmed by analysis: by asking a group of people "how much do you want better gaming" when they have never seen any and lack the knowledge to imagine it and answer the question reasonably. The surveys confirmed that no one needs PC gaming, and it's just a coincidence that Sony's management didn't think it wise to enter the market with a 3D-capable console in 1993/94, until some smart person took a prototype and showed them how much potential it had.
.......................................
(I wrote more, but the sarcasm melted the letters here...)
For everybody complaining about drivers not working very well on earlier GPU generations: Nvidia released a statement that they are working on a major update for all previous generations of GPUs which will give them a performance increase, or something like that.
I can't find where I read about it, because it was part of a bigger topic, so please don't ask me for details.
I agree. Nvidia is skating on thin ice.
Lack of 3D Vision support
+
The whole Kepler fiasco...
If I hadn't invested a couple of hundred in 3D Vision 2, I would switch to AMD at this point.
However, AMD are not nearly as good as Nvidia with driver releases and day-1 drivers for games.
But this is the last straw. How could they sabotage Kepler just to increase sales? This should be illegal, and consumers should not let them get away with such practices.
[quote=""]I agree. Nvidia is running on thin ice.
Lack of 3D Vision support
+
The whole Kepler fiasco...
If I didn't invest a couple of hundred in 3D Vision 2, I would switch to AMD at this point.
However, AMD are not nearly as good as Nvidia with driver releases and day 1 drivers for games.
But that is the last straw. How could they sabotage Keplar just to increase sales? this should be illegal and consumers should not let them get away with such practices.[/quote]
I looked at the supposed Kepler crippling, and in my opinion that is total bullshit, made up by people who are angry at NVidia for other reasons.
I'm not a huge fan of their practices either, but I like to keep my anger reality-based. Mad that they're expensive? Sure. Mad that they faked the 4 GB on the 970? Damn right.
The closest thing I saw to a benchmark that was even remotely reasonable was a complaint that a 780 Ti didn't run faster than a 960. In one game.
People don't seem to want to do proper benchmarks. It's very much a shame that AnandTech lost their mojo.
Here's a question for all the haters- how do you know this isn't an AMD marketing campaign?
If you think I'm wrong, please link your proof of crippled cards. Benchmarks, not forums of people bitching.
Well... Kepler did get a tiny bit of downgrading ;)) Unfortunately I don't have the links for 3DMark + the other games that I tested...
HOWEVER!!!! There are certain games where performance IS exactly the SAME or even better than before!!!!
I am always getting full GPU usage, and even so the FPS numbers are high... I ran some tests between 350.12 and 340.64, and in some games the newer driver behaves worse, but by 5% at most...
Now, you can believe me or not, but if you don't, you can run the tests yourself :) So... the "gimping" is there and not there...
The whole thing hit the roof when SLI didn't behave properly in The Witcher 3, IMO. In Surround, SLI is basically NON-EXISTENT!!! However, on a single monitor I get 99% usage on both GPUs and DOUBLE the FPS of a single card, so SLI IS WORKING (check the Witcher 3 thread, page 44 I believe, and you will see my results). On a single GPU I get 60-80 fps on Ultra @ 1680x1050. I can't really say something is not working properly... Even on the 880M I get around 35 fps on Ultra (and that's a Kepler mobile GPU)...
So, Kepler is gimped and not really... (More likely a FREAKING bug, since Nvidia seems to be good at those lately... BTW, Surround is also only semi-working, and they HAVE NO REASON to "gimp" that feature in any way...)
This reminds me of the GTX 500 series when the GTX 700 series was released :)) Fermi got the same treatment since they focused only on Kepler... In the end they "resurrected" the performance on Fermi, but not quite 100%. Still, Fermi behaves very well :) for that generation...
The problem is the UNIFIED driver stuff... One driver to "rule them all". I still believe they should release drivers PER GPU generation and TECH (like in the old days). Once a GPU is at the end of its development, just RELEASE GAME PROFILES rather than the whole driver pack! It can even be done in practice (all profiles are in the DRS folder, btw :) ).
This way you don't introduce unwanted bugs and the stuff that's happening now...
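If anyone wants to repeat that kind of before/after driver test, the comparison itself is trivial once you log the average FPS per game on each driver. A minimal sketch; the game names and numbers below are placeholders, not my results:

[code]
# Compare average FPS logged on two driver versions (placeholder numbers).
fps_340_64 = {"Tomb Raider": 92.0, "Crysis 3": 61.0, "Witcher 3": 48.0}
fps_350_12 = {"Tomb Raider": 91.0, "Crysis 3": 58.5, "Witcher 3": 47.5}

for game, old in fps_340_64.items():
    new = fps_350_12[game]
    delta = 100 * (new - old) / old
    print(f"{game:12s}: {old:5.1f} -> {new:5.1f} fps ({delta:+.1f}%)")
[/code]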
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
So let me see if I have this straight: people are losing their minds with rage because of a 5% dip in performance in some games, and a bad result in a single brand-new and buggy game? That constitutes gimping nowadays?
Here's what I see:
1) NVidia releases Maxwell cards: nonstop bitching ensues because they aren't as fast as promised, and not enough better than last gen.
2) NVidia improves drivers for Maxwell, fixing bugs and improving performance: nonstop bitching ensues because the new products are better than the old ones.
What's the common thread here? That's right, nonstop bitching.
Time will tell, but I am leaning toward getting a 980 Ti at this point. Titans are just too expensive here, around 2k, with the possibility of adding a second one further down the road depending on how SLI support pans out. (I think Nvidia's driver team has really taken a dive of late.)
If it is indeed 95% of the speed of a Titan, and with DX12 possibly making VRAM stackable in the future, it seems like the best option for me at this point.
Then again, I probably should wait for next gen, but who knows how 3D is going to behave under the new architecture, and honestly it's the only reason I'm still with Nvidia. Unfortunately, at this point my 780 SLI setup is beginning to feel the strain, especially with the state of SLI atm.
i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
EVGA 1080TI SLI
Samsung SSD 840Pro
ASUS Z97-WS
3D Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)
Kolreth (or anyone else), what's the performance like going from a GTX 690 to a Titan X?
I'm playing at 4K and can't quite keep 60 fps at Ultra settings in Battlefield 4 (no AA).
It always stutters as I die (and the kill cam turns people orange), and it's been driving me wild since I bought my 4K monitor last August.
I really want to buy a 980 Ti or Titan X, but I'm worried that it won't be much of an upgrade.
Please advise, as there don't seem to be any performance benchmarks comparing the Titan X with a 690 at 4K.
I have my 690 overclocked by 100 MHz on the core and 400 MHz on the memory, and it plays most games maxed out at 60 fps in 4K.
Won't DirectX 12 make the GTX 690 a 4 GB card with a 512-bit bus in theory? That should be a monster, right?
Is it worth waiting to find out, do you think?
Thanks.
The Titan X is almost equal to a Titan Z in benchmarks; the Titan X is roughly about 10% slower.
The Titan X is an ideal solution for 1080p 3D at 60 frames with all the eye candy turned on. To play at 4K in 3D, SLI would be required.
But to answer your question, the Titan X should be about 2.5x faster than the 690 at 4K due to the larger bus and frame buffer.
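For what it's worth, the bus part can be put into numbers, if I recall the reference memory specs correctly (256-bit at 6 Gbps per GPU on the 690, 384-bit at 7 Gbps on the Titan X); whether that turns into 2.5x in actual games is another matter:

[code]
# Memory bandwidth from the reference memory specs.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

gtx_690_per_gpu = bandwidth_gbs(256, 6.0)   # ~192 GB/s, with 2 GB usable per GPU
titan_x         = bandwidth_gbs(384, 7.0)   # ~336 GB/s, with a 12 GB frame buffer

print(f"GTX 690 (per GPU): {gtx_690_per_gpu:.0f} GB/s")
print(f"Titan X:           {titan_x:.0f} GB/s ({titan_x / gtx_690_per_gpu:.2f}x)")
[/code]

Bandwidth alone is about 1.75x per GPU; the bigger win at 4K is the 12 GB frame buffer versus the 690's effective 2 GB, plus not depending on SLI scaling.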
2.5 times faster? Really?
I hope you are right, as I can't imagine getting 140-150 fps in Battlefield 4 at 4K with no AA.
SLI works really well in Battlefield 4, but I'm so tempted to go AMD as I play mostly AMD-sponsored/endorsed/optimised games these days: Battlefield 4, Dirt Rally, and the new Battlefront with Atmos will be AMD too.
Do the Titan X and 980 Ti support hardware decoding of 4K? Main10 support, I believe it's called? I know the 970 and 980 don't, but the 960 does!?!
I honestly can't believe it will be 2.5 times faster but I would love to be wrong!