I also have a Panasonic ptae6000e 3D projector and a 127" screen I use for films and 3D.
It doesn't really take much to run 3D, though, compared to 4K. I only asked because someone said they upgraded from a 690 to a Titan X and I wanted to know about their experience.
Sorry if I hijacked the thread.
Sometimes I play at 2.5K (ROG Swift 1440p) and I have never exceeded 2GB of VRAM on my 680 SLI, including in The Witcher 3, Shadow of Mordor and GTA V. I don't know how people manage to go above 4GB.
Anyway, since I was on 3D Surround with two 680s (which is close to, or even more demanding than, 4K) and was perfectly fine, I think I'll be way better off with one GTX 980 Ti, going by the numbers I see for the Titan X; I can assume the "Ti" won't fall far below that.
I'll probably end up with one 980 KINGPIN just because I like ultimate suffering.
I would also recommend the 980 Kingpin, which can easily go above a Titan X, but you might not be very happy with all that overclocking, so I'll skip that.
http://www.guru3d.com/news-story/evga-leaks-geforce-gtx-980-ti-sku-lineup-and-prices.html
If the rumored price of the 980 Ti is true, $800 to $825 for the stock card, then the Kingpin version is going to be another $50 to $75 on top of that. That brings the 980 Ti Kingpin close to $900, and there is no way that, with 10% fewer cores and half the memory, it is going to be faster than a Titan X.
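Spelling that math out (rumored figures from the linked article only, nothing confirmed):
[code]
# Rough estimate of where a 980 Ti Kingpin would land, using the rumored numbers.
stock_980ti = (800, 825)      # rumored stock 980 Ti price range, USD
kingpin_premium = (50, 75)    # assumed Kingpin premium over stock, USD

low = stock_980ti[0] + kingpin_premium[0]
high = stock_980ti[1] + kingpin_premium[1]
print(f"Estimated 980 Ti Kingpin price: ${low}-${high}")   # -> $850-$900
[/code]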
Gigabyte Z370 Gaming 7 - 32GB RAM - i9-9900K - GigaByte Aorus Extreme Gaming 2080TI (single) - Game Blaster Z - Windows 10 X64 build #17763.195 - Define R6 Blackout Case - Corsair H110i GTX - SanDisk 1TB (OS) - SanDisk 2TB SSD (Games) - Seagate EXOs 8 and 12 TB drives - Samsung UN46c7000 HD TV - Samsung UN55HU9000 UHD TV - Currently using ACER PASSIVE EDID override on 3D TVs - LG 55
Careful with that idea that DX12 is going to have unified memory. The future doesn't happen until it happens, and buying on expectation typically leads to disappointment. Microsoft may decide it's like Longhorn and pull the plug, you can never tell.
Depending on where the price falls, I'd be interested in either a 980 Ti or a Titan X as a single-card solution. I'm also getting pretty fed up with bad SLI profiles. I like mrorange's idea of thinking through all the numbers, but then deciding with your heart.
The other factor that holds me back is that I really, really do not want to be stuck using only recent drivers. Current driver development has been just terrible, and I go back to old drivers all the time.
But, only you can decide what is right for you. It's your money, your games, and your decision.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 - Latest 3Dmigoto Release - Bo3b's School for ShaderHackers
@bo3b,
well said.
And thanks for clearing up the Kepler situation. I believed what I was reading online. Although I have 770 x2 SLI and I don't get over 60 FPS, which is still kind of strange, so I assumed it to be true.
Especially since Nvidia themselves have admitted to "finding an issue" with Kepler cards and are promising a fix in the near future.
So how could it be BS if Nvidia themselves acknowledge it?
@GibsonRed
YouTube has quite a few Titan X 4K reviews. If you can get past the first few minutes, "The Weirdest Titan X Review Ever! - Live 4K Performance Testing" isn't too bad.
[quote=""]@bo3b,
well said.
And thanks for clearing up the Kepler situation. I believed what I was reading online. Although I have 770 x2 SLI and I don't get over 60 FPS, which is still kind of strange, so I assumed it to be true.
Especially since Nvidia themselves have admitted to "finding an issue" with Kepler cards and are promising a fix in the near future.
So how could it be BS if Nvidia themselves acknowledge it?[/quote]
It's the difference between accident and intent. If it was intentional, then of course that would be terrible. If it was an accident, does it seem right to fill pages and pages of forums with hate speech? The part that is bullshit is claiming that it was a conspiracy to drive people to upgrade. It is a problem, but only for a couple of games.
Having worked at big-ass corporations like NVidia, on software that is similarly complex to a driver, I have to assume that it was an accident, a bug. It happens. I did performance testing on the Apple OS for 5 years, and the entire point of our testing and lab was to catch performance bugs introduced by well-meaning engineers.
That 970 video card debacle does make us all a lot less forgiving though. That was clearly a scumbag move to get the number to 4G. An extra bank of slow-ass RAM doesn't help anyone except marketing weasels.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 - Latest 3Dmigoto Release - Bo3b's School for ShaderHackers
[quote=""]
IMHO, I personally believe the former explanation; Nvidia hasn't simply figured out how to optimize Maxwell all of a sudden, they are intentionally improving Maxwell performance relative to the previous architecture to compel the consumer base to upgrade. Architecturally speaking (same process size, more CUDA cores, more memory bandwidth, etc.), a 780 Ti should rip a 970's face off, yet it is now a full 33% slower than the 970 in The Witcher 3 (32 FPS vs. 42 FPS at 2560x1440).[/quote]
I'll repeat it for the third time:
If the difference was thanks to Maxwell optimizations in the drivers, we would see a difference in older games, the ones that showed the 960 at around GTX 760 performance and the GTX 780 not far behind the GTX 970. You know, the ones that aren't important for marketing anymore.
If, for example, Nvidia improved texturing speed, it should help with texturing in a two-year-old game, shouldn't it?
I ask all you people who say it's "funny bitching" to wait a little and see for yourselves. There will be [b]NO[/b] improvements, because it's not optimization, unless you're talking about optimization of the milking process, in which case yes, Nvidia optimizes that continuously. I don't have the possibility to do a 780 vs. 970 test similar to the ones that were posted on the web when the Maxwell cards showed up, but I really hope someone does and proves it to you.
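If someone does run that comparison, tabulating it is trivial. A minimal sketch in Python - the games and FPS numbers below are only placeholders to show the layout, not real measurements:
[code]
# Compare the same games on the same card under an old and a new driver.
# Replace the placeholder FPS values with your own measurements.
old_driver = {"Metro: Last Light": 62.0, "Crysis 3": 48.0, "Tomb Raider": 71.0}
new_driver = {"Metro: Last Light": 61.5, "Crysis 3": 47.0, "Tomb Raider": 70.5}

print(f"{'Game':<20}{'old':>8}{'new':>8}{'change':>9}")
for game, fps_old in old_driver.items():
    fps_new = new_driver[game]
    change = (fps_new - fps_old) / fps_old * 100.0
    print(f"{game:<20}{fps_old:>8.1f}{fps_new:>8.1f}{change:>8.1f}%")
[/code]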
Let's hope DX12 will help end such practices.
In the meantime, to the people who say "get over it" and "you're bitching": enjoy your 9xx series cards while you can. You have around one year left, until Pascal arrives and Nvidia cripples the 9xx series. Go praise them for "optimizing" Pascal then. Go praise them for artificially crippling the cards you bought for $1000...
After a year...
Edit: I forgot to mention the 680 vs. 760 situation - the same thing happened there. Please explain how the Kepler architecture is newer than the Kepler architecture. It happened with the Fermi 4xx and 5xx series, with the Kepler 6xx and 7xx series, it is happening now, and it will repeat in the future.
I'm not "bitching" because I have nothing better to do with my time. I want the people I care about to be aware, and I like this community a lot. Besides, it's good to be aware of such things if we are all 3DV fans, because as such we have a tendency to tell our friends to buy Nvidia cards. We might get in trouble when such a friend realizes what we advised them to buy.
I hate many things that AMD does. It's the same issue - a big corporation knowing nothing about its product's users and their needs, doing only the things that influence its share value.
So I'm far from "Nvidia is bad, let's go to AMD", but that's no reason to shut up and not react to obvious shenanigans.
[quote=""]I'll repeat it for the third time:
If the difference was thanks to Maxwell optimizations in the drivers, we would see a difference in older games, the ones that showed the 960 at around GTX 760 performance and the GTX 780 not far behind the GTX 970.
I ask all you people who say it's "funny bitching" to wait a little and see for yourselves. There will be [b]NO[/b] improvements, because it's not optimization, unless you're talking about optimization of the milking process, in which case yes, Nvidia optimizes that continuously.[/quote]
No. It's not that simple. You are making the assumption that all games run similarly, and that couldn't be farther from the truth. Some games are CPU bound, some are GPU bound. Some need lots of CUDA cores, some need lots of VRAM, some need lots of bandwidth. It completely depends upon the workload in action, and it's far too simplistic to say it would speed up all older games.
What is much more likely is that they spent all of their Witcher 3 time making sure that the latest cards were working correctly and not crashing. The older cards would be assumed to be already fairly well debugged and optimized, and get less attention.
For performance bugs in particular, fixing one path does not always fix others. So the fact that Witcher 3 is tessellation heavy could easily mean a buggy path on older cards. It could easily be that games that are heavy on tessellation are the ones that show the problem.
Speaking of which, why does nobody talk about GTA5 being slow on Kepler? What about the fact that Witcher 3 ran dog slow on AMD cards? How did NVidia make that happen? Data that conflicts with what you want to believe is inconvenient, I know.
Edit for your edit: No, it's the same scenario. It's not shenanigans; that's how software development is done. The reason it improved on the same architecture is because some clever engineer spent a lot of time figuring out a bug that was bottlenecking the cards. It's EXACTLY the same scenario, true.
I really don't understand why people are so concerned about this. If the card works for you, buy it; if it doesn't, don't. There is no guarantee that anything you buy will work in the future. We all thought 2G cards were going to be OK, and now that's not true. Do you feel cheated because of that?
Edit2: Rather than make yet another off-topic post: for the people here who are sane, please accept my most sincere apologies for poking the trolls. One bad part of being an engineer is that I keep expecting people to care about facts. This will be my last post on this topic.
Edit3: I'm going to respond here instead, because it just constantly derails the thread to complain about stuff that no one is going to take any action upon.
[quote]
Different games are bottlenecked differently? Then how about repeating the test on the same 5-10 games? I think that shoots down your argument.
The next one doesn't even need a bullet. How can you say that an Nvidia employee worked on optimizations and older hardware is naturally a lower priority, when the GTX 7xx and GTX 6xx are the same architecture? I get the feeling all of my arguments are blocked before they reach your brain ;).
I've said it 3 times now: Fermi vs. slightly improved Fermi (4xx vs. 5xx), Kepler vs. the same Kepler. Before that, G92 vs. G92 (and the same situation). People don't care. Nvidia takes away 20% of their hardware's performance? So what? "We still have 30fps and console devs say that's enough, so all is fine".
Besides, your point of view would be understandable if we were talking about 5 years, but not 9 months, and I'll repeat - 9 months ago the most expensive cards from Nvidia, the best and newest, were the 780/780 Ti/Titans.
If you say it's normal to cripple them because that's how a corporation works, then I don't get you, seriously.
Were any of the people who spent $500-700 on GPUs that are treated as obsolete within a year informed that they were buying a card for just a year? Some of them bought the expensive card to be "set" for a longer time, to comfortably play games at high settings for the next 2-3 years.[/quote]
Sure, of course we'd redo the tests on the same 5-10 games. You are saying that the latest drivers make all older games run slower. Where is your proof?
Regarding the supposed 7xx-to-6xx gimping: did you even read what I said? Are the cards *identical*? Architecture is one thing. You think how wide the bus is doesn't matter? How much RAM they have?
Your arguments are not blocked before they reach my brain; they just don't have any proof behind them. I'm more than willing to be proved wrong, but you absolutely, categorically MUST provide benchmarks. Just your word is not sufficient, because I don't know you, and frankly my arguments seem to be unable to reach you. Your personal opinion appears to be more valuable than my 30 years of experience. The burden of proof is on you - you are making the outrageous claims.
Regarding the 9-month life span of a card: this also makes no sense. You buy the card for what it can do at that time, and hopefully it will last, but you have no idea. You did not address my 2G argument, GTA5, or AMD running slow on the same game. Why not? Future-proofing is not possible. If you can't afford to upgrade every year, then geez, don't. It's not like you are missing out on anything. This isn't NVidia shafting you; this is you having completely unrealistic expectations.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 - Latest 3Dmigoto Release - Bo3b's School for ShaderHackers
[quote=""]I would also recommend the 980 Kingpin, which can easily go above a Titan X, but you might not be very happy with all that overclocking, so I'll skip that.
http://www.guru3d.com/news-story/evga-leaks-geforce-gtx-980-ti-sku-lineup-and-prices.html
If the rumored price of the 980 Ti is true, $800 to $825 for the stock card, then the Kingpin version is going to be another $50 to $75 on top of that. That brings the 980 Ti Kingpin close to $900, and there is no way that, with 10% fewer cores and half the memory, it is going to be faster than a Titan X.[/quote]
I meant the present Kingpin, which is already faster than a Titan X and total overkill when smart overclocking is used. All of this is based on overclocked cards - don't get me wrong. Anyway, I think a card which is £200 cheaper and only 5% slower, with the potential to be 15% faster, deserves to be called KINGPIN :)
bo3b
Different games are bottlenecked differently? Then how about repeating the test on the same 5-10 games? I think that shoots down your argument.
The next one doesn't even need a bullet. How can you say that an Nvidia employee worked on optimizations and older hardware is naturally a lower priority, when the GTX 7xx and GTX 6xx are [u]the same architecture[/u]? I get the feeling all of my arguments are blocked before they reach your brain ;).
I've said it 3 times now: Fermi vs. slightly improved Fermi (4xx vs. 5xx), Kepler vs. the same Kepler. Before that, G92 vs. G92 (and the same situation). People don't care. Nvidia takes away 20% of their hardware's performance? So what? "We still have 30fps and console devs say that's enough, so all is fine".
Besides, your point of view would be understandable if we were talking about 5 years, but not 9 months, and I'll repeat - 9 months ago the most expensive cards from Nvidia, the best and newest, were the 780/780 Ti/Titans.
If you say it's normal to cripple them because that's how a corporation works, then I don't get you, seriously.
Were any of the people who spent $500-700 on GPUs that are treated as obsolete within a year informed that they were buying a card for just a year? Some of them bought the expensive card to be "set" for a longer time, to comfortably play games at high settings for the next 2-3 years.
vulcan78
I get you, brother. Not everyone earns $2000/month. A Titan X would require spending ALL the money that is left after paying the bills - I mean ALL, not a single beer, pizza, cinema ticket or book, nothing - for a year, just to be able to buy it. And that's assuming we're talking about the better-off people in my town. The majority of people in my town would need a 2-3 year loan on top of that to be able to pay for it, and I hold the record in my housing estate, because I bought a 6800 GT for about $400. It's viewed by my friends as "insane!" to this day.
That has nothing to do with this discussion, but I'd like the rich guys here to understand that not every country is rich, and there are gaming enthusiasts who have to work harder for a GTX 960 than others do for SLI Titan X's. And then Nvidia turns their 780s into 760s and their 770s into 660s, because they, erm... "had a bug with the Kepler series". I wouldn't call the outrage "bitching". Definitely not.
[quote=""]"Bitching"? That's the first time I've heard this word.
What does it mean - isn't it something like when someone makes a counter-argument which you can't accept?[/quote]
It's when people are complaining for the sake of complaining, and aren't interested in the actual answer. Like whining. There isn't a damn thing any of us can do about it, so that puts it in the category of 'bitching'.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 - Latest 3Dmigoto Release - Bo3b's School for ShaderHackers
OK, OK... I think we should just drop the discussion about 700 vs. 900 performance.
The problem is that no one is right and no one is wrong :) As a consumer I don't like being cheated. As an engineer I understand how this might have happened and why. I am not an economist, so I can't tell you whether it is feasible to pull a stunt like this, but history shows us that it has happened before, so we assume it is also happening now!
So, I love my 555M GPU, my 880M GPU and my 2x 780 Ti GPUs (for which I paid £1,500 last year in January). While Witcher 3 doesn't behave properly, I have no issues in any other games in 3D Surround, where I get proper SLI scaling and FPS. For example, in Dragon Age: Inquisition I get 30-60 FPS in 3D Surround (depending on the scene, etc.). I rolled back (to test it) to 340.52 (the first driver to support DA:I) and I noticed I was getting 20-40 FPS with that driver. Thus, the current driver works better than that one! The same will happen with Witcher 3 in the future!
So... I am going to tell you a little trick:
- On my dad's PC (with my old GTX 590) I am using the 314.22 driver (after which performance started to degrade).
- From the newest drivers I COPY the DRS folder (all the bin files that contain the profiles + SLI bits and so on).
- Result: I get all the profiles + SLI + 3D Vision flags working on an older driver (which I know behaves properly for Fermi - even if some tweaks at the driver level are not implemented in that driver version).
Even now I am using 347.88 (since Surround is bugged after that driver) + the DRS folder from 352.86, so I can play Witcher 3 in 3D Vision (on one monitor) + GTA V (3D Surround).
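In script form the whole trick is just a file copy. Here is a minimal sketch - the Drs folder under %ProgramData% is where the profile database normally lives, the source path is only an example, and you should run it elevated and keep the backup so you can roll back:
[code]
# Sketch of the DRS-folder trick above; adjust the paths to your own install.
import os
import shutil

DRS_DIR = os.path.join(os.environ["ProgramData"], "NVIDIA Corporation", "Drs")
NEW_DRS = r"D:\drs-from-352.86"     # example: .bin files taken from a newer driver
BACKUP = DRS_DIR + ".backup"

# Back up the old profile database once.
if not os.path.exists(BACKUP):
    shutil.copytree(DRS_DIR, BACKUP)

# Overwrite the profile/SLI-bit/3D Vision .bin files with the newer ones.
for name in os.listdir(NEW_DRS):
    if name.lower().endswith(".bin"):
        shutil.copy2(os.path.join(NEW_DRS, name), os.path.join(DRS_DIR, name))
        print("copied", name)
[/code]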
SOOO, it is normal when a new GPU series is released to actively develop for that GPU while all the others take second place. However, you still get fixes for older series if bugs are found.
Like I said before, the current DRIVER model is wrong: they should release a basic driver that enables things for your GPU, plus PROFILES for games (with flags + settings), rather than the unified shiet we are getting now. I bet a LOT of these issues would go away then ;))
Oh, and driver development - as software - is a pain in the ass and it's very easy to make mistakes... especially when you are trying to make a jack-of-all-trades piece of software :(
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
@Helixfax;
Very good points. Yes, apparently this is normal practice.
So let me ask you, as a consumer and a loyal customer of Nvidia over many years: is this something you think is acceptable? Are you willing to upgrade every 1-2 years just because your previous card, which you paid top dollar for, is no longer supported?
I think it's a BS practice and only contributes to them selling more cards, when in actuality older cards could stick around for much longer.
This makes me question whether I want to keep buying Nvidia cards,
because in today's world we vote and voice our opinion with our $$$.