Titan X or 980 Ti?
said: Not to be "one of those" or anything but...

How's Pascal going to perform when it comes to 3D Vision, i.e. what most of us are here for?

No way to know right now. Far too many variables, and potential game changers.

If, emphasize if, DX12 takes off and becomes something quickly instead of lagging for 5 years like DX11 did, that might change 3D altogether, as it's natively supported there. Might put a bullet in 3D's head too, very hard to say. DX11.3, which comes along with DX12, is the compatibility layer, which will pretty much work as usual. So I think in the worst case, NVidia can just target 11.3 in Win10 using Automatic Mode. There will be DX11.0 backward compatibility too, at least for first-gen Win10.

Then add Pascal on top of that. Very difficult to predict. Like I always recommend: don't buy for the future, just get what you need for today.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#76
Posted 06/02/2015 07:43 AM   
I feel the big push for DX12 is to extend the life of the Xbox One. So I feel devs will start using DX12 for the Xbox One console and port to the PC.

Gigabyte Z370 Gaming 7 - 32GB RAM - i9-9900K - Gigabyte Aorus Extreme Gaming 2080 Ti (single) - Game Blaster Z - Windows 10 x64 build #17763.195 - Define R6 Blackout case - Corsair H110i GTX - SanDisk 1TB (OS) - SanDisk 2TB SSD (Games) - Seagate Exos 8 and 12 TB drives - Samsung UN46c7000 HD TV - Samsung UN55HU9000 UHD TV - Currently using ACER PASSIVE EDID override on 3D TVs - LG 55

#77
Posted 06/02/2015 09:10 AM   
Windows 10 is launching in July.

Fable Legends is launching in December?

Anyone interested might sign up for the Beta and give both a try.

https://www.youtube.com/watch?v=3ngnzY5xtI8&feature=share
https://www.youtube.com/watch?v=Wwc5jKc-EgE

EDIT: Just saw this

"More annoyingly, perhaps, Microsoft has also changed how updates will work with Windows 10. Although the Pro and Enterprise editions will both be able to defer updates, Windows 10 Home users will not have the option. Updates will instead be downloaded and installed automatically as soon as they're available."




Well, I hope this doesn't include things that were previously optional updates, at least.

#78
Posted 06/02/2015 11:12 AM   
Can anyone explain why there's not much performance difference between the Titan X and 980 Ti?

The original Titan was released with one SMX disabled, and the newer 780 Ti with all SMXs enabled (just one extra) was getting some 10% more performance.

I got the original Titan then and felt ripped off because the newer card had all SMXs intact. I got the Titan X and still feel ripped off even though the newer card has two SMMs disabled. Lol

#79
Posted 06/02/2015 06:53 PM   
Reviewers are finding that the boost clock is running much higher on the Ti vs. the TX, which is why the numbers are so close. I don't know if this is due to the Ti just being a cooler card (maybe due to the lower number of memory modules?) or what, but if you can get the TX to boost as high as the Ti, you would see a much wider gap between the two.

In other words... water-cool your TXs, people!
(jk (sorta))

#80
Posted 06/02/2015 08:37 PM   
[quote=""]Assuming those numbers don't change, my take is that you'd still be better off with a Titan X. The only reason to do 980Ti would be to try to get single card performance as good as possible, otherwise you'd find much better value with SLI 970, even with that stupid memory configuration. If you want/need best single card performance, and you are already throwing down $800, I think you should just go all the way. Assuming that price holds, there is no way you can consider it a good value. Fair price maybe, not a good value.[/quote] what about this test made by pc gamer http://www.pcgamer.com/nvidia-geforce-gtx-980-ti-review/ atleast with 1080p the charts look like 980ti wins in some and the 970 sli in some. so overal quite same performance.
said: Assuming those numbers don't change, my take is that you'd still be better off with a Titan X. The only reason to do 980Ti would be to try to get single card performance as good as possible, otherwise you'd find much better value with SLI 970, even with that stupid memory configuration.

If you want/need best single card performance, and you are already throwing down $800, I think you should just go all the way. Assuming that price holds, there is no way you can consider it a good value. Fair price maybe, not a good value.


What about this test made by PC Gamer: http://www.pcgamer.com/nvidia-geforce-gtx-980-ti-review/
At least at 1080p the charts look like the 980 Ti wins some and the 970 SLI wins others.
So overall, roughly the same performance.

CoreX9 Custom watercooling (Volkswagen Polo radiator)
I7-8700k@stock
TitanX pascal with shitty stock cooler
Win7/10
Video: Passive 3D fullhd 3D@60hz/channel Denon x1200w /Hc5 x 2 Geobox501->eeColorBoxes->polarizers/omega filters - Custom made silverscreen
Occupation: Entrepreneur. Painting/surfacing/constructions
Interests/skills:
3D gaming,3D movies, 3D printing,Drums, Bass and guitar.
Suomi - FINLAND - perkele

#81
Posted 06/02/2015 08:38 PM   
[quote=""]Can anyone explain why there's not much performance difference between the titan X and 980ti? The original titan was released with one smx disabled and the newer 780ti with all smx enabled (just one extra) was getting some 10% more performance. I got the original titan then and felt ripped off cause the newer card had all smx intact. I got the titan X and still feel ripped off though the newer card has two smm disabled. Lol[/quote] It seems to be at least partially due to 980 Ti boosting higher, compared to the Titan X which is known to be thermally limited. Jump to 5.35: https://www.youtube.com/watch?v=VQou5h0Zh2k&feature=youtu.be&t=333 In theory, a Titan X cooled properly will over-clock and perform much better. Also, the law of diminishing returns applies even with super parallel processing on GPUs. Double the number of "cores" doesn't translate to double the performance nowadays unfortunately.
said: Can anyone explain why there's not much performance difference between the Titan X and 980 Ti?

The original Titan was released with one SMX disabled, and the newer 780 Ti with all SMXs enabled (just one extra) was getting some 10% more performance.

I got the original Titan then and felt ripped off because the newer card had all SMXs intact. I got the Titan X and still feel ripped off even though the newer card has two SMMs disabled. Lol


It seems to be at least partially due to 980 Ti boosting higher, compared to the Titan X which is known to be thermally limited.

Jump to 5:35:

https://www.youtube.com/watch?v=VQou5h0Zh2k&feature=youtu.be&t=333

In theory, a Titan X cooled properly will overclock and perform much better.

Also, the law of diminishing returns applies even with massively parallel processing on GPUs. Doubling the number of "cores" doesn't translate to double the performance nowadays, unfortunately.
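To put rough numbers on that diminishing-returns point, here is a minimal back-of-envelope sketch. This is my own illustration using Amdahl's law with an assumed 90% parallel fraction, not anything measured on these cards:

def amdahl_speedup(parallel_fraction, n_units):
    # Amdahl's law: total time = serial part + (parallel part / n_units)
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

for scale in (1, 2):
    print(f"{scale}x units -> {amdahl_speedup(0.90, scale):.2f}x speedup")
# With these assumed numbers: 1x -> 1.00x, 2x -> ~1.82x. Doubling the shader
# units buys well short of double the frame rate once any serial work
# (CPU, driver, fixed-function stages) is in the picture.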

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#82
Posted 06/02/2015 11:51 PM   
It doesn't make any sense that a 980 Ti would be less thermally limited than the Titan X; are we talking about a different cooler? These are the exact same chips; the 980 Ti is just a down-binned version where a manufacturing defect killed two of the SMM units.

From a performance perspective, the fact that it benchmarks so close to the Titan X suggests that the benchmarks used (games) do not need the extra cores; they are limited by something else, probably the core clock.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#83
Posted 06/03/2015 12:19 AM   
Quite. Taking into account Linus's observations in the video, it may be that fewer "cores" means lower thermal output, hence a higher boost clock.

On the Titan X, by comparison, more "cores", even though they're not being utilised, means higher thermal output, which translates to a lower boost and therefore "lower" than expected performance.

FWIW:
http://anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/16


It would be interesting to see if the Titan X will use all "cores" when in 3D Vision mode, and therefore trump the 980 Ti by a significant margin.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#84
Posted 06/03/2015 12:36 AM   
Ah-right-O. Thanks for the link, I just skimmed past that page before. The interesting tidbit there is to consider the 12 GB of VRAM. The 2 fewer SMMs and the loss of 6 GB of VRAM give the same cooler an advantage on the 980 Ti.

That also suggests that for current games the cards are not well balanced in terms of architecture. If we can lose 2 full SMMs and get better performance with a higher clock, why don't we do that more?

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#85
Posted 06/03/2015 01:01 AM   
[quote=""][quote=""]Not too be "one of those" or anything but.... Hows pascal going perform when it comes to 3dvision, i.e what most of us are here for.[/quote] No way to know right now. Far too many variables, and potential game changers. If, emphasize if, DX12 takes off and becomes something quickly instead of lagging for 5 years like DX11 did, that might change 3D altogether, as it's natively supported there. Might put a bullet in 3D's head too, very hard to say. DX11.3 that comes along with DX12 is the compatible layer, which will pretty much work like usual. So I think in the worst case, NVidia can just target 11.3 in Win10 using Automatic Mode. There will be DX11.0 backward compatibility too, at least for first gen Win10. Then add Pascal on top of that. Very difficult to predict. Like I always recommend- don't buy for the future, just get what you need for today. [/quote] My point exactly ;)
said:
said: Not to be "one of those" or anything but...

How's Pascal going to perform when it comes to 3D Vision, i.e. what most of us are here for?

No way to know right now. Far too many variables, and potential game changers.

If, emphasize if, DX12 takes off and becomes something quickly instead of lagging for 5 years like DX11 did, that might change 3D altogether, as it's natively supported there. Might put a bullet in 3D's head too, very hard to say. DX11.3, which comes along with DX12, is the compatibility layer, which will pretty much work as usual. So I think in the worst case, NVidia can just target 11.3 in Win10 using Automatic Mode. There will be DX11.0 backward compatibility too, at least for first-gen Win10.

Then add Pascal on top of that. Very difficult to predict. Like I always recommend: don't buy for the future, just get what you need for today.


My point exactly ;)

i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
EVGA 1080TI SLI
Samsung SSD 840Pro
ASUS Z97-WS
3D Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)

#86
Posted 06/03/2015 05:14 AM   
Look at this:
http://anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/14
http://images.anandtech.com/graphs/graph9306/74792.png

22 vs. 24 doesn't equal 36.7 vs. 43.6.

That would suggest we're dealing with another "970" case here, but this time Nvidia was asked and replied it's not the case. They talk about it here:
http://www.purepc.pl/karty_graficzne/premierowy_test_geforce_gtx_980_ti_tansza_wersja_gtx_titan_x

(Translated from Polish:) A similar maneuver on the GM200-310 would even have been plausible if 88 ROPs had shown up instead of 96, but NVIDIA cut the chip differently. To dispel any doubts I asked directly at the source whether the GeForce GTX 980 Ti hides any secrets, and I got a firm answer: NO! The GeForce GTX 980 Ti has 6144 MB of texture memory with 336 GB/s of bandwidth.

In short: Nvidia cut the chip differently this time. To be sure, I asked directly if the 980 Ti has any secrets and I got a straight answer: NO! The 980 Ti has 6144 MB of memory with 336 GB/s bandwidth.

But the question remains: especially if the clocks of the 980 Ti are a little higher, why is the difference so big? And why does it happen in exactly the same department (ROPs) as in the 970? If it were 5.5+0.5 GB again, then why does no game seem to show it? Did less memory improve the latency or the speed of the memory controller (timings etc.)?

It could be because there's no game that uses more than 5.5 GB of memory; even poorly ported games that allocate a lot of memory aren't really using that much. It's one thing to allocate 6 GB and use 6 GB, and a different thing to allocate 6 GB and use 2 GB, where the rest is allocated just because the dev was lazy (or had a publisher pushing for this) and there's no real PC port, just straight code from the consoles with unified memory.
That, in my opinion, explains very well why the 970 performs well even with the very slow 0.5 GB. I still expect the 970 to show a bigger gap when/if any real 4 GB game/mod shows up that really utilizes the whole amount for objects in a single frame.

But even I, a person pretty well aware of the "morality" of Nvidia's boss(es), don't believe they would lie when specifically asked. So I'd like to see the real reason why this pixel fillrate test shows such a big difference.
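For what it's worth, here is the mismatch spelled out as a quick ratio check. It only restates the numbers already quoted above from the AnandTech chart and doesn't assume anything new about the hardware:

# Ratio check on the numbers from the AnandTech chart above
smm_ratio      = 22 / 24      # 980 Ti has ~92% of the Titan X's SMMs
fillrate_ratio = 36.7 / 43.6  # but only ~84% of its measured pixel fillrate
print(f"SMM ratio: {smm_ratio:.3f}, fillrate ratio: {fillrate_ratio:.3f}")
# ~0.917 vs. ~0.842: the measured deficit (~16%) is roughly double what the
# SMM count alone (~8%) would predict, which is exactly what makes the
# result look suspicious at first glance.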

#87
Posted 06/03/2015 06:59 AM   
The explanation for the lack of a performance difference between the Ti and the X is that the Ti has more thermal headroom because of fewer cores and fewer DRAM chips. So it can boost higher and give matching performance.

But we're talking about an extra boost of 100 MHz even going by Linus's video. Are we saying that 22 SMMs running at 1100 MHz are equal to 24 SMMs at 1000 MHz? Really?

I dunno. But I smell something fishy. Someone should force both cards to the same voltage and clock and compare the benchmarks.

#88
Posted 06/03/2015 03:07 PM   
[quote=""]The explanation for lack of a performance difference between the ti and the x is that the ti has more thermal headroom because of fewer cores and less dram chips. So it can boost higher and give matching performance. But we're talking an extra boost of 100mhz even going by linus' video. Are we saying that 22 smm running at 1100mhz is equal to 24 smm at 1000mhz? Really? I dunno. But I smell something fishy. Someone should force both cards to same voltage and clock and compare the benchmarks. [/quote] Yes, that's pretty much what AnandTech is saying. That's not that big a stretch, the nature of performance bottlenecks is that as you release the pressure on that bottleneck, you get a dramatic bump in performance. So yes, 100Mhz can make this big a difference, if the core clock is truly the limiting factor.
said: The explanation for the lack of a performance difference between the Ti and the X is that the Ti has more thermal headroom because of fewer cores and fewer DRAM chips. So it can boost higher and give matching performance.

But we're talking about an extra boost of 100 MHz even going by Linus's video. Are we saying that 22 SMMs running at 1100 MHz are equal to 24 SMMs at 1000 MHz? Really?

I dunno. But I smell something fishy. Someone should force both cards to the same voltage and clock and compare the benchmarks.

Yes, that's pretty much what AnandTech is saying.

That's not that big a stretch; the nature of performance bottlenecks is that as you release the pressure on the bottleneck, you get a dramatic bump in performance. So yes, 100 MHz can make this big a difference, if the core clock is truly the limiting factor.
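Taking the figures from the question above at face value, the raw shader throughput really does come out about even. A quick sketch, using those hypothetical 1100 MHz / 1000 MHz clocks rather than measured boost numbers:

# Theoretical shader throughput ~ SMM count * clock, everything else equal
ti_throughput    = 22 * 1100   # 980 Ti: fewer SMMs, higher sustained boost
titan_throughput = 24 * 1000   # Titan X: more SMMs, thermally limited boost
print(ti_throughput, titan_throughput, f"{ti_throughput / titan_throughput:.3f}")
# 24200 vs. 24000 -> within about 1% of each other, so near-identical
# benchmark numbers are plausible whenever the shader array is the bottleneck.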

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#89
Posted 06/03/2015 06:24 PM   
Another pretty great review of the 980 Ti.

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_980_ti_review,1.html

There are some further details there that are interesting. At 1080p, the difference between the two cards is visible, and the Titan X is 10-15% faster, instead of the 2-3% we've seen. Since the review sites are tending to jump on the 4K bandwagon, that suggests that at super-high resolutions the core clock matters, while at lower resolutions the core count matters. For 3D, I don't have a good sense for what would matter most.


They also use a thermal imaging camera and so you can directly see the impact of the 12G of VRAM:

http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_titan_x_review,10.html

This is definitely hurting the Titan X's performance and overclock headroom. 12 GB is a bit too much for marketing purposes here, and it's a net negative. The 6 GB on the 980 Ti is a clearly better match to the next-gen consoles, which is where all the games are going to come from for the foreseeable future.

On the other hand, launch driver totally sucks. Driver lock-in.


Worth noting: EVGA has a water-cooled version of the Titan X for $1100, which would presumably unlock that extra range you might want.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#90
Posted 06/03/2015 08:48 PM   