Any point in having a dedicated PhysX card now? Thinking Witcher 3.
[quote="andysonofbob"]Do any games now support it?
Witcher 3? GTA5?
Thanks![/quote]
I don't think the Witcher 3 engine uses a dedicated GPU for its physics effects - I'd bet it's all done on the CPU.
Also, GTA V has its own physics engine, so you won't get any benefit there either.
I'd recommend adding some money and getting a stronger GPU/SLI setup instead. You'll be better off than investing in that minor improvement.
According to that list you provided, bo3b, PhysX in The Witcher 3 is limited to CPU only. Which is odd, because it's single-player.
But PhysX being limited to CPU only might be something we see quite often, due to cross-platform multiplayer.
I posted this in the War Thunder thread
[quote="D-Man11"]Gaijin added WaveWorks in War Thunder and because it is cross platform, they are making the CPU do all of the PhysX destruction.
War Thunder is the first game to get NVIDIA WaveWorks - See more at: http://blogs.nvidia.com/blog/2015/11/04/war-thunder/
"A Cross Platform Upgrade for All Most notable is the way Gaijin made WaveWorks and Destruction in War Thunder work for players on different gaming platforms. To maintain a level battlefield on the cross-platform 16-versus-16 multiplayer game, Gaijin took NVIDIA’s middleware source code and tweaked it to meet War Thunder’s requirements. [color="orange"] So, NVIDIA PhysX Destruction executes on the CPU only, while WaveWorks takes a customized CPU/GPU implementation. This way all players will enjoy the same game-changing special effects and well-balanced gameplay.[/color] The result is a cross-platform upgrade for gamers on PC and console alike."
https://www.youtube.com/watch?v=WrBQqHOMiaw#t=104
So the big question is...How do the waves look in 3D?
Oh and is the CPU bottlenecking?[/quote]
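For the curious, this is roughly what that split looks like from the engine's side. Here's a minimal sketch against a PhysX 3.x-era SDK (exact function signatures vary a bit between SDK versions, so treat it as illustrative, not the actual War Thunder code): a scene only gets GPU-accelerated effects if the game creates a CUDA context and hands the scene a GPU dispatcher. A cross-platform title simply never does, so everything stays on the CPU worker threads.
[code]
// Sketch: where a PhysX 3.x title decides CPU vs. GPU simulation.
// If the scene never gets a GPU dispatcher, all PhysX work stays on
// the CPU workers -- which is what a cross-platform game locks
// itself to for parity between PC and console players.
#include <PxPhysicsAPI.h>
using namespace physx;

PxScene* createScene(PxPhysics& physics, PxFoundation& foundation, bool useGpuPhysx)
{
    PxSceneDesc sceneDesc(physics.getTolerancesScale());
    sceneDesc.gravity       = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.filterShader  = PxDefaultSimulationFilterShader;
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4); // 4 worker threads

    if (useGpuPhysx)
    {
        // Binds the scene to a CUDA-capable card -- whichever one the
        // driver has assigned as the PhysX processor.
        // (Some SDK versions take an extra profiler argument here.)
        PxCudaContextManagerDesc cudaDesc;
        PxCudaContextManager* cudaCtx =
            PxCreateCudaContextManager(foundation, cudaDesc);
        if (cudaCtx && cudaCtx->contextIsValid())
            sceneDesc.gpuDispatcher = cudaCtx->getGpuDispatcher();
    }
    return physics.createScene(sceneDesc);
}
[/code]
The card that CUDA context lands on is exactly what the dedicated PhysX dropdown in the NVIDIA Control Panel selects, which is why a game that skips this path gets nothing from a second card.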
"A Cross Platform Upgrade for All Most notable is the way Gaijin made WaveWorks and Destruction in War Thunder work for players on different gaming platforms. To maintain a level battlefield on the cross-platform 16-versus-16 multiplayer game, Gaijin took NVIDIA’s middleware source code and tweaked it to meet War Thunder’s requirements. So, NVIDIA PhysX Destruction executes on the CPU only, while WaveWorks takes a customized CPU/GPU implementation. This way all players will enjoy the same game-changing special effects and well-balanced gameplay. The result is a cross-platform upgrade for gamers on PC and console alike."
#t=104
So the big question is...How do the waves look in 3D?
Seems to be wrong on Witcher3, or at least not fully accurate:
http://steamcommunity.com/app/292030/discussions/0/615085406654428453/
@Skaut: It's not a question of investment. It's a question of having an old card lying around ($100 on eBay), and whether it's worth using.
What it boils down to is whether there are going to be more PhysX games in the future or not. The simple answer is we just don't know.
I would personally get a stronger SLI setup AND use the spare 670 in a third (perhaps smaller) PCIe slot for dedicated PhysX.
Personally, I would not regret having an extra dedicated card even if it's never used again, even if it means losing ~$100 of potential resale value.
I would, however, regret having sold the would-be dedicated PhysX card if a good PhysX game were to come out.
Someone wiser than me once said that at the end of one's life, a person never regrets things where he tried but failed. We tend to regret things that we never attempted to try at all.
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
I ended up keeping one of my old cards for PhysX (yes, it's a bit overkill for that) and I'm glad I did. It actually makes a pretty significant difference in the few games I use it with.
And it's crucial for running Arkham Knight with any acceptable performance.
i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
EVGA 1080TI SLI
Samsung SSD 840Pro
ASUS Z97-WS
3D Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)
There's no doubt that adding a dedicated PhysX card will benefit games that use it, especially some of the new resource-intensive ones.
But while older GPUs did not saturate PCI Express lanes, it might be a problem with some of these newer GPUs on older motherboards using PCIe 2.0, even more so when a second GPU is introduced and the slots drop to 8x or even 4x.
Also, Maxwell GPUs seem to be finicky; a 750 Ti (with GDDR5), 950 or 960 from the Maxwell family might complement it better, especially heat- and power-wise.
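To put some rough numbers on the lane question, here's a back-of-envelope calculation (a sketch only; real-world throughput comes in somewhat lower after protocol overhead):
[code]
// Back-of-envelope PCIe bandwidth per direction.
#include <cstdio>

int main()
{
    struct Gen { const char* name; double gtPerSec; double encoding; };
    const Gen gens[] = {
        { "PCIe 1.1", 2.5, 8.0 / 10.0   }, // 8b/10b line coding
        { "PCIe 2.0", 5.0, 8.0 / 10.0   },
        { "PCIe 3.0", 8.0, 128.0 / 130.0 } // 128b/130b line coding
    };
    const int widths[] = { 4, 8, 16 };

    for (const Gen& g : gens)
        for (int lanes : widths)
        {
            // transfers/s * payload fraction = Gbit/s; /8 -> GB/s
            double gbPerSec = g.gtPerSec * g.encoding / 8.0 * lanes;
            std::printf("%s x%-2d : %5.2f GB/s\n", g.name, lanes, gbPerSec);
        }
    return 0;
}
[/code]
Per direction that works out to about 8 GB/s for both x16 2.0 and x8 3.0, which is why reviewers treat the two as equivalent, while even x4 1.1 (~1 GB/s) still dwarfs the plain PCI bus the original PhysX PPU cards lived on.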
[quote="RAGEdemon"]Someone wiser than me once said that at the end of one's life, a person never regrets things where he tried but failed. We tend to regret things that we never attempted to try at all.[/quote]
This Wise Man obviously has never had the pleasure of a failed marriage and nasty divorce. I know a few men that regret having tried it.
[quote="D-Man11"][quote="RAGEdemon"]Someone wiser than me once said that at the end of one's life, a person never regrets things where he tried but failed. We tend to regret things that we never attempted to try at all.[/quote]
This Wise Man obviously has never had the pleasure of a failed marriage and nasty divorce. I know a few men that regret having tried it.[/quote]
I sympathise with the sentiment, D-Man11, but there are more people happily married than bitterly divorced, I would imagine, even with the extraordinarily high divorce rates in some parts of the world.
Another wise man once said " 'Tis better to have loved and lost than never to have loved at all".
I would like to think that these divorced people still tried other relationships rather than giving up for the remainder of their lives.
¯\_(ツ)_/¯
Regarding PCIe bandwidth, we have nothing to worry about for quite some time. The original PhysX cards were PCI only, as they don't require much bandwidth. Even PCIe 1.1 @ 4x is still viable, even with a GTX 980...
[img]http://tpucdn.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/images/perfrel_3840.gif[/img]
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Last month AMD announced GPUOpen. While they have always leaned towards open source ( http://developer.amd.com/tools-and-sdks/open-source/ ), they are trying to take their rallying behind the open-source banner to the next level. They have contributed to both the Havok and Bullet physics engines, which is in their own interest to do.
http://www.tomshardware.com/news/amd-gpuopen-open-source-development,30750.html
[quote="RAGEdemon"]Regarding PCIe bandwidth, we have nothing to worry about for quite some time. The original PsysX cards were PCI only as they don't require much bandwidth. Even PCIe 1.1 @ 4x is still viable even with a GTx 980...[/quote]
"For the majority of games, there is no significant performance difference between x16 3.0 and x8 3.0 (and x16 2.0, which offers the same bandwidth). The average difference is only 1%, which you'd never notice. Even such bandwidth-restricted scenario as x16 1.1 or x8 2.0, offered by seriously old motherboards, only saw a small difference of around 5%. The same goes for x4 3.0, which is the bandwidth offered by the x4 slots on some recent motherboards. It's worth noting here that not all x4 slots are wired to the CPU. Some of the cheaper motherboards have their PCIe x16 (electrical x4) slots wired to the chipset instead of the CPU, which could severely clog the chipset bus (the connection between the CPU and the chipset, limited to a mere 2 GB/s on the Intel platform). Refer to the block-diagram in your motherboard's manual."
Quoted from http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/22.html
The article is subjective, like anything. But in my personal experience, I gained 5% in Fire Strike after noticing in GPU-Z that I had inadvertently installed my 980 Ti in a PCIe 8x slot.
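For anyone wanting to check their own setup the same way: GPU-Z's Bus Interface field shows the negotiated link, but it can downshift at idle, so put the GPU under load first (the little render test next to that field works). On NVIDIA cards the command line can report the same thing, e.g. [code]nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current --format=csv[/code]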
[quote="D-Man11"]There's no doubt that adding in a dedicated PhysX card will benefit games that use it. Especially some of the new resource intensive games.
But while older GPUs did not saturate PCI Express lanes, it might be a problem with some of these newer GPUs on older Motherboards using PCIe 2.0. Even more so when a second GPU is introduced and bandwidth per lane drops to 8x or even 4x.
Also Maxwell GPUs seem to be finicky, a 750ti(with GDDR 5), 950 or 960 from the Maxwell family might compliment it better. Especially heat and power wise.
[/quote]
All valid points. But in my specific build and motherboard, with all three cards it ends up being 16x/8x/8x, and from everything I have read and experienced doing different bench tests, we still haven't really reached a point where even 8x has been exceeded.
I saw no significant change in performance or 3DMark scores while testing at 16x/16x (without a PhysX card).
Heat's definitely an issue and my cards get pretty warm, to say the least, but not enough to be a problem.
(Reading the comments above, posted after mine, is interesting; maybe this is motherboard-related, in how it deals with the lanes.)
i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
EVGA 1080TI SLI
Samsung SSD 840Pro
ASUS Z97-WS
3D Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)
Previously, on old games, a dedicated PhysX card would certainly help, and you could use pretty much any old GPU.
You can see this in the following chart for Batman: Arkham City, which at the time was pretty demanding.
[url]http://www.geforce.co.uk/whats-new/guides/batman-arkham-city-graphics-breakdown-and-performance-guide#3[/url]
The bottom result is without a dedicated PhysX card.
You can see that the difference from the oldest card to the newer one wasn't that much, but the overall gain from no dedicated card to the newer one was roughly 25%.
[img]http://international.download.nvidia.com/webassets/en_US/shared/images/articles/batmanarkhamcity/BAC-DX11-TessHigh-PhysXHigh-FXAAHigh-PhysXSecondaryCardComparison.png[/img]
But when you look at Batman: Arkham Origins on Volnaiskra's webpage, better results were gained using his Titan X as the dedicated card vs the 650. Whereas in PLA (People's Liberation Army) he had a huge gain using the 650 vs the Titan, which doesn't make sense.
But Volnaiskra's conclusion after all his tests is very sound, as quoted below.
Who should get a dedicated PhysX card?
Honestly, I think almost anyone with an Nvidia GPU would benefit. If you've got an older card left over from an upgrade, it's a no-brainer: keep it for PhysX. Even if you don't, it may well be worth it buying a new card specially for PhysX: this may well be one of the more economical upgrades available, as you can get a significant difference from a comparatively cheap piece of hardware. Sure, it'll only help in certain games, but the same could be said for a CPU upgrade, yet that doesn't seem to stop most people.
http://www.volnapc.com/all-posts/how-much-difference-does-a-dedicated-physx-card-make
With the exception that I pointed out: if you are using an older motherboard with PCIe 2.0, you could lose any gain if both your slots drop to 8x, provided that your main GPU is a 980, 980 Ti or Titan that could possibly saturate an 8x link. But if your lanes remain at 16x, you're good to go.
With PCIe 3.0, there is no problem.
I don't think this has been posted, but Volnaiskra did do a dedicated PhysX test with The Witcher 3, and the results were quite disappointing:
http://www.volnapc.com/all-posts/does-a-dedicated-physx-card-help-in-witcher-3
Apparently, PhysX was dialed back in TW3 quite a bit after it was found to interfere with game mechanics.
It's also worth pointing out that, as Volnaiskra says, the percentage of time spent at minimum FPS is significantly affected. This means that with a dedicated card, you will get significantly reduced microstutter. Unfortunately, the above graph does not show this, but the following graph does:
[i][img]http://www.volnapc.com/uploads/3/0/9/1/30918989/8667959_orig.png[/img]
"Please note : the reddish portion of the graphs is just the background; the actual data is shown by the jagged black area, which is actually just a single line. This line purports to represent FPS (Frame per Second), but actually shows much more detail than that - tiny fractions of a second. As the black line makes its way across the graph it jumps up and down, plotting so many points at such a high density that it appears to be one large, solid mass (especially in SLI)."[/i]
Note how, with a dedicated setup, the coloured area under the curve shows significantly less line instability (downward spikes), aka micro-stutter. I don't know how susceptible to micro-stutter you guys might be, but for me personally this result sealed the deal. In comparison, I wouldn't even mind a decreased performance of a few % due to PCIe lane bandwidth, if it were to exist.
Looking at this topic again (it has come up a few times in the past), I will be buying a dedicated card. If you guys like, I can do some benchmarks in some games.
What card would you guys suggest?
Obviously, the best bang for the buck would be great, the only caveat being that it needs to be a single slot solution to enable it to fit in my system.
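If I do run those benchmarks, I'll log frame times rather than just average FPS, since minimums and stutter are where a dedicated card shows up. A minimal sketch of that analysis (assuming a FRAPS-style log with one frame time in milliseconds per line; "frametimes.txt" is just a placeholder name):
[code]
// Read per-frame times (ms) and report the numbers that expose
// microstutter, which a plain FPS average hides.
#include <algorithm>
#include <cstdio>
#include <fstream>
#include <vector>

int main()
{
    std::ifstream in("frametimes.txt"); // placeholder log name
    std::vector<double> ms;
    for (double t; in >> t; ) ms.push_back(t);
    if (ms.empty()) return 1;

    double total = 0.0;
    for (double t : ms) total += t;

    std::vector<double> sorted = ms;
    std::sort(sorted.begin(), sorted.end());
    double p99 = sorted[static_cast<size_t>(sorted.size() * 0.99)];

    std::printf("avg FPS     : %.1f\n", 1000.0 * ms.size() / total);
    std::printf("1%% low FPS  : %.1f\n", 1000.0 / p99);   // 99th-pct frame time
    std::printf("worst frame : %.2f ms\n", sorted.back());
    return 0;
}
[/code]
The 99th-percentile frame time is a rough stand-in for the "1% low" figures reviewers quote; going by Volnaiskra's graphs, a dedicated PhysX card should pull that number up noticeably even when the average FPS barely moves.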
Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.
Single slot solution is tough, very tough.
Nvidia is rumored to be launching a GT 930 this month. Hopefully it will finally offer a single-slot GPU solution in the 9XX series. It would make a lot of HTPC owners very happy.
There's a great forum thread on single slot GPUs somewhere, I know I linked to it from our forums. I'll find it and link it.
EDIT: GT 930 link http://wccftech.com/nvidia-geforce-gt-930-launch-q1-2016-maxwell-kepler-fermi/
Here's the thread on our forum
https://forums.geforce.com/default/topic/883590/
Here's the link to the thread over at tech report forums. It's interesting when he compares integrated GPUs in newer CPUs to available single slot solutions for a dedicated main card in an HTPC/Budget Gaming rig. ( he doesn't discuss PhysX )
But it will give you an idea of what's available, so that you can research it.
http://techreport.com/forums/viewtopic.php?f=3&t=94408
EDIT: and to add, I know these are mini form factor, but that's currently the single slot choice.
There are single slot Quadro GPUs, but they are very weak and I'm not sure if they would work well.
The K1200 starts getting into a little better performance, but runs $300.
Galaxy has one with GDDR5, which is what you would want (the GDDR3 version is much slower), and it's Maxwell.
http://www.galax.net/GLOBAL/750tirazor2gb.html
http://www.techpowerup.com/206212/galaxy-intros-single-slot-geforce-gtx-750-ti-razor-graphics-card.html