Looks like I'm not that bright on the supersampling either; I'm following the instructions, but the monitors won't change... huh: "monitor won't support resolution". Unfortunately I have limited time to fiddle. Thanks though.
Could you use the overlay in MSI Afterburner to show whether the 3rd 680 is parked/idle when 3D Vision is enabled and the 610 is set as the PhysX card?
Or, if you have BF3, I saw that it has an in-game overlay built in for monitoring hardware performance.
[quote="djb"]bo3b, afraid I'm not that bright following you on testing with supersampling.
I did switch PhysX from my 3rd SLI card to the 4th standalone GTX 610 and ran Unigine Heaven in 3D, and there was no change in FPS. I tend to agree that it would be fixed, or there would be more chatter. I guess I wonder, then, how much GPU power PhysX would take in a game like Metro Last Light, and whether a 610 would be enough. The GPU tech side certainly isn't my forte.
Back to the supersampling for resolution though, I do find that fascinating and will check it out, because running Surround 3D you only get 2xAA.
[/quote]I'll send you some details via PM, for setting up arbitrary SuperSampling.
You have the perfect test case anyway, though: Metro Last Light.
Run with their SSAA (SuperSampling Anti-Aliasing) turned up to max. In principle this is exactly the same thing: the game draws into a giant virtual screen which is then downsampled to your native resolution.
The idea here is that if you draw with 4x SSAA, the virtual screen size is 3840x2160. That should make three cards really sweat, and ensure it's not CPU or PhysX bound.
So the easy test is to turn on 4x SSAA and disable PhysX. That removes the third card (or fourth) from being used for PhysX. Then try their benchmark, MetroLLbenchmark.exe (in the same directory as the game). You can try with a single card, 2x SLI, and 3x SLI and see how it impacts the framerate in 3D Vision.
It should be fairly obvious whether the third card is active or not. If you want to see card activity, you can run any number of GPU monitoring tools. I use NVidia Inspector to graph the results.
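As a quick sanity check on the resolution math (my own back-of-the-envelope sketch; the helper name is made up, not from any NVIDIA tool): NxSSAA multiplies the pixel count by N, i.e. scales each axis by sqrt(N), so at 1080p, 4x SSAA means a 3840x2160 virtual screen, and 3D Vision doubles the load again for the second eye.

```python
import math

# Hypothetical helper (mine, not from any NVIDIA tool): NxSSAA multiplies
# the pixel count by N, i.e. scales each screen axis by sqrt(N).
def ssaa_virtual_res(width, height, factor):
    """Return the virtual render-target size for a given SSAA factor."""
    scale = math.sqrt(factor)
    return round(width * scale), round(height * scale)

w, h = ssaa_virtual_res(1920, 1080, 4)
print((w, h))                      # (3840, 2160), matching the post
# 3D Vision renders both eyes, doubling the pixel load again:
print(w * h * 2 / (1920 * 1080))   # 8x the native 2D pixel count
```

So the 4x SSAA + 3D Vision test pushes roughly 8x the pixels of plain 2D at native resolution, which is why it should make three cards sweat.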
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 Latest 3Dmigoto Release Bo3b's School for ShaderHackers
Ok, I ran the Metro Last Light Benchmark. Results in the jpg...
Ran several times, I'll let you decipher it.
The control panel will let me SLI cards 1 and 2 and dedicate PhysX to #3.
The control panel will not let me just SLI cards 1 and 2.
The control panel will not let me SLI cards 1 and 2 and dedicate PhysX to card #4, the GTX 610.
Seems the only way I'd be able to do that is to disable card #3 in the BIOS or on the MB using the GPU switch.
Anyway, here's some numbers.
I just realized I didn't run the benchmark with PhysX OFF, with the NV control panel set up with:
2x SLI and PhysX dedicated to card #3.
I'll do it tomorrow afternoon.
Ok, couldn't sleep...
Here's what I got. What I don't understand is: the average is higher with 2 cards, but 3 cards gives a higher peak FPS? I'll let the experts interpret...
Thanks for doing this experiment. I know these sorts of things take a lot more time than expected, and I appreciate it.
I think that your test pretty solidly says that 3 GPUs is still not functional in 3D Vision.
That last test is confusing; I can't figure out why the average would increase and the max decrease in that setup. I can't think of any scenario offhand that would explain it.
Not sure about that test case, so I'm going to ignore it for now.
The two test cases that I think are diagnostic are:
3 sli, no Physx at 24, 63, 7.4
3 sli, gpu#3 Physx at 23, 67, 9
That tells me that the third card is not actually in use. When dedicated to PhysX, we get very close to the same results as when the third card is supposed to be helping draw frames.
The extra case of
3 sli, 610 as physx at 24, 58, 8
is also a good confirmation of that, because with the 610 doing PhysX duties we still do not get any improvement in frame rates. If anything it's slightly worse, presumably because the 610 is not quite up to the Metro PhysX task.
In this SSAA case, and given that the frame rates are actually pretty bad, averaging 24 fps, I would definitely expect the third card to matter. It's always possible we are CPU bottlenecked, which was the reason to use SSAA.
You might check your CPU at the same time; it would be conclusive if the CPU is not running at max, because that would mean the CPU is waiting a little on the GPU, which is what we want to see here.
If it's not a total pain, it wouldn't hurt to do the test in 2D. That would give you a comparison of how you would expect it to scale with 3 cards. I was expecting to see something like average frames going from 13, to 24, to 30: almost double for the second card, then about 25% more for the third.
This is not necessarily what I expect, but it's also not a surprise that Alternate Frame Rendering SLI doesn't play nice with 3D Vision. 3D Vision wants to use those alternate frames to draw every other eye. When we throw in a third card, it's asking a lot of the driver to do a round robin on the three cards, and send it to the correct eye. It's not impossible, but I can see this complicating the driver.
If you are not done experimenting and want to see if you can get that third card active, you could try playing with the profile settings using NVidia Inspector. You can change the setting there from Force_AFR to Force_SFR, which makes the SLI draw part of every frame instead of alternating. It may not be faster because of a myriad of factors including bus overhead, but might help. In that mode each card would draw 1/3 of a frame.
In NVidia Inspector, you also have the option to force it to use only 2 cards, which, if it works, would be easier than disabling one in the BIOS.
As usual, it's not completely clear, with so many factors and technologies involved, but unless you are CPU throttled, I don't think the 3rd card is adding value.
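To illustrate the round-robin point above with a toy model (my own illustration, not how the NVIDIA driver actually schedules frames): AFR sends frame i to GPU i mod N, while 3D Vision wants frame i to belong to eye i mod 2. With two GPUs each card stays locked to one eye; with three, the eye each card serves rotates every frame.

```python
# Toy model: AFR assigns frame i to GPU (i % num_gpus); 3D Vision assigns
# frame i to eye (i % 2). Illustration only, not real driver behavior.
def afr_eye_assignment(num_gpus, frames=6):
    """Return (gpu, eye) pairs for the first `frames` AFR frames."""
    return [(i % num_gpus, "L" if i % 2 == 0 else "R") for i in range(frames)]

# With 2 GPUs, GPU 0 always draws the left eye and GPU 1 the right:
print(afr_eye_assignment(2))
# With 3 GPUs the pairing rotates every frame, so no card "owns" an eye,
# and the driver has to route each finished frame to the correct eye:
print(afr_eye_assignment(3))
```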
@bo3b,
I'll read deeper later this week...
Be more than happy to do some more testing. Probably Friday night. On the Asus RIVE MB I have the option to disable specific cards with a switch, so am going to fiddle with that instead of relying on the NV control panel software. That way I'll know for sure nothing is going to the card. I'll pull up task manager to see what the cpu cores/threads are doing.
I'll download NV Inspector too. Kinda interesting, actually. Let me know if there's another benchmark you can think of to try.
Thanks, later this week.
Looking at your tests, it seems to confirm that 3-way SLI is enabled in 3D Vision now.
You should just fire off an e-mail to Nvidia support asking them about your results and find out which games (if any) are optimized for 3-way SLI with 3D Vision. Then test those games.
It certainly seems that when using 3-way SLI with PhysX set to auto, the 3rd GPU performs double duty or alleviates/streamlines rendering issues, since your best results were obtained with it set to Auto and without a 4th GPU dedicated to PhysX.
I'm surprised that in your results, when using 3-way SLI (680x3) with the 610 (as a 4th card) dedicated as a PhysX card, the FPS were lower than 3-way SLI with the third 680 set to auto PhysX. Yet the 610 was seeing an average of 75% usage.
I'm curious whether the PCIe lanes drop to 8x (or maybe 4x) when 4 GPUs are used with your motherboard, and if so, whether that drop is hindering your FPS, since the GTX 680 is PCIe 3.0. I know in the past it didn't matter, but since the 680 is PCIe 3.0, does it matter now? Perhaps not with a single monitor, but maybe with Surround?
1x gpu 13.48 fps -> 100 %
2x gpu 27.41 fps -> 203 %
3x gpu 24.08 fps -> 179 %
this is negative scaling in my book, suggesting that 3d vision automatic still only supports 2 gpus.
however, games with both native stereo 3d and 3-way sli support might work.
the tomb raider or sleeping dogs benchmark with vsync forced off in the driver would be interesting.
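For anyone who wants to reproduce the scaling percentages above, they fall straight out of the quoted average fps figures (plain arithmetic, nothing 3D-Vision-specific):

```python
# Scaling relative to a single GPU, computed from the averages quoted above.
results = {1: 13.48, 2: 27.41, 3: 24.08}  # gpus -> average fps
base = results[1]
for gpus, fps in sorted(results.items()):
    print(f"{gpus}x gpu: {fps:5.2f} fps -> {fps / base:.0%}")
# Going from 2 to 3 GPUs, throughput falls (203% -> 179%): negative
# scaling from the third card.
```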
NVIDIA TITAN X (Pascal), Intel Core i7-6900K, Win 10 Pro,
ASUS ROG Rampage V Edition 10, G.Skill RipJaws V 4x 8GB DDR4-3200 CL14-14-14-34,
ASUS ROG Swift PG258Q, ASUS ROG Swift PG278Q, Acer Predator XB280HK, BenQ W710ST
I believe the answer to the topic question is ???? How the numbers vary is interesting: going from 2 to 3 cards, the average FPS drops, but the maximum FPS goes up. Probably due to the type of benchmarking during certain scenes??? All cards do run in sync anyway... like I mentioned before, I'm just running the tests...
I pondered doing some more games, but maybe another day; gotta run.
@bo3b, correct, this did take longer than expected, but once I started...couldn't stop. I had to reinstall the Driver just about any time there was a shift in the number of cards I used or when I switched the bridge, which is fine as it created a good baseline, just took forever!
To address a couple questions/comments from you guys:
Regarding throttling: the CPU will run the Intel Burn Test at max/100% for an unlimited amount of time, and it's water cooled, so there are no restrictions at the MB/OS/temp level. Running the Metro LL bench, the CPU averages about 15-20% usage with a peak of about 36%.
On the PCIe side, attached is the card/slot data showing 16, 8, 8, 8. This is what the MB user guide says it will run the slots at, so that seems to jibe.
The attached shows the 680 cards at PCIe 3.0. That's the card, not the MB setting; the MB is running on auto (PCIe 2.0), and AIDA64 shows it running at 2.0. I know I can change the BIOS to force 3.0, and there is a driver tweak to get it to run at 3.0, but I'm not going there because of what I've read in the Asus ROG forums: the Sandy Bridge 3960K I have does not actually support 3.0. I pondered playing with this, but after reading the info in the links below, decided not to mess with it...
http://rog.asus.com/forum/showthread.php?32496-A-word-of-warning-regarding-GEN3-support-and-SB-E.&country=&status=
http://rog.asus.com/forum/showthread.php?22805-GTX-680-PCIe-3.0-amp-Rampage-Mobo
I've re-run the tests, actually disabling the card(s) on the MB using the PCIE lane switch, thus ensuring the card was not active.
Due to some anomalies, the NV driver was reinstalled each time the number of cards was changed.
Also, ran in 2D.
I did do the Force_SFR too, numbers in the attached.
Ok, back to gaming :-)
Guys!
I just got a second 660ti coming home and can get a third one cheap.
So what is the final verdict for games with no PhysX... does the 3rd GPU help or not?
Thanks!
*CPU: i7 920 DO @ 4.1Ghz 1.35v HT On*CPU Cooler: Thermaltake Water 2.0 Extreme*Mobo: Evga X58 SLI / RAM: 12GB Crucial Ballistix Tactical Tracer DDR3 1600 7-7-7-21 1.5v*Video Cards: Tri SLI Evga GTX 660 ti x 2 & MSI GTX 660 ti *Speakers: CBM-170 SE*PSU: Corsair HX1000W*Display: Mitsubishi 60" DLP (3D Vision) Qnix QX2710 27" 1440P*Case: CoolerMaster HAF X (932 side) *Windows 7 64Bit on Samsung 840 256GB*Others: Roccat Kone XTD | Roccat Alumic | Logitech G15 | *Mobile: Galaxy Note 2
i7 4970k@4.5GHz, SLI GTX 1080 Ti Aorus Gigabyte Xtreme, 16GB G.Skill 2400MHz, 3*PG258Q in 3D Surround.