Hey, I just thought I'd share with you all that after installing CD 1.27 today, I'm experiencing a significant performance boost while playing Metro 2033 in 3D (with DirectX 11 and everything maxed out). I haven't bothered to take any exact numbers, but I'd estimate that I'm getting at least an extra 10fps.
So if you weren't able to run Metro 2033 in 3D very well before, I suggest you give it another go with the new drivers.
I had a chance to run this in 3D for the first time last night in DX11 with everything maxed out (Adv. DOF off), and actually found performance to be very disappointing with 2x GTX 480 SLI. Once again, the poor SLI scaling in DX10/11 popped up here, with each GPU only being used at 50-60%. Once I dropped down to DX9 (thus losing all the benefits of DX11, of course), GPU utilization shot up to near 100% and performance nearly doubled as well.
[quote name='chiz' post='1062072' date='May 26 2010, 10:25 AM']I had a chance to run this in 3D for the first time last night and performance in DX11 with everything maxed out (Adv. DOF off) and found performance to be very disappointing actually with 2x480SLI. Once again, the poor SLI scaling in DX10/11 popped up here, with GPUs only being used at 50-60% each. Once I dropped down to DX9 (thus losing all the benefits of DX11 ofc), GPU utilization shot up to near 100% and performance nearly doubled as well.[/quote]
Low GPU utilization is usually caused by CPU bottlenecking... What CPU do you have and at which clocks?
[quote name='-{RaptoR}-' post='1062074' date='May 25 2010, 06:27 PM']Low GPU utilization is usually caused by CPU bottlenecking... What CPU do you have and at which clocks?[/quote]
With SLI outside of 3D Vision, CPU bottlenecks can be a concern, but 3D Vision SLI scaling is a completely different beast due to the Vsync cap at 60FPS along with all the stereo driver overhead and synchronization going on across the various hardware layers.
I'm running an i7 920 @ 4.0GHz on an X58 with x16/x16 PCIe bandwidth to each GTX 480, so if it's CPU/system bottlenecking then there's not much that can be done about it until we get faster CPU uarchs. I personally don't think that's the problem though, as other games, and even the same games running the DX9 path instead of DX10, scale better with GPU utilization near 100% on both GPUs. I think it comes down to the stereo driver scaling poorly in SLI in DX10/11, or incorrectly calculating the frame sync tick when you add additional GPUs. Instead of increasing its sync tick rate by a factor of 1 for each GPU, it just uses the same sync rate as a single GPU and spreads that load over your GPUs.
Games I've observed this in directly:
[list]
[*]Avatar
[*]Metro 2033
[*]Crysis
[*]Just Cause 2
[*]BFBC2 (closer to 60-70% per GPU)
[/list]
Probably a few more I'm forgetting atm. But the end result is pretty obvious: in the worst case you basically get the same divisors of 60Hz you would with Vsync on, like 30, 20, 15, 10, etc. In the best case you'd see what you would with Triple Buffering enabled, i.e. any incremental FPS between 30 and 60. In the middle you see the increments you would get with 120Hz Vsync, like 60, 40, 30, 15, 10, etc. I've tried forcing Triple Buffering in DX10/11 via NVCP for these titles but it doesn't really seem to have an impact, which makes me think the stereo SLI driver has its own flags/settings that override non-stereo SLI settings.
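For anyone wondering where those divisor numbers come from, the effect can be sketched in a few lines of Python (the function name and the simple round-up model are my own simplification; real drivers also queue frames, so treat this as a sketch rather than exact driver behavior):

```python
import math

def vsync_fps(raw_fps, refresh=60):
    """Effective frame rate under double-buffered Vsync: each frame
    must wait for the next refresh tick, so the delivered rate is
    always refresh / n for a whole number n (60, 30, 20, 15, ...)."""
    if raw_fps >= refresh:
        return refresh
    # How many refresh intervals each frame occupies, rounded up.
    intervals = math.ceil(refresh / raw_fps)
    return refresh / intervals

# A card rendering 45fps raw gets locked to 30fps at 60Hz Vsync,
# but reaches 40fps when the display is driven at 120Hz instead.
print(vsync_fps(45))       # 30.0
print(vsync_fps(45, 120))  # 40.0
```

That's why the same GPU load can land on 30, 20, 15, 10 at a 60Hz cap but on the finer 120Hz steps in the middle case, and why triple buffering (which decouples rendering from the refresh tick) would give the in-between values.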
I am having issues as well. I am using a 3-way SLI setup and I am getting about 30 FPS in JC2 with 3D Vision on. If I dedicate one GPU to PhysX and put the other two in SLI I see improved frame rates, but I suspect I was better off with the previous drivers. My performance in games is not as good in SLI with the 256 drivers.
I will do some more experimenting when I get home tonight.
4770K @ 4.2GHz, water cooled
32GB DDR3 2400
GTX Titan X SLI
Obsidian 800D
EVGA 1300W
1TB SSD RAID 0
ASUS 27" 3D monitor, 3D Vision 2
[quote name='chiz' post='1062108' date='May 25 2010, 07:26 PM']With SLI in non-3D Vision CPU bottlenecks can be a concern, but 3D Vision SLI scaling is a completely different beast due to the Vsync cap at 60FPS along with all the stereo driver overhead and synchronization going on with the various hardware layers.
I'm running an i7 920 @ 4.0GHz on an X58 with x16/x16 PCIe bandwidth to each GTX 480, so if its CPU/system bottlenecking then there's not much that can be done about it until we get faster CPU uarchs. I personally don't think that's the problem though, as other games and even the same games running DX9 path instead of DX10 scale better with GPU utilization near 100% on both GPUs. I think it comes down to the stereo driver scaling poorly in SLI in DX10/11 or incorrectly calculating the frame sync tick when you add additional GPUs. Instead of increasing its sync tick rate by a factor of 1 for each GPU, it just uses the same sync rate of a single GPU and spreads that load over your GPUs.
Games I've observed this in directly:
[list]
[*]Avatar
[*]Metro 2033
[*]Crysis
[*]Just Cause 2
[*]BFBC2 (closer to 60-70% per GPU)
[/list]
Probably a few more I'm forgetting atm. But the end result is pretty obvious, in the worst case you basically get the same denominations of 60Hz you would with Vsync on, like 30, 20, 15, 10 ete. In best case you'd see what you would with Triple Buffering enabled, so all incremental FPS between 30 and 60FPS. In the middle is when you see the increments you would see with 120Hz Vsync, like 60, 40, 30, 15, 10 etc. I've tried forcing Triple Buffering in DX10/11 via NVCP for these titles but it doesn't really seem to have an impact, which makes me think the Stereo SLI Driver has its own flags/settings that override non-stereo SLI settings.[/quote]
chiz
I have exactly the same experience with the same games with 3D Vision, DX10/DX11 and SLI.
I completely agree with your analysis.
I think the problem is immature SLI profiles for the GTX 400 series, especially in combination with 3D Vision.
It seems that much more work needs to be done to optimize SLI for the new architecture.
Did you move to CD 1.27? Have you seen any improvement with the new drivers?
I have not installed the 257 driver yet; I plan to later in the week.
Anyway, Metro 2033 in DX9 very high in 3D is MUCH better than DX11 very high in 2D.
Always appreciate your insight!
You should be a mod here.
[size=1][color="#FFCC00"]MOTHERBOARD: EVGA 780I SLI A2 P-06Bios
CPU: Intel Core 2 Quad QX9650 45nm (OC @ 3.83GHz, FSB 1333 @ 1.3200V set in BIOS), Prime95-stable all day
CPU Cooler: Gigabyte 3D Mercury case with integrated watercooling (cpu only at present)
RAM: 2x2GB OCZ PC8000 SLI (Timing:5-5-5-15-2T@ 2.0V, FSB:DRAM Ratio=2:3)
GRAPHICS: 2X EVGA GTX 480sc(clocks: 769c/1007mem/1538shader, stock heatsink)
HDD1: 2X Western Digital Caviar SATA II 250GB 7200 rpm Raid 0
HDD2: Western Digital Caviar SATA II 500GB 7200 rpm
SOUND: On board
OS: Windows Vista Ultimate 64bit SP2
MONITOR: Dell 3008wfp 30" Native Res: 2560X1600 @ 60Hz, Acer235Hz120Hz-3D
PSU: Thermaltake Toughpower 1000W
CASE: Gigabyte 3D Mercury
3DMARK Vantage: 29,686p Current Display Driver:197.41[/color][/size]
Yes, there is something wrong with SLI and these drivers. AvP scales very well under DX11, about 90% usage on both GPUs. Metro is OK also, but other titles scale pretty badly. GTA4 performs almost like a single 480, with usage of 50-60% on each card, and the shadow flickering is still present; the same thing happens with Bad Company 2 under DX11. Just Cause 2 worked great with the previous 197.xx drivers, where I got almost twice the performance in DX10; now with the 256.xx drivers usage has gone way down, as has performance.
[quote name='Guz' post='1062166' date='May 25 2010, 09:12 PM']Yes there is something wrong with SLI and these drivers. AvP scales very well under dx11, about 90% usage on both gpus. Metro is ok also, but other titles scale pretty bad. GTA4 works almost as a single 480 with usage of 50-60% on each card and the shadow flickering is still present, the same thing happens with bad company 2 under dx11. Just cause 2 worked great with previous 197.xx drivers where I got almost twice the performance on dx10, now with 256.xx drivers usage has gone way down as well as performance.
We need Nvidia look into this matter ASAP.[/quote]
I agree! I want to get the best performance out of these GTXs.
I'm an SLI newb; how do I use SLI profiles?
I'm using a GTX 470, by the way.
-=HeliX=- Mod 3DV Game Fixes
My 3D Vision Games List Ratings
Intel Core i7 5930K @4.5GHz | Gigabyte X99 Gaming 5 | Win10 x64 Pro | Corsair H105
Nvidia GeForce Titan X SLI Hybrid | ROG Swift PG278Q 144Hz + 3D Vision/G-Sync | 32GB Adata DDR4 2666
Intel Samsung 950Pro SSD | Samsung EVO 4x1 RAID 0 |
Yamaha VX-677 A/V Receiver | Polk Audio RM6880 7.1 | LG Blu-Ray
Auzen X-Fi HT HD | Logitech G710/G502/G27 | Corsair Air 540 | EVGA P2-1200W
Low GPU utilization is usually caused by CPU bottlenecking... What CPU do you have and at which clocks?
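The reasoning behind that question is simple back-of-the-envelope arithmetic: if the CPU can only prepare frames at some rate, the GPUs idle for the rest of each second. A toy model (function name is mine, purely for illustration):

```python
def expected_gpu_utilization(cpu_fps_cap, gpu_fps_capability):
    """If the CPU can only feed cpu_fps_cap frames/sec to GPUs that
    could render gpu_fps_capability frames/sec, the GPUs sit idle
    the rest of the time, so reported utilization drops."""
    actual_fps = min(cpu_fps_cap, gpu_fps_capability)
    return actual_fps / gpu_fps_capability

# A CPU capping out at 60fps feeding an SLI setup capable of 120fps
# leaves the cards at about 50% utilization:
expected_gpu_utilization(60, 120)   # 0.5

# If the GPUs are the limit instead, utilization pegs at 100%:
expected_gpu_utilization(120, 60)   # 1.0
```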
// Owned: MX420 // MX440 // FX5200 Ultra // 7600GT // 8800GTS 320MB // 9800GX2 // GTX275 // SLI GTX580s // SLI GTX780Tis //
________________________________________________________________________________________________________________________
With SLI outside of 3D Vision, CPU bottlenecks can be a concern, but 3D Vision SLI scaling is a completely different beast due to the 60FPS Vsync cap, along with all the stereo driver overhead and synchronization going on across the various hardware layers.
I'm running an i7 920 @ 4.0GHz on an X58 board with x16/x16 PCIe bandwidth to each GTX 480, so if it's CPU/system bottlenecking, there's not much that can be done about it until we get faster CPU architectures. I don't think that's the problem, though, since other games, and even these same games running the DX9 path instead of DX10, scale better, with GPU utilization near 100% on both GPUs. I think it comes down to the stereo driver scaling poorly in SLI under DX10/11, or incorrectly calculating the frame sync tick when you add additional GPUs. Instead of increasing its sync tick rate by a factor of one for each GPU, it just uses the sync rate of a single GPU and spreads that load over all your GPUs.
Games I've observed this in directly:
[list]
[*]Avatar
[*]Metro 2033
[*]Crysis
[*]Just Cause 2
[*]BFBC2 (closer to 60-70% per GPU)
[/list]
Probably a few more I'm forgetting atm. But the end result is pretty obvious: in the worst case you basically get the same divisors of 60Hz you would with Vsync on, like 30, 20, 15, 10, etc. In the best case you'd see what you would with triple buffering enabled, so all incremental FPS between 30 and 60FPS. In the middle you see the increments you would get with 120Hz Vsync, like 60, 40, 30, 15, 10, etc. I've tried forcing triple buffering in DX10/11 via the NVCP for these titles, but it doesn't really seem to have an impact, which makes me think the stereo SLI driver has its own flags/settings that override the non-stereo SLI settings.
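For anyone wondering where those specific numbers come from: with double-buffered Vsync a frame is held on screen for a whole number of refresh intervals, so the achievable frame rates are quantized to the refresh rate divided by an integer. A quick sketch of that arithmetic (function name is mine, just for illustration):

```python
def vsync_fps_steps(refresh_hz, max_divisor):
    """Frame rates reachable under double-buffered Vsync: a frame
    that misses a refresh is held until the next one, so FPS snaps
    down to refresh_hz / n for integer n."""
    return [refresh_hz / n for n in range(1, max_divisor + 1)]

# The 60Hz per-eye cap gives the "worst case" steps described above:
vsync_fps_steps(60, 6)    # [60.0, 30.0, 20.0, 15.0, 12.0, 10.0]

# At 120Hz the quantization is finer, which is why the middle case
# lands on values like 60 and 40 instead of dropping straight to 30:
vsync_fps_steps(120, 6)   # [120.0, 60.0, 40.0, 30.0, 24.0, 20.0]
```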
I will do some more experimenting when I get home tonight.
4770k @ 4.2 Water cooled
32 Gigs DDR 3 2400
GTX Titan X SLI
Obsidian 800D
EVGA 1300 watt
1 Terabyte SSD raid 0
ASUS 27 inch 3D monitor 3D vision 2.
[quote name='chiz']I think it comes down to the stereo driver scaling poorly in SLI in DX10/11 or incorrectly calculating the frame sync tick when you add additional GPUs. [...][/quote]
chiz
I have exactly the same experience with the same games under 3D Vision DX10/DX11 and SLI.
I completely agree with your analysis.
I think the problem is immature SLI profiles for the GTX 400 series, especially in combination with 3D Vision.
It seems much more work needs to be done to optimize SLI for the new architecture.
Did you move to CD 1.27? Have you seen any improvement with the new drivers?
I haven't installed the 257 driver yet; I plan to later in the week.
Anyway, Metro 2033 in DX9 very high in 3D is MUCH better than DX11 very high in 2D.
Always appreciate your insight!
You should be a mod here.
[size=1][color="#FFCC00"]MOTHERBOARD: EVGA 780I SLI A2 P-06Bios
CPU: Intel 2 Core Quad QX9650 45nm(OC @ 3.83GHz FSB:1333 @ 1.3200V set in bios)prime95 all day
CPU Cooler: Gigabyte 3D Mercury case with integrated watercooling (cpu only at present)
RAM: 2x2GB OCZ PC8000 SLI (Timing:5-5-5-15-2T@ 2.0V, FSB:DRAM Ratio=2:3)
GRAPHICS: 2X EVGA GTX 480sc(clocks: 769c/1007mem/1538shader, stock heatsink)
HDD1: 2X Western Digital Caviar SATA II 250GB 7200 rpm Raid 0
HDD2: Western Digital Caviar SATA II 500GB 7200 rpm
SOUND: On board
OS: Windows Vista Ultimate 64bit SP2
MONITOR: Dell 3008wfp 30" Native Res: 2560X1600 @ 60Hz, Acer235Hz120Hz-3D
PSU: Thermaltake Toughpower 1000W
CASE: Gigabyte 3D Mercury
3DMARK Vantage: 29,686p Current Display Driver:197.41[/color][/size]
I'm an SLI newb. How do I use SLI profiles?
----------------------------------------------------------------------------------
Acer HN274H
30" IPS Pro Monitor WQXGA 2560x1600
Mitsubishi 60737 60" DLP HDTV
Core i7 3820 @4.8ghz
16GB DDR3 1600
ASRock Fatal1ty X79 Professional LGA 2011
SeaSonic X-SERIES X-1050 1050W
Windows 7 Ultimate x64
ASUS GTX titan SLI
Sennheiser pc 360 with Asus Xonar Essence STX
Bose Companion 3 Series II
----------------------------------------------------------------------------------