Which upgrade improves 3D Vision performance: VRAM or processing power?
[quote="foreverseeking"][quote="eqzitara"]An i7 is no worries. If you had something amd or lowend i5 it may of been an issue but your fine.[/quote] While I agree with you that the I7950 is a great cpu, it might still be bottlenecking even at 3.8ghz on many games. My I7930 when over clocked from 2.8 to 3.8ghz gained almost 50% higher min frames per second in Batman. I know the maths doesn't quite add up, but i did multiple tests. Visually it was night and day better. I lowered the clock down to 3.57 for stability at a low core voltage. With each over clock i ran fraps and with each increase the fps increase was quite linear on a good selection of games, tailing only a little towards the end. I wouldn't advise him to upgrade as benefits will be small, but if I go sli I think an upgrade from the 900 series will be hugely beneficial. So even though CPU is not as important as GPU, considering most of us here have a top gpu setup, minimum and average fps in my experience are hugely affected by cpu speed. For anyone building, my advice is get a top Ivy (or wait for hasswell) and give it a moderate over-clock.[/quote] Gah...can't decide now what to do. So as I said earlier currently on i7-950 @ 3.8ghz. I could today, go to microcenter and pick up an i7-3570k and an ASRock Z77 Extreme4 LGA 1155 Z77 and a Cooler Master Hyper 212 Plus, for a total of about $300. Or I could wait for Haswell...gah, decisions decisions. Eh.....I should just wait, I know I'll regret getting a mobo/proc that are already almost a year old when Haswell is only 3 months away.
foreverseeking said:
eqzitara said:An i7 is no worries.
If you had something AMD or a low-end i5 it might have been an issue, but you're fine.


While I agree with you that the i7-950 is a great CPU, it might still be bottlenecking even at 3.8GHz in many games. When I overclocked my i7-930 from 2.8 to 3.8GHz, minimum frames per second in Batman rose almost 50%. I know the maths doesn't quite add up, but I ran multiple tests, and visually it was night-and-day better. I later lowered the clock to 3.57GHz for stability at a low core voltage.

With each overclock step I ran Fraps, and with each increase the fps gain was quite linear across a good selection of games, tailing off only a little towards the end. I wouldn't advise him to upgrade, as the benefits would be small, but if I go SLI I think an upgrade from the 900 series will be hugely beneficial.

So even though the CPU is not as important as the GPU, and considering most of us here have a top GPU setup, minimum and average fps in my experience are hugely affected by CPU speed. For anyone building, my advice is to get a top Ivy Bridge (or wait for Haswell) and give it a moderate overclock.


Gah... can't decide now what to do. As I said earlier, I'm currently on an i7-950 @ 3.8GHz.

I could go to Microcenter today and pick up an i7-3570K, an ASRock Z77 Extreme4 LGA 1155 board, and a Cooler Master Hyper 212 Plus, for a total of about $300. Or I could wait for Haswell... gah, decisions, decisions.

Eh... I should just wait. I know I'll regret getting a mobo/CPU that are already almost a year old when Haswell is only three months away.

#16
Posted 03/02/2013 12:52 PM   
Lol, just wait unless you've got money to burn. If you have to convince yourself to make the purchase, you'll always regret it on some level. From the rumours I've read, Haswell won't be much faster than Ivy, but it will have a modest per-GHz bump and run cooler and more efficiently, always welcome in a broiling PC case :-)

OS: Win 8 CPU: i7-4770K 3.5GHz GPU: GTX 780 Ti

#17
Posted 03/02/2013 12:59 PM   
[quote="foreverseeking"]Lol, just wait unless you've got money to burn. If you have to convince yourself to make the purchase, you will always regret it on some level. Haswell will not be much better then Ivy in performance from rumours I've read, but will have a modest bump per ghz and also will be cooler and more efficient, always welcome in a broiling PC case :-)[/quote] Ha, indeed. Thanks for the advice again :)
foreverseeking said:Lol, just wait unless you've got money to burn. If you have to convince yourself to make the purchase, you'll always regret it on some level. From the rumours I've read, Haswell won't be much faster than Ivy, but it will have a modest per-GHz bump and run cooler and more efficiently, always welcome in a broiling PC case :-)


Ha, indeed. Thanks for the advice again :)

#18
Posted 03/02/2013 01:29 PM   
Actually, the way I've understood it (which may be entirely wrong) is that the two images plus overhead equate to more like a 60% performance reduction when using 3D vs 2D.

2D data path: Application -> DX -> Driver -> Display
3D data path: Application -> 3D-Driver -> DX -> Driver -> Display
(or) 3D data path: Application -> DX -> 3D-Driver -> Driver -> Display

The application calculates the scene and requests it to be drawn. My understanding is that 3D Vision intercepts this request and, based on our depth and convergence settings and the depth of various objects in the scene, recalculates two different viewpoints (2 frames), then forwards that information to DX and the driver to render.
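
To make that concrete, here's a rough sketch of the geometry involved. To be clear, nobody outside NVIDIA knows how the driver actually does this; the function below is just the standard parallel-axis, asymmetric-frustum stereo construction, and every name in it is my own illustration, not 3D Vision's actual code:

[code]
import numpy as np

def stereo_views(view, proj, separation, convergence):
    """Split one mono camera into left/right eye cameras (illustrative
    sketch only, NOT 3D Vision's actual implementation).

    view, proj  -- the game's 4x4 view and projection matrices
                   (row-major, column-vector convention assumed)
    separation  -- interaxial eye distance (the driver's "depth" setting)
    convergence -- distance of the zero-parallax plane

    Each eye slides half the separation along the camera's x-axis, and
    its frustum is sheared so both frusta cross at the convergence
    distance (the exact sign/scale of the shear depends on the
    projection convention).
    """
    eyes = []
    for side in (-1.0, +1.0):               # -1 = left eye, +1 = right eye
        offset = side * separation / 2.0
        v = view.copy()
        v[0, 3] -= offset                   # translate camera sideways
        p = proj.copy()
        p[0, 2] -= offset / convergence     # skew frustum toward convergence
        eyes.append((v, p))
    return eyes[0], eyes[1]                 # (left view, proj), (right view, proj)
[/code]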

Since (from my understanding) 3 different frames are being calculated, this adds to the CPU overhead. While many of us who use 3D Vision also run SLI rigs, I don't have evidence that 3D Vision forces alternate frame rendering on 2-card SLI rigs (though it would make sense if it did).

The above would imply that 3D is 2-3x more taxing on your CPU and 2x more taxing on your GPU.

I just installed EVGA's PrecisionX, and if I can get it to work with the games I currently have installed, I'll run some tests to see how memory usage differs between 3D/2D and SLI/single card.

M.

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

#19
Posted 03/02/2013 08:50 PM   
I just tested Witcher 2 again: running around in a small room for 1-2 minutes, no windows, no view outside, same small room for both tests.

In 2D: max VRAM usage is 574MB
In 3D: max VRAM usage is 733MB
A 27.7% VRAM usage increase.

Also tested Unigine Valley Benchmark 1.0 in 2D and 3D, same graphics settings, 5-10 minutes per test.
In 2D: max VRAM usage is 690MB
In 3D: max VRAM usage is 870MB
A 26.1% VRAM usage increase.
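
For anyone who wants to check the arithmetic, the percentage is just the ratio of the two readings:

[code]
def vram_increase(mb_2d, mb_3d):
    """Percentage increase in VRAM usage going from 2D to 3D."""
    return (mb_3d / mb_2d - 1.0) * 100.0

print(f"Witcher 2: {vram_increase(574, 733):.1f}%")  # 27.7%
print(f"Valley:    {vram_increase(690, 870):.1f}%")  # 26.1%
[/code]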

To mbloof: 3D Vision forces v-sync and triple buffering on. From my experience, in a CPU-limited scenario fps will decrease approximately 33% compared to 2D; in a GPU-limited scenario, approximately 50%.

Note: 3D Vision's max fps is 60 (with a 120Hz display), so the CPU will never be loaded with more than 60fps worth of work. Ideally the CPU could process at least 120fps (in 2D), but in 3D on a 120Hz display it will never actually process more than 60fps.
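
Here's a toy model of why the two drops differ; this is just my reading of the numbers above (the GPU renders two images per displayed frame, while the CPU runs one simulation pass plus driver overhead), and the 1.5x CPU factor is back-solved from the ~33% figure, not a measured constant:

[code]
def fps(cpu_ms, gpu_ms, stereo=False, cap_3d=60.0):
    """Toy bottleneck model: fps is set by the slower of CPU and GPU.

    In stereo the GPU time doubles (two images per displayed frame);
    the 1.5x CPU factor is back-solved from the ~33% CPU-limited drop
    observed above, not a measured constant.
    """
    if stereo:
        cpu_ms, gpu_ms = cpu_ms * 1.5, gpu_ms * 2.0
    raw = 1000.0 / max(cpu_ms, gpu_ms)
    return min(raw, cap_3d) if stereo else raw

print(f"CPU-limited: {fps(12, 6):.1f} -> {fps(12, 6, stereo=True):.1f} fps")  # ~33% drop
print(f"GPU-limited: {fps(4, 20):.1f} -> {fps(4, 20, stereo=True):.1f} fps")  # 50% drop
[/code]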

Thief 1/2/gold in 3D
https://forums.geforce.com/default/topic/523535/3d-vision/thief-1-2-and-system-shock-2-perfect-3d-with-unofficial-patch-1-19
http://photos.3dvisionlive.com/Partol/album/509eb580a3e067153c000020/

[Acer GD245HQ - 1920x1080 120Hz] [Nvidia 3D Vision]
[MSI H81M-P33 with Pentium G3258 @ 4.4GHz and Zalman CNPS5X] [Transcend 2x2GB DDR3]
[Asus GTX 750 Ti @ 1350MHz] [Intel SSD 330 - 240GB]
[Creative Titanium HD + Beyerdynamic DT 880 (250ohm) headphones] [Windows 7 64bit]

#20
Posted 03/03/2013 11:10 PM   
Dude, I'm running an i7-950 OC'd at 4GHz with no problems running games in 3D. I have no reason to upgrade my CPU, and as far as I can see, neither do you. I'd only recommend SLI; it just works so well for 3D Vision.

AsRock X58 Extreme6 mobo
Intel Core-i7 950 @ 4ghz
12gb Corsair Dominator DDR3 1600
ASUS DirectCU II GTX 780 3gb
Corsair TX 950w PSU
NZXT Phantom Red/Black Case
3d Vision 1 w/ Samsung 2233rz Monitor
3d Vision 2 w/ ASUS VG278HE Monitor

#21
Posted 03/04/2013 08:17 PM   
[quote="jlmitnick"][quote="foreverseeking"][quote="eqzitara"]An i7 is no worries. If you had something amd or lowend i5 it may of been an issue but your fine.[/quote] While I agree with you that the I7950 is a great cpu, it might still be bottlenecking even at 3.8ghz on many games. My I7930 when over clocked from 2.8 to 3.8ghz gained almost 50% higher min frames per second in Batman. I know the maths doesn't quite add up, but i did multiple tests. Visually it was night and day better. I lowered the clock down to 3.57 for stability at a low core voltage. With each over clock i ran fraps and with each increase the fps increase was quite linear on a good selection of games, tailing only a little towards the end. I wouldn't advise him to upgrade as benefits will be small, but if I go sli I think an upgrade from the 900 series will be hugely beneficial. So even though CPU is not as important as GPU, considering most of us here have a top gpu setup, minimum and average fps in my experience are hugely affected by cpu speed. For anyone building, my advice is get a top Ivy (or wait for hasswell) and give it a moderate over-clock.[/quote] Gah...can't decide now what to do. So as I said earlier currently on i7-950 @ 3.8ghz. I could today, go to microcenter and pick up an i7-3570k and an ASRock Z77 Extreme4 LGA 1155 Z77 and a Cooler Master Hyper 212 Plus, for a total of about $300. Or I could wait for Haswell...gah, decisions decisions. Eh.....I should just wait, I know I'll regret getting a mobo/proc that are already almost a year old when Haswell is only 3 months away.[/quote] Man, it sounds to me like you have other issues that just a slow processor, but what do I know (not much). You can remedy that by purchasing new stuff - you'll have to reinstall windows, etc... anyway, so that's the sure-fire way of remedying the problem. For what it's worth, I have the AsRock Extreme 6 1155, and I've been satisfied with the purchase. I burned up a nice MSI board that cost twice as much, and was way cooler looking. But the Extreme series seems competent. Keep in mind that there's plenty of documentation on OC'ing Ivy and Sandy bridge processors, but we don't really know yet about Haswell. Personally I would be upgrading now and OC'ing the system anyway. I've ran both single card setups and SLI, and recognize the advantages of each. Ultimately for the last two builds I've settled on single card setups. Money-wise there ultimately isn't a whole lot of difference, and while the performance ceiling is higher with SLI, 3D vision users like ourselves have enough trouble with game compatibility as it is, let alone throwing SLI issues into the mix. But that's just my two cents. Cheers!
jlmitnick said:
foreverseeking said:
eqzitara said:An i7 is no worries.
If you had something AMD or a low-end i5 it might have been an issue, but you're fine.


While I agree with you that the i7-950 is a great CPU, it might still be bottlenecking even at 3.8GHz in many games. When I overclocked my i7-930 from 2.8 to 3.8GHz, minimum frames per second in Batman rose almost 50%. I know the maths doesn't quite add up, but I ran multiple tests, and visually it was night-and-day better. I later lowered the clock to 3.57GHz for stability at a low core voltage.

With each overclock step I ran Fraps, and with each increase the fps gain was quite linear across a good selection of games, tailing off only a little towards the end. I wouldn't advise him to upgrade, as the benefits would be small, but if I go SLI I think an upgrade from the 900 series will be hugely beneficial.

So even though the CPU is not as important as the GPU, and considering most of us here have a top GPU setup, minimum and average fps in my experience are hugely affected by CPU speed. For anyone building, my advice is to get a top Ivy Bridge (or wait for Haswell) and give it a moderate overclock.


Gah... can't decide now what to do. As I said earlier, I'm currently on an i7-950 @ 3.8GHz.

I could go to Microcenter today and pick up an i7-3570K, an ASRock Z77 Extreme4 LGA 1155 board, and a Cooler Master Hyper 212 Plus, for a total of about $300. Or I could wait for Haswell... gah, decisions, decisions.

Eh... I should just wait. I know I'll regret getting a mobo/CPU that are already almost a year old when Haswell is only three months away.


Man, it sounds to me like you have other issues than just a slow processor, but what do I know (not much). You can remedy that by purchasing new stuff; you'll have to reinstall Windows, etc., but that's the sure-fire way of remedying the problem.

For what it's worth, I have the ASRock Extreme6 1155, and I've been satisfied with the purchase. I burned up a nice MSI board that cost twice as much and looked way cooler, but the Extreme series seems competent. Keep in mind that there's plenty of documentation on OC'ing Ivy and Sandy Bridge processors, but we don't really know yet about Haswell. Personally, I would upgrade now and OC the system anyway.

I've run both single-card setups and SLI, and I recognize the advantages of each. Ultimately, for the last two builds I've settled on single-card setups. Money-wise there isn't a whole lot of difference in the end, and while the performance ceiling is higher with SLI, 3D Vision users like ourselves have enough trouble with game compatibility as it is, let alone throwing SLI issues into the mix. But that's just my two cents. Cheers!

|CPU: i7-2700k @ 4.5Ghz
|Cooler: Zalman 9900 Max
|MB: MSI Military Class II Z68 GD-80
|RAM: Corsair Vengeance 16GB DDR3
|SSDs: Seagate 600 240GB; Crucial M4 128GB
|HDDs: Seagate Barracuda 1TB; Seagate Barracuda 500GB
|PS: OCZ ZX Series 1250watt
|Case: Antec 1200 V3
|Monitors: Asus 3D VG278HE; Asus 3D VG236H; Samsung 3D 51" Plasma;
|GPU:MSI 1080GTX "Duke"
|OS: Windows 10 Pro X64

#22
Posted 03/04/2013 09:13 PM   
Hi fellas,

I did an in-depth analysis of this a few years back. My findings are below the graph; please read the results at the bottom for conclusions...

*** I must stress that this is not designed to compare the performance of the two graphics cards side by side, as any such comparison would be unfair, but only to compare 3D Vision on each card separately. ***

[Graph: 2D vs S3D fps benchmarks per game on ATi and nVidia cards: http://www.shahzad.aquiss.com/s3d.png]

* Batman Arkham Asylum has a maximum FPS cap of 60, hence the truncation of the bars.

** Average results exclude the benchmarks for Batman Arkham Asylum / Mirror's Edge.

The most important conclusion is that the drop due to S3D on both ATi and nVidia cards is about the same: 35%.
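
For anyone wanting to reproduce that kind of average from their own Fraps logs, the per-game drop is just 1 - (S3D fps / 2D fps), with capped titles left out as per the footnote above. The numbers below are placeholders, not the values from my graph:

[code]
# Per-game (2D fps, S3D fps) pairs -- placeholder values, NOT the
# figures from the graph above.
benchmarks = {
    "Game A":    (80.0, 52.0),
    "Game B":    (95.0, 61.0),
    "Batman AA": (60.0, 60.0),   # fps-capped at 60, so excluded below
}
capped = {"Batman AA"}

drops = [1.0 - s3d / flat
         for name, (flat, s3d) in benchmarks.items()
         if name not in capped]
print(f"average S3D drop: {100 * sum(drops) / len(drops):.0f}%")  # ~35%
[/code]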

Other conclusions include:

1. One can see the importance of spending as much as one can on the graphics card, even at the cost of a higher-end CPU. That is, a mediocre CPU and a high-end graphics card will net vastly better gaming performance, in and out of S3D, than a high-end CPU and an average graphics card.

2. nVidia cards and GeForce 3D Vision no longer have the whole S3D market to themselves. ATi seems to be a viable solution, if they decide to really tap the S3D market.


I should also add that, clock for clock, different models of i7 processors show virtually no difference in real-world benchmarks. Generally speaking, the best practice is to invest in a good overclocking motherboard and cooling, and OC to 4GHz+ on even an older i7. If you must, buy Haswell when it launches; but again, clock for clock, it will be only marginally faster (not counting the onboard GPU).

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

#23
Posted 03/05/2013 01:02 AM   
Thanks for the input, RAGE. This is interesting, but it pretty much verifies my own experience/results. I recently OC'd my Sandy Bridge to 4.5GHz and couldn't be happier. I change GPUs way more often than CPUs; in fact, the last CPU I owned before this i7 was a Core 2 Duo!

|CPU: i7-2700k @ 4.5Ghz
|Cooler: Zalman 9900 Max
|MB: MSI Military Class II Z68 GD-80
|RAM: Corsair Vengeance 16GB DDR3
|SSDs: Seagate 600 240GB; Crucial M4 128GB
|HDDs: Seagate Barracuda 1TB; Seagate Barracuda 500GB
|PS: OCZ ZX Series 1250watt
|Case: Antec 1200 V3
|Monitors: Asus 3D VG278HE; Asus 3D VG236H; Samsung 3D 51" Plasma;
|GPU:MSI 1080GTX "Duke"
|OS: Windows 10 Pro X64

#24
Posted 03/05/2013 01:31 PM   