Starcraft 2 and WTF? Rated good but really does nvidia even play the games???
[quote name='cravinmild' post='1109567' date='Aug 27 2010, 03:12 PM']Sorry but 20% depth (only achived by completely reducing convergence to ZERO) is not a rated "good". That is hardly better than vanilla (2d). After 2 years Nvidia, TWO YEARS of your solution and you still cant get this 3d solution to work fully or get the developers onboard with you, what are you doing that causes gamemakers to run in the other direction when you walk in the room.

This is not a step in the right direction, more like a slide to the left and a "shucks guys" "dont know".[/quote]
Again, did you even bother trying it? It works great for me at 50% depth, not 20%. Everything in-game lines up perfectly, even the 2D mouse cursor and health bars. You just have to set Convergence correctly. If you're having trouble calibrating it yourself, hit Ctrl+F7 in-game, then navigate to:

%:\\Users\%username%\Documents\NVStereoscopic3D.LOG

Open up "SC2" with Notepad and replace these two lines:

[code]StereoSeparation=0.06535968 (3d85db4c)
StereoConvergence=25.73163986 (41cdda66)[/code]

Your login screen will be broken, as will cut-scenes and menus, so you'll either need to adjust Convergence on the fly between transitions or disable 3D with Ctrl+T, but your in-game 3D should be perfect, with good depth, separation and even pop-out where available. These problems aren't Nvidia's fault; they're Blizzard's, for not applying their offset values uniformly across different game states. Once [b]Blizzard[/b] fixes this as detailed in MikeZ's post, or alternatively renders the cursor and health bars at correct 3D object depth, 3D Vision should be perfect.
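For illustration, the find-and-replace step above can be sketched as a small script. The file layout and the regex here are assumptions based only on the two lines quoted above, not NVIDIA's documented profile format:

```python
import re

# Hypothetical sketch: patch the two stereo settings lines in a saved
# profile's text, using the values from the post above. The trailing
# hex tag in parentheses is left untouched.
def patch_stereo_values(text, separation=0.06535968, convergence=25.73163986):
    """Replace the numeric part of the StereoSeparation/StereoConvergence lines."""
    text = re.sub(r"(StereoSeparation=)[\d.]+", rf"\g<1>{separation}", text)
    text = re.sub(r"(StereoConvergence=)[\d.]+", rf"\g<1>{convergence}", text)
    return text

profile = "StereoSeparation=0.01000000 (aaaa)\nStereoConvergence=1.00000000 (bbbb)"
print(patch_stereo_values(profile))
```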

Here are a few more examples below:

-=HeliX=- Mod 3DV Game Fixes
My 3D Vision Games List Ratings

Intel Core i7 5930K @4.5GHz | Gigabyte X99 Gaming 5 | Win10 x64 Pro | Corsair H105
Nvidia GeForce Titan X SLI Hybrid | ROG Swift PG278Q 144Hz + 3D Vision/G-Sync | 32GB Adata DDR4 2666
Intel Samsung 950Pro SSD | Samsung EVO 4x1 RAID 0 |
Yamaha VX-677 A/V Receiver | Polk Audio RM6880 7.1 | LG Blu-Ray
Auzen X-Fi HT HD | Logitech G710/G502/G27 | Corsair Air 540 | EVGA P2-1200W

#16
Posted 08/27/2010 08:03 PM   
The fundamental problem is that NVidia is doing it the wrong way, but they have no choice.

Any solution that attempts to intercept rendering calls and automagically create stereo 3D out of a renderer is doomed to fail on anything but the simplest game renderer. There are way too many ways to break the effect; too many things are done without depth information, or with inconsistent depth information.

Ideally, Nvidia would have developers add the 3D support and produce both images using whatever was needed; this would put the test and support issues squarely in the developers' hands.
Practically, the stereo 3D market is too small for developers to prioritize S3D, so NVidia does what they do.

I know first-hand that their dev support visits developers with major titles and shows them S3D issues in early builds of their games. I also know that any fix above and beyond the trivial is never going to happen, and almost no one is going to add S3D to their test matrix.

At some point, if S3D becomes commonplace or a sales driver, that will change, but until then I'd expect major titles to ship broken until either the dev patches it or, more likely, NVidia adds more special-case code to the game profile to fix it for the devs.
#17
Posted 08/27/2010 08:09 PM   
@ERP

I'd disagree that Nvidia's approach is fundamentally wrong from a design standpoint; they're simply doing the purest form of 3D, using the game's depth values and applying separation/convergence values to render a second camera view. This is technically the best way to approach 3D; the downside, however, is that it's performance-expensive, as you actually render the scene twice from different camera angles.

The problem isn't with Nvidia's approach, it's the developers, who have become more and more reliant on hacky post-processing and frame-buffer effects that DO NOT have the proper depth values and instead are rendered either in 2D, in one eye only, or at screen depth in 3D. Without S3D it doesn't matter and you won't notice the difference, as everything is essentially at screen depth and 2D, but in true S3D this results in very clear artifacts and rendering conflicts once you start adjusting Depth and increasing separation.

Also, major game devs have already begun taking S3D into account by implementing the tools in their engines. UE3, for example, announced they were integrating 3D Vision support into their engine earlier this year: [url="http://hothardware.com/News/Unreal-Engine-3-Gains-NVIDIA-3D-Vision-Support-The-Future-Is-Depth/"]http://hothardware.com/News/Unreal-Engine-...uture-Is-Depth/[/url]

CryTek also announced they were going to make CryEngine 3 synonymous with stereo 3D as a major selling point and tech enabler, with ambitions to make it a major commercially licensed engine: [url="http://www.tomshardware.com/news/crysis-cryengine-crytek-stereoscopic-3d,9774.html"]http://www.tomshardware.com/news/crysis-cr...ic-3d,9774.html[/url]

For UE3 we've seen how hit-and-miss 3D Vision can be with the engine (Batman: AA and Borderlands as good examples, many others as poor ones), so I'm sure it's just a matter of disabling some of the problematic post-processing filters UE3 loves to hack into its games, and of properly rendering overlays and crosshairs at object depth rather than screen depth. That's really all there is to it: we've seen Nvidia can selectively target which objects and game assets they render in 2D/3D, but we've also seen they cannot easily change the actual depth values, which would involve intercepting rendering calls and injecting their own. Even with their own laser sight, we've seen how performance-expensive that can be.
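The separation/convergence model described above can be sketched with the standard stereo parallax formula. The driver's actual internals aren't public, so treat this as an illustration of the principle, not NVIDIA's code:

```python
# Standard stereo parallax model (an assumption here): the horizontal
# screen-space offset applied per eye to a vertex at a given view depth.
def eye_parallax(depth, separation, convergence):
    """Objects at the convergence distance get zero parallax (screen depth);
    nearer objects get negative parallax (pop-out), farther ones recede."""
    return separation * (1.0 - convergence / depth)

# An object exactly at the convergence plane sits at screen depth:
print(eye_parallax(depth=25.0, separation=0.065, convergence=25.0))   # 0.0
# A distant object gets positive parallax (appears behind the screen):
print(eye_parallax(depth=1000.0, separation=0.065, convergence=25.0))
```

This is why 2D overlays drawn without depth values conflict with the scene once separation is increased: they never move, while everything with real depth does.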


#19
Posted 08/27/2010 08:33 PM   
I notice you've used a method similar to what's needed for Torchlight, which also renders unit icons in 2D.

By pulling out convergence as you add depth, keeping a unit near the center of the screen at screen depth, everything works in 3D. Doesn't SC2 let you zoom in and out, then?

This technique works wonders in Torchlight but sucks with games like World in Conflict, where you can zoom and pan the camera. I used to love WiC - sniff!
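The convergence trick described above can be shown numerically. Assuming the standard parallax model (separation × (1 − convergence/depth)), locking convergence to the tracked unit's camera distance zeroes that unit's parallax so screen-depth 2D icons line up with it, and changing the camera distance is exactly what breaks it:

```python
# Sketch of the Torchlight-style trick; the parallax formula and the
# numeric distances are illustrative assumptions.
def parallax(depth, separation, convergence):
    return separation * (1.0 - convergence / depth)

unit_depth = 30.0       # hypothetical camera distance to the unit
sep = 0.065
conv = unit_depth       # pull convergence to the unit's depth
print(parallax(unit_depth, sep, conv))  # 0.0: unit sits at screen depth
# Zooming out doubles the distance, so the unit drifts behind the screen
# plane and no longer matches its 2D icon:
print(parallax(60.0, sep, conv))
```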

Lord, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
-------------------
Vitals: Windows 7 64bit, i5 2500 @ 4.4ghz, SLI GTX670, 8GB, Viewsonic VX2268WM

Handy Driver Discussion
Helix Mod - community fixes
Bo3b's Shaderhacker School - How to fix 3D in games
3dsolutionsgaming.com - videos, reviews and 3D fixes

#23
Posted 08/27/2010 08:48 PM   
[quote name='chiz' post='1109606' date='Aug 27 2010, 01:33 PM']I'd disagree that Nvidia's approach is fundamentally wrong from a design standpoint, they're simply doing the purest form of 3D using the game's depth values and applying separation/convergence values to render a 2nd camera view. This is actually technically the best way to approach 3D, the downside however is that its performance-expensive as you actually render the scene 2x at different camera angles.[/quote]

You're missing my point: yes, rendering the scene twice is the "best" way to go, but trying to do the right thing given a stream of primitives already sent to the GPU is the wrong place to do it.
There are way too many things you simply cannot know about the scene that late in the pipeline.
The right place is where you still know the intent of the primitives being rendered, and that's inside the game's rendering logic; in an ideal world, developers (or engine devs, for those that buy tech) would be the ones implementing S3D, not the driver writer.
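As a rough sketch of what engine-side stereo means here: the game itself builds two camera transforms offset along the camera's right axis, instead of the driver reinterpreting already-submitted primitives. This pure-Python 4×4 illustration uses assumed names and values, and ignores the frustum skew a real engine would also apply toward the convergence plane:

```python
# Engine-side stereo sketch: render the scene twice from two cameras
# offset along the view-space x (right) axis.
def translate_x(dx):
    """Row-major 4x4 translation along the x axis."""
    return [[1.0, 0.0, 0.0, dx],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def stereo_view_offsets(eye_separation):
    """Return (left, right) view-space offsets; a real engine would also
    skew each eye's projection frustum toward the convergence plane."""
    half = eye_separation / 2.0
    return translate_x(half), translate_x(-half)

left, right = stereo_view_offsets(0.064)  # ~64 mm interocular (assumed)
print(left[0][3], right[0][3])
```

Because the engine knows the intent of every draw (skybox, HUD, post-process), it can place each one at a sensible depth per eye, which is exactly the information a driver-level intercept has lost.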

[quote name='chiz' post='1109606' date='Aug 27 2010, 01:33 PM']The problem isn't with Nvidia's approach, its the developers who have become more and more reliant on hackey post-processing and frame buffer effects that DO NOT have the proper depth values and instead, are either rendered in 2D, one-eye, or at screen-depth in 3D. Without S3D it doesn't matter and you won't notice the difference as everything is essentially at screen-depth and 2D, but in true S3D this results in very clear artifacts and rendering issues in 3D causing conflicts once you start adjusting Depth and increasing separation.[/quote]

They aren't hacky; there are perfectly valid reasons to do screen-space work, and in many cases it's the ideal place to sample something. And if, as a developer, you're not explicitly targeting S3D, you won't bother with the expense of retaining depth information.
This isn't a new thing, either: over a decade ago, when I worked on N64 titles, we never wrote depth for the sky box; it was unnecessary and costly.

[quote name='chiz' post='1109606' date='Aug 27 2010, 01:33 PM']Also, major game devs have already begun taking S3D into account by adjusting and implementing the tools into their engines. UE3 for example announced they were integrating 3D Vision support into their engine earlier this year: [url="http://hothardware.com/News/Unreal-Engine-3-Gains-NVIDIA-3D-Vision-Support-The-Future-Is-Depth/"]http://hothardware.com/News/Unreal-Engine-...uture-Is-Depth/[/url]

CryTek also announced they were going to make CryEngine 3 synonymous with Stereo 3D as a major selling point and tech enabler with ambitions to make it a major commercially licensed engine: [url="http://www.tomshardware.com/news/crysis-cryengine-crytek-stereoscopic-3d,9774.html"]http://www.tomshardware.com/news/crysis-cr...ic-3d,9774.html[/url]

For UE3 we've seen how hit and miss 3D Vision can be with the engine (Batman AA, Borderlands as good examples, many others as poor examples), so I'm sure its just a matter of either disabling some of the problematic post-processing filters UE3 loves to hack into their games, and to properly render overlays and crosshairs at object-depth rather than screen-depth. That's really all there is to it, we've seen Nvidia can selectively target what objects and game assets they render in 2D/3D but we've also seen they cannot easily change the actual depth values, which would involve intercepting rendering calls and injecting their own. Even using their own laser sight, we've seen how performance expensive that can be.[/quote]

I agree we're starting to see some small steps forward. 3D engines supporting it doesn't surprise me; it's largely a checkbox item for engine vendors. But having an engine that supports it and shipping a game that supports it aren't the same thing. The larger issue is big publishers adding S3D to their test matrix; we won't see high-quality support until it's tested. Right now, your best bet for a game having good S3D support is one of the lead developers actually playing games in S3D.
#25
Posted 08/27/2010 09:01 PM   
[quote name='chiz' post='1109606' date='Aug 27 2010, 01:33 PM']I'd disagree that Nvidia's approach is fundamentally wrong from a design standpoint, they're simply doing the purest form of 3D using the game's depth values and applying separation/convergence values to render a 2nd camera view. This is actually technically the best way to approach 3D, the downside however is that its performance-expensive as you actually render the scene 2x at different camera angles.[/quote]

You're missing my point, yes rendering the scene twice is the "best" way to go, trying to do the right thing given a stream of primitives already sent to the GPU is the wrong place to do it.
There are way to many things you simply cannot know about the scene that late in the pipeline.
The right place is when you still know the intent of the primitives being rendered, that's inside the games rendering logic, developers (or engine devs for those that buy tech) should in an ideal world be the ones implementing S3D, not the driver writer.

[quote name='chiz' post='1109606' date='Aug 27 2010, 01:33 PM']The problem isn't with Nvidia's approach, its the developers who have become more and more reliant on hackey post-processing and frame buffer effects that DO NOT have the proper depth values and instead, are either rendered in 2D, one-eye, or at screen-depth in 3D. Without S3D it doesn't matter and you won't notice the difference as everything is essentially at screen-depth and 2D, but in true S3D this results in very clear artifacts and rendering issues in 3D causing conflicts once you start adjusting Depth and increasing separation.[/quote]

They aren't hacky; there are perfectly valid reasons to do screen-space work, and in many cases it's the ideal place to sample something. And if, as a developer, you're not explicitly targeting S3D, you won't bother with the expense of retaining depth information.
This isn't a new thing either: over a decade ago, when I worked on N64 titles, we never wrote depth for the skybox. It was unnecessary and costly.
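Here's why depth-less screen-space work breaks in S3D, using the commonly described driver model where the horizontal shift is separation * (1 - convergence / depth). The model and numbers are assumptions for illustration:

```python
# Assumed per-pixel stereo shift model, as commonly described for driver-side
# stereo: parallax = separation * (1 - convergence / depth).
def parallax(depth, separation, convergence):
    return separation * (1.0 - convergence / depth)

SEP, CONV = 0.065, 25.0

# Distant geometry (e.g. a skybox) approaches maximum parallax:
far = parallax(1e9, SEP, CONV)    # approximately equal to separation

# A screen-space effect composited with no depth written behaves as if it
# sat on the convergence plane: zero parallax, flat at screen depth, which
# is why it visibly detaches from the 3D scene behind it.
flat = parallax(CONV, SEP, CONV)  # 0.0
```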

[quote name='chiz' post='1109606' date='Aug 27 2010, 01:33 PM']Also, major game devs have already begun taking S3D into account by adjusting and implementing the tools into their engines. UE3 for example announced they were integrating 3D Vision support into their engine earlier this year: [url="http://hothardware.com/News/Unreal-Engine-3-Gains-NVIDIA-3D-Vision-Support-The-Future-Is-Depth/"]http://hothardware.com/News/Unreal-Engine-...uture-Is-Depth/[/url]

CryTek also announced they were going to make CryEngine 3 synonymous with Stereo 3D as a major selling point and tech enabler with ambitions to make it a major commercially licensed engine: [url="http://www.tomshardware.com/news/crysis-cryengine-crytek-stereoscopic-3d,9774.html"]http://www.tomshardware.com/news/crysis-cr...ic-3d,9774.html[/url]

For UE3 we've seen how hit and miss 3D Vision can be with the engine (Batman AA, Borderlands as good examples, many others as poor examples), so I'm sure its just a matter of either disabling some of the problematic post-processing filters UE3 loves to hack into their games, and to properly render overlays and crosshairs at object-depth rather than screen-depth. That's really all there is to it, we've seen Nvidia can selectively target what objects and game assets they render in 2D/3D but we've also seen they cannot easily change the actual depth values, which would involve intercepting rendering calls and injecting their own. Even using their own laser sight, we've seen how performance expensive that can be.[/quote]

I agree we're starting to see some small steps forward. 3D engines supporting it doesn't surprise me; it's largely a checkbox item for engine vendors. But having an engine that supports it and writing a game that supports it aren't the same thing. The larger issue is big publishers adding S3D to their test matrix: we won't see high-quality support until it's tested. Right now your best bet for a game having good S3D support is one of the lead developers actually playing games in S3D.
#26
Posted 08/27/2010 09:01 PM   
[quote name='andysonofbob' post='1109616' date='Aug 27 2010, 04:48 PM']I notice you have used a similar method to what you need to do with Torchlight, which also has unit icons rendered in 2D.

By pulling out convergence as you add depth keeping a unit near the center of the screen at screen depth, everything works in 3D. Doesnt SC2 allow you to zoom in and out then?

This technique works wonders in Torchlight but sucks with games like World in Conflict where you can zoom and pan the camera. I used to love WiC - sniff![/quote]
Yep, I do this with pretty much any game that poorly implements cursors, overlays, unit icons, or crosshairs at screen depth or in 2D. The goal is to line up the 3D objects with the 2D overlays attached to them so they are in focus; everything else can then have depth/pop-out and still look fine, because there's no 2D to interfere. Once you start zooming or adjusting camera angles, however, things start to break again, which is what you see with WiC (you can rotate no problem, but you can't zoom). In SC2 it doesn't really matter, because the zoom/FOV adjustment range is so pathetic I doubt anyone is doing much "zooming" anyway.
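The trick can be put in numbers. Under the usual parallax model (assumed: shift = separation * (1 - convergence / depth); the depths here are made up), pulling convergence out to the unit's own depth zeroes its parallax so the screen-depth 2D overlay lines up with it, and zooming breaks the alignment again:

```python
# Assumed stereo shift model: parallax = separation * (1 - convergence / depth).
def parallax(depth, separation, convergence):
    return separation * (1.0 - convergence / depth)

SEPARATION = 0.065
unit_depth = 30.0  # hypothetical view depth of a unit near screen center

# Dial convergence until it equals the unit's depth: the unit now has zero
# parallax, exactly matching its 2D overlay drawn at screen depth.
convergence = unit_depth
aligned = parallax(unit_depth, SEPARATION, convergence)   # 0.0

# Zooming in moves the unit to a different depth, so the alignment breaks:
zoomed = parallax(15.0, SEPARATION, convergence)          # nonzero
```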

-=HeliX=- Mod 3DV Game Fixes
My 3D Vision Games List Ratings

Intel Core i7 5930K @4.5GHz | Gigabyte X99 Gaming 5 | Win10 x64 Pro | Corsair H105
Nvidia GeForce Titan X SLI Hybrid | ROG Swift PG278Q 144Hz + 3D Vision/G-Sync | 32GB Adata DDR4 2666
Intel Samsung 950Pro SSD | Samsung EVO 4x1 RAID 0 |
Yamaha VX-677 A/V Receiver | Polk Audio RM6880 7.1 | LG Blu-Ray
Auzen X-Fi HT HD | Logitech G710/G502/G27 | Corsair Air 540 | EVGA P2-1200W

#27
Posted 08/27/2010 09:16 PM   
Why can't you all just have some patience and wait? I haven't bought it yet because of the poor S3D support. But I will buy it once it has proper support which is only weeks away!
[quote]expect to have it completed and available by the [b]middle of September[/b]. Patch 1.1 will contain a number of improvements including additional mod features, Editor improvements and bug fixes, some custom game improvements, [b]support for NVIDIA’s 3D Vision[/b], and more.[/quote]
#29
Posted 08/27/2010 09:22 PM   