Possibility of automated Nvidia 3D shader fix?
[Attention programmers: This post is from a layman and could cause your eyes to roll so far back in your head they could get stuck there. Obviously, it's just science, so use caution when reading.] :)

I was thinking about how the proportion of distance between objects [separation] seems to stay the same for a given 3D setting and distance. Couldn't this information be used to generate automated fixes, since object X, at distance Y, must then have Z amount of separation for a given set of settings? I'm making a lot of assumptions here. Any thoughts from the experts?
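
For the curious, here is a minimal Python sketch of the separation rule that community shader fixes generally assume the driver applies (a clip-space x shift proportional to depth minus convergence); the function name and the numbers are illustrative, not anything from the driver itself:

[code]
# Assumed stereo shift, as commonly described in shader-fix writeups:
#   x_eye = x + eye * separation * (w - convergence)
# where w is the vertex's clip-space w (its view depth).

def screen_disparity(w, separation, convergence):
    """On-screen offset, after the perspective divide, for a point at depth w."""
    return separation * (w - convergence) / w

# A point at the convergence distance lands at screen depth (zero offset);
# nearer points pop out of the screen, farther points recede into it.
for w in (1.0, 2.0, 5.0, 50.0, 1000.0):
    print(w, screen_disparity(w, separation=0.05, convergence=2.0))
[/code]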

One assumption I don't think I'm making is that these algorithms, if invested in by Nvidia, would be better made SOONER rather than LATER: assuming the costs stay the same, doing it sooner would simply give customers a longer period of usable 3D.

It makes me wonder whether physically based materials will change anything, and also how soon ray tracing will take over, if at all, and what changes that will bring for 3D.

46" Samsung ES7500 3DTV (checkerboard, high FOV as desktop monitor, highly recommend!) - Metro 2033 3D PNG screens - Metro LL filter realism mod - Flugan's Deus Ex:HR Depth changers - Nvidia tech support online form - Nvidia support: 1-800-797-6530

#1
Posted 08/06/2014 08:06 PM   
Are you talking about the fact that most of the scene is rendered at almost max depth?

Only a narrow range of distances renders out of the screen or significantly below max depth.
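
A quick numeric illustration of that point, still assuming the shift rule sketched above (the values are arbitrary):

[code]
# Disparity saturates quickly with distance under the assumed formula.
sep, conv = 0.05, 2.0
for w in (2, 4, 10, 50, 500):
    d = sep * (w - conv) / w
    print(f"depth {w:>3}: {100 * d / sep:5.1f}% of max separation")
# Passes 90% of max by roughly ten times the convergence distance, which is
# why only a narrow band of distances reads as noticeably below max depth.
[/code]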

Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?

donations: ulfjalmbrant@hotmail.com

#2
Posted 08/06/2014 08:27 PM   
Said another way, I was thinking you could automatically measure the actual virtual distance of objects in the game (values the graphics card can know), then, based on those distances, see which objects do not have the correct amount of separation.
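
A hypothetical checker along those lines might look like this in Python; the formula is the same assumed one as above, and every name here is made up for illustration:

[code]
# Flag an element whose rendered disparity does not match what its
# reported depth predicts (formula and tolerance are assumptions).

def expected_disparity(w, sep, conv):
    return sep * (w - conv) / w

def looks_broken(reported_w, observed_disparity, sep, conv, tol=1e-3):
    return abs(observed_disparity - expected_disparity(reported_w, sep, conv)) > tol

# A shadow stuck at screen depth (disparity 0) under an object 10 units away:
print(looks_broken(10.0, 0.0, sep=0.05, conv=2.0))  # True -> candidate for a fix
[/code]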

46" Samsung ES7500 3DTV (checkerboard, high FOV as desktop monitor, highly recommend!) - Metro 2033 3D PNG screens - Metro LL filter realism mod - Flugan's Deus Ex:HR Depth changers - Nvidia tech support online form - Nvidia support: 1-800-797-6530

#3
Posted 08/06/2014 08:39 PM   
If something is rendered at the wrong depth, how can you fix it by reading the wrong depth?

If it's already at the right depth, what would you try to fix?

Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?

donations: ulfjalmbrant@hotmail.com

#4
Posted 08/06/2014 08:48 PM   
I've always been interested in trying to intercept the rendering of models and mapping the HUD text to them, placing the text in 3D at a depth just above the model. So far it has only been an idea, as it is best done in the few games that support 3D Vision natively in that fashion. Pushing the HUD very deep using one of our wrappers can sometimes be good enough.
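
As a rough Python sketch of the idea, under the same assumed shift rule: project the model's position, apply the per-eye shift, and draw the label at the result. A real version would have to hook the game's draw calls; everything here is illustrative:

[code]
import numpy as np

def label_screen_pos(world_pos, view_proj, eye, sep, conv):
    """Where to draw a HUD label so it sits at the model's stereo depth."""
    p = view_proj @ np.append(world_pos, 1.0)   # world -> clip space
    p[0] += eye * sep * (p[3] - conv)           # per-eye shift (assumed rule)
    ndc = p[:3] / p[3]                          # perspective divide
    return ndc[0], ndc[1]

# Toy projection where clip w = view z, just to exercise the function:
proj = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0]], float)
print(label_screen_pos(np.array([0.5, 1.0, 4.0]), proj, +1, sep=0.05, conv=2.0))
[/code]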

Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?

donations: ulfjalmbrant@hotmail.com

#5
Posted 08/06/2014 08:58 PM   
[quote="Flugan"]If something is rendered at the wrong depth how can you fix it by reading the wrong depth. [/quote] Im not sure exactly, but since the graphics card knows the distance between any objects, or the viewer, shouldn't it be able to determine the amount of separation for a given convergence and depth setting? [quote]If it's already at right depth what would you try to fix.[/quote]In this case, the algorithm would determine it was correct and move on. Keep in mind, i have a layman perspective, i have no idea what actual values are being throw around or that are easily accessible or easy to modify.
Flugan said: If something is rendered at the wrong depth, how can you fix it by reading the wrong depth?

I'm not sure exactly, but since the graphics card knows the distance between any objects, or to the viewer, shouldn't it be able to determine the amount of separation for a given convergence and depth setting?


Flugan said: If it's already at the right depth, what would you try to fix?

In this case, the algorithm would determine it was correct and move on.

Keep in mind, I have a layman's perspective; I have no idea what actual values are being thrown around, or which are easily accessible or easy to modify.

46" Samsung ES7500 3DTV (checkerboard, high FOV as desktop monitor, highly recommend!) - Metro 2033 3D PNG screens - Metro LL filter realism mod - Flugan's Deus Ex:HR Depth changers - Nvidia tech support online form - Nvidia support: 1-800-797-6530

#6
Posted 08/06/2014 09:09 PM   
What you're talking about sounds very similar to 3D Vision compatibility mode.

Render once, then check the z-buffer, where the distances are kept, and produce a 3D image from that single render. Things drawn at screen depth get teleported to the right distance in the process.
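
In toy form, that single-render reprojection could look like the Python below: a naive nearest-pixel warp with no occlusion handling or hole filling, using the same assumed disparity formula as earlier in the thread:

[code]
import numpy as np

def reproject_eye(color, depth_w, sep, conv, eye=+1):
    """Warp a mono render into one eye's view using per-pixel z-buffer depth."""
    h, w_px = depth_w.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w_px):
            disp = sep * (depth_w[y, x] - conv) / depth_w[y, x]
            nx = int(round(x + eye * disp * w_px))   # horizontal shift in pixels
            if 0 <= nx < w_px:
                out[y, nx] = color[y, x]             # naive: last write wins
    return out

# A flat 2D effect inherits whatever depth the z-buffer holds beneath it,
# which is the "teleported to the right distance" behaviour described above.
[/code]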

Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?

donations: ulfjalmbrant@hotmail.com

#7
Posted 08/06/2014 09:30 PM   
Lighting and shadows only render at certain distances, and in the case of engines like Carmack's id Tech 4, only things within certain areas will render at all. I would think that means whatever process did this would have to move the player camera (or some probe node) through the world at intervals, so as to get close enough to everything to trigger its rendering and measure it.

46" Samsung ES7500 3DTV (checkerboard, high FOV as desktop monitor, highly recommend!) - Metro 2033 3D PNG screens - Metro LL filter realism mod - Flugan's Deus Ex:HR Depth changers - Nvidia tech support online form - Nvidia support: 1-800-797-6530

#8
Posted 08/06/2014 09:44 PM   
In theory, that sounds simple enough. But then factor in that a single object has diffuse, bump, and specular maps, plus shadows and reflections, and it gets a lot more complicated. Then there are particle and atmospheric effects.

#9
Posted 08/07/2014 07:06 AM   
Like Pirate said, I believe the issue is that, at most, the card/game engine might know the distance to and between the polygonal "entities" (a character, a tree, a house, etc.) but not the distances of the effects applied to those entities.

Also, most developers probably just add effects/shaders on top of older engines to make games look nicer. But those older engines don't know about 3D Vision, so they apply the textures/shaders only at the correct 2D object position, not at its 3D z-buffer position.
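
Stated as a one-liner (again assuming the shift rule from earlier in the thread; the function is purely illustrative), the fix is to give the effect the same shift the surface under it already received:

[code]
# Broken: a 2D effect composited at x_mono keeps zero disparity (screen depth).
# Fixed: sample the z-buffer under the effect and apply the surface's shift.
def corrected_effect_x(x_mono, zbuffer_w, sep, conv, eye=+1):
    return x_mono + eye * sep * (zbuffer_w - conv) / zbuffer_w

print(corrected_effect_x(0.3, zbuffer_w=10.0, sep=0.05, conv=2.0))  # 0.34
[/code]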

I think one example supporting my theory is that many older games, with fewer shaders and less visual complexity, seem to have fewer issues with 3D Vision. I've recently played Prince of Persia 2008 (which was kind of boring) and Prince of Persia: The Forgotten Sands (excellent), and these games have almost perfect 3D, with no fixes needed.

If the thing Libertine suggests were that simple, it would have been implemented already.
What would be nice, however, is if game engines took 3D into account, so as to not even allow wrong positioning of textures/shaders, always applying them at the correct depth (or asking coders to specify the correct depth, then always keeping effects attached to their entities).

Unfortunately, I believe this will not happen in my lifetime. The closest we'll come to flawless 3D in the near future is perhaps the VR type of rendering, one screen per eye. If game engines are built to account for that technology, and games are rendered accordingly, then Nvidia, if it still continues with its amazing "3D Vision" technology, can simply piggyback on the VR rendering and show each eye the image initially destined for the corresponding separate display, via its interleave/framepack method.

#10
Posted 08/07/2014 08:11 AM   