What is the proper way of rendering a 2D background? Currently drawing at screen depth, which looks wrong in stereoscopic 3D
I'm working on a small project at the moment and I figured I'd try it out in 3D. I wasn't expecting much, but it actually worked pretty close to perfectly. Just one thing: while the game is a 3D environment, the background is 2D* and always sits in the background (kind of hard to explain, but it doesn't pan or anything; it's effectively a fullscreen image with the geometry drawn on top of it). I'm currently just using D3DXSprite with ZWRITEENABLE set to FALSE, because that's super easy, but in 3D it really messes with my eyes. I think D3DXSprite uses screen space rather than world space, so the sprite always ends up at screen depth.
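Roughly what I'm doing right now, stripped down (the real code has a bit more going on, and the function/variable names here are just placeholders):
[code]
#include <d3d9.h>
#include <d3dx9.h>

// Current approach: draw the background with ID3DXSprite in screen space,
// with Z writes off so the 3D geometry drawn afterwards isn't rejected by
// the depth test. Easy, but the sprite sits at screen depth in 3D Vision.
void DrawBackgroundSprite(IDirect3DDevice9* device,
                          ID3DXSprite* sprite,
                          IDirect3DTexture9* background)
{
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);

    sprite->Begin(D3DXSPRITE_ALPHABLEND);
    sprite->Draw(background, NULL, NULL, NULL, D3DCOLOR_XRGB(255, 255, 255));
    sprite->End();

    device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
}
[/code]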
So, I was just wondering: would it actually be possible to make this work properly, or should I just forget about it? I mean, I could write a draw-quad type function and give the background some depth, but wouldn't 3D Vision mode cut off the sides of the image?
*: There's a reason for doing it this way. I'm writing an interpreter for some old game levels, and the backgrounds are all 2D (I assume because the camera always faced one direction), and I don't want to change any of the resources.
PS: I don't really know if this is a good place to post programming questions, but as far as I know there isn't a big stereoscopic programming forum yet. It would actually be really useful if nVidia could release some sort of 3D coding FAQ; there are quite a lot of fairly common problems in 3D Vision mode that could probably be addressed fairly easily. I see backgrounds getting rendered at the wrong depth quite a lot with 3D Vision (I think the Source engine has that problem), and there are other common problems as well, like shadows rendering weirdly (Dead Space, Borderlands, Mass Effect).
[quote name='ERP' date='25 January 2011 - 03:03 AM' timestamp='1295924617' post='1183170']
You need to draw it in 3D at some Z that corresponds to the maximum depth of your scene.
[/quote]
Right, I'll implement some sort of draw-quad function then. I'm guessing it will still cut off the edges of the picture, though.
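Probably something along these lines, if I understand the suggestion right (untested sketch; the quad is specified directly in view space, and the function and parameter names are just mine):
[code]
#include <d3d9.h>
#include <d3dx9.h>
#include <math.h>

struct BackgroundVertex
{
    float x, y, z;   // view-space position
    float u, v;      // texture coordinates
};
#define BACKGROUND_FVF (D3DFVF_XYZ | D3DFVF_TEX1)

// Draw the background as a real quad at view-space depth backZ (at or just
// behind the furthest scene geometry), sized to exactly fill the view
// frustum at that depth. fovY (radians) and aspect are assumed to match
// the projection the rest of the scene uses. Drawn first each frame.
void DrawBackgroundQuad(IDirect3DDevice9* device,
                        IDirect3DTexture9* background,
                        float backZ, float fovY, float aspect)
{
    const float halfH = backZ * tanf(fovY * 0.5f);
    const float halfW = halfH * aspect;

    const BackgroundVertex quad[4] =
    {
        { -halfW,  halfH, backZ, 0.0f, 0.0f },  // top-left
        {  halfW,  halfH, backZ, 1.0f, 0.0f },  // top-right
        { -halfW, -halfH, backZ, 0.0f, 1.0f },  // bottom-left
        {  halfW, -halfH, backZ, 1.0f, 1.0f },  // bottom-right
    };

    // The quad is given directly in view space, so world/view are identity.
    D3DXMATRIX identity;
    D3DXMatrixIdentity(&identity);
    device->SetTransform(D3DTS_WORLD, &identity);
    device->SetTransform(D3DTS_VIEW, &identity);

    // Depth stays enabled; since this is drawn first and sits at the back,
    // everything else simply draws over it.
    device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

    device->SetTexture(0, background);
    device->SetFVF(BACKGROUND_FVF);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(BackgroundVertex));
}
[/code]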
No, you can still write a shader that gives you the correct pixel mapping, but you have to have Z enabled (this is a restriction of the heuristics NVIDIA uses), and the computed depth value has to be correct at the end of the shader.
I think one of the 3D Vision white papers on the NVIDIA dev site has a basic vertex shader that will do the right thing.
[quote name='ERP' date='25 January 2011 - 06:34 AM' timestamp='1295937263' post='1183217']
No, you can still write a shader that gives you the correct pixel mapping, but you have to have Z enabled (this is a restriction of the heuristics NVIDIA uses), and the computed depth value has to be correct at the end of the shader.
I think one of the 3D Vision white papers on the NVIDIA dev site has a basic vertex shader that will do the right thing.
[/quote]
Interesting. I'll definitely take a look at the nVidia dev site; I knew they had one, but it hadn't even occurred to me to look there.
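For anyone who finds this thread later: based on the description above, and on my understanding that the stereo driver offsets the clip-space x of each vertex in proportion to its output w, I'm guessing the vertex shader ends up looking roughly like this. Untested sketch; the shader in the white paper may well differ, and names like backZ and overscan are just mine.
[code]
#include <d3d9.h>
#include <d3dx9.h>

// The quad comes in as [-1,1] screen coordinates. The shader gives the
// output position w == backZ and z == w, so after the perspective divide
// the quad still covers the screen, lands on the far plane in the depth
// buffer, and (as far as I understand the stereo heuristic) gets shifted
// per eye as if it sat at depth backZ. The overscan factor makes it a
// little wider than the screen so that shift doesn't expose the edges.
static const char g_backgroundVS[] =
    "float backZ;      // view-space depth to place the background at\n"
    "float overscan;   // e.g. 1.05 = 5% wider than the screen\n"
    "struct VS_IN  { float2 pos : POSITION; float2 uv : TEXCOORD0; };\n"
    "struct VS_OUT { float4 pos : POSITION; float2 uv : TEXCOORD0; };\n"
    "VS_OUT main(VS_IN input)\n"
    "{\n"
    "    VS_OUT output;\n"
    "    output.pos = float4(input.pos * overscan * backZ, backZ, backZ);\n"
    "    output.uv  = input.uv;\n"
    "    return output;\n"
    "}\n";

// Compile once at startup (error handling trimmed for brevity).
IDirect3DVertexShader9* CreateBackgroundShader(IDirect3DDevice9* device)
{
    ID3DXBuffer* bytecode = NULL;
    ID3DXBuffer* errors = NULL;
    IDirect3DVertexShader9* shader = NULL;

    if (SUCCEEDED(D3DXCompileShader(g_backgroundVS, sizeof(g_backgroundVS) - 1,
                                    NULL, NULL, "main", "vs_2_0",
                                    0, &bytecode, &errors, NULL)))
    {
        device->CreateVertexShader(
            (const DWORD*)bytecode->GetBufferPointer(), &shader);
        bytecode->Release();
    }
    if (errors) errors->Release();
    return shader;
}
[/code]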