Starcraft 2 and WTF? Rated good but really does nvidia even play the games???
[quote name='ERP' post='1109621' date='Aug 27 2010, 05:01 PM']You're missing my point: yes, rendering the scene twice is the "best" way to go, but trying to do the right thing given a stream of primitives already sent to the GPU is the wrong place to do it.
There are way too many things you simply cannot know about the scene that late in the pipeline.
The right place is while you still know the intent of the primitives being rendered, and that's inside the game's rendering logic. Developers (or engine devs, for those that buy tech) should in an ideal world be the ones implementing S3D, not the driver writer.[/quote]
And again, if all the primitives are rendered properly with appropriate Z-values, it should not matter whether the offsets are calculated before they're sent to the GPU by the game engine, or afterwards by the S3D driver. The problem is, they're not. I agree it would be best if developers undertook the task of implementing Stereo 3D, but the fact of the matter is, they're not. In the absence of a solution, would you expect Nvidia to rely on devs to implement one? Or is it better for them to approach the problem with their own solution that does the same thing, and relies only on devs rendering their games in actual 3D and not 2D? Someone has to innovate or no progress would ever be made.
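To illustrate why "who applies the offset" shouldn't matter when the depth values are right, here's a rough sketch of the separation/convergence transform as it's usually described for driver-side stereo. The names and numbers are mine for illustration, not Nvidia's actual driver code:

[code]
#include <cstdio>

// Clip-space position as produced by the game's own projection transform.
struct Vec4 { float x, y, z, w; };

// Shift a clip-space vertex for one eye. 'eye' is -1 for the left eye, +1 for the
// right; 'separation' and 'convergence' are the user-tunable stereo settings.
// A vertex at w == convergence gets zero parallax (it sits at screen depth);
// anything farther away appears behind the screen, anything nearer pops out.
Vec4 StereoOffset(Vec4 p, float eye, float separation, float convergence)
{
    p.x += eye * separation * (p.w - convergence);
    return p;
}

int main()
{
    Vec4 v     = { 0.0f, 0.0f, 0.5f, 10.0f };           // a vertex 10 units from the camera
    Vec4 left  = StereoOffset(v, -1.0f, 0.05f, 5.0f);
    Vec4 right = StereoOffset(v, +1.0f, 0.05f, 5.0f);
    printf("parallax = %f\n", right.x - left.x);        // positive -> behind the screen
    return 0;
}
[/code]

The transform is the same whether the engine bakes it into its own projection matrices or the driver patches it in after the fact; it only falls apart when w isn't a real depth, which is exactly the problem with all the 2D-at-screen-depth stuff.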
[quote name='ERP' post='1109621' date='Aug 27 2010, 05:01 PM']They aren't hacky; there are perfectly valid reasons to do screen-space work, and in many cases it's the ideal place to sample something. And if as a developer you're not explicitly targeting S3D, you won't bother with the expense of retaining depth information.
This isn't a new thing either; over a decade ago when I worked on N64 titles we never wrote depth for the sky box, it was unnecessary and costly.[/quote]
They are hacky when a game claims it's rendered in 3D but has 2D effects hacked in all over the place. Some of this is due to API and hardware limitations, and we clearly see that in the reliance on post-processing and deferred rendering/lighting on DX9 and DX9-era hardware. But that all changed with DX10 hardware, which allows readback of the depth buffer alongside color samples, so these hacky rendering methods are once again a poor excuse. If you look at some older games, even going as far back as classic OpenGL or DX8-era games (Halo 2, for example), they're perfect in 3D because they don't use any of these hacky post-process/framebuffer effects that lack proper depth values.
Still, most of it just comes down to laziness/oversight and devs taking the path of least resistance. Not implementing 3D crosshairs is probably the most glaring oversight: you don't even need to adjust the depth dynamically, you just need to render the crosshair far enough away that you're not cross-eyed trying to focus on objects behind it.
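To show how little work that actually is, here's a minimal sketch (hypothetical helper names, nothing from any real engine) of placing the crosshair at a fixed distance along the view direction instead of blitting it at screen depth:

[code]
// Hypothetical sketch: put the crosshair quad at a fixed world-space distance in
// front of the camera so the stereo driver gives it real parallax, instead of
// drawing it as a flat overlay at screen depth.
struct Vec3 { float x, y, z; };

Vec3 CrosshairWorldPos(const Vec3& camPos, const Vec3& camForward, float distance)
{
    // A distance of e.g. 100 units is usually "far enough" that focusing on
    // targets behind the crosshair no longer forces your eyes to cross.
    return { camPos.x + camForward.x * distance,
             camPos.y + camForward.y * distance,
             camPos.z + camForward.z * distance };
}
// Render the crosshair quad at this position through the normal 3D pipeline
// (depth test off so it stays visible); the fancier, dynamic version reads the
// depth buffer under the reticle and uses that distance instead.
[/code]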
[quote name='ERP' post='1109621' date='Aug 27 2010, 05:01 PM']I agree we're starting to see some small steps forward. 3D engines supporting it doesn't surprise me; it's largely a checkbox item for engine vendors. But having an engine that supports it and writing a game that supports it aren't the same thing. The larger issue is big publishers adding S3D to their test matrix; we won't see high-quality support until it's tested. Right now your best bet for a game having good S3D support is one of the lead developers actually playing games in S3D.[/quote]
Well, the first step is to implement correct and consistent depth values for all rendered objects and overlays, and for baked-in effects that can't be corrected, to let us disable them completely. Disabling effects is a viable workaround only when controls and options to disable them are exposed. Most games don't implement these controls in-game, but we can use a variety of other methods, such as deleting assets or modifying .INI files. No, we shouldn't have to do this, but that's the reality when games are designed without S3D in mind. Going forward, the best we can hope for is that developers at least consider S3D and don't explicitly break things, before they even consider implementing S3D-specific "wow" features like extra pop-out or adapted shading/dithering effects and color palettes for a better S3D experience.
[quote name='chiz' post='1109639' date='Aug 27 2010, 02:31 PM']And again, if all the primitives are rendered properly with appropriate Z-values, it should not matter whether the offsets are calculated before they're sent to the GPU by the game engine, or afterwards by the S3D driver. The problem is, they're not. I agree it would be best if developers undertook the task of implementing Stereo 3D, but the fact of the matter is, they're not. In the absence of a solution, would you expect Nvidia to rely on devs to implement one? Or is it better for them to approach the problem with their own solution that does the same thing, and relies only on devs rendering their games in actual 3D and not 2D? Someone has to innovate or no progress would ever be made.[/quote]
No, as I said in my first response, NVidia has no choice: if they want S3D, they're going to have to do the work.
That doesn't make it the right long-term approach. Even if everything were rendered with correct Z by every game, it would still not be the right way of doing it. At a minimum, NVidia would still be redundantly rendering offscreen surfaces in stereo and potentially creating artifacts, which they would then have to fix with special-case driver code. They have no way to tell whether a camera in the scene is the camera, or merely a point of view in the scene for some effect.
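To make the camera point concrete, a crude engine-side sketch (all names made up): the renderer knows which passes are the player's view and which are just auxiliary viewpoints for effects, so it can stereoize only the former. That's exactly the information a driver looking at a raw primitive stream doesn't have:

[code]
#include <vector>

// Illustrative only: a minimal pass list where the engine's own intent decides
// what gets rendered per eye and what stays single-view.
enum class Eye { Left, Right, Mono };

struct RenderPass {
    bool isPlayerCamera;    // intent the engine knows; invisible to the driver
    void (*draw)(Eye);      // submits this pass for a given eye
};

void RenderFrame(const std::vector<RenderPass>& passes)
{
    for (const RenderPass& p : passes) {
        if (p.isPlayerCamera) {
            p.draw(Eye::Left);   // only the real camera is rendered twice,
            p.draw(Eye::Right);  // offset by the stereo separation
        } else {
            p.draw(Eye::Mono);   // shadow maps, reflections, mirrors, minimaps:
        }                        // one view each, no stereo redundancy
    }
}
[/code]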
[quote name='chiz' post='1109639' date='Aug 27 2010, 02:31 PM']They are hacky when a game claims it's rendered in 3D but has 2D effects hacked in all over the place. Some of this is due to API and hardware limitations, and we clearly see that in the reliance on post-processing and deferred rendering/lighting on DX9 and DX9-era hardware. But that all changed with DX10 hardware, which allows readback of the depth buffer alongside color samples, so these hacky rendering methods are once again a poor excuse. If you look at some older games, even going as far back as classic OpenGL or DX8-era games (Halo 2, for example), they're perfect in 3D because they don't use any of these hacky post-process/framebuffer effects that lack proper depth values.[/quote]
Old games didn't do these things because they couldn't afford to; fill rate was too expensive.
None of these developers were targeting S3D, so you get what you get.
Hell, I've done the opposite in renderers: used depth values for nothing but occlusion, i.e. all of the geometry was supposed to be coplanar, but I wrote different fixed depths to sort it correctly.
Z values aren't depth in a renderer; they're just values in a buffer that happens to have comparison operations.
If you're targeting S3D you have to be more careful with them.
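Something like this, to make that concrete (toy values, not from any shipped title): a logically flat, layered scene where Z is nothing but a sort key, which is perfectly fine in 2D and meaningless the moment a stereo driver treats it as distance:

[code]
// Toy example: every sprite here is logically coplanar (a flat 2D scene), but each
// layer writes a different fixed Z purely so the depth comparison sorts the overlap.
// The numbers encode draw order, not distance, so deriving parallax from them
// produces garbage.
struct Layer { const char* name; float fixedZ; };

static const Layer layers[] = {
    { "background", 0.90f },
    { "midground",  0.50f },
    { "characters", 0.30f },
    { "foreground", 0.10f },   // smallest Z wins the depth test and draws on top
};
[/code]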
[quote name='chiz' post='1109639' date='Aug 27 2010, 02:31 PM']Still, most of it just comes down to laziness/oversight and devs taking the path of least resistance. Not implementing 3D crosshairs is probably the most glaring oversight: you don't even need to adjust the depth dynamically, you just need to render the crosshair far enough away that you're not cross-eyed trying to focus on objects behind it.[/quote]
I'd say it's more a lack of priority for S3D support.
I've shipped more than a couple of games, and decisions are rarely made out of laziness. At the end of the day you have a set of features; given time constraints they won't all make it, so you prioritize and make hard decisions. In practice I've never even seen S3D considerations on a feature list; it's just not visible enough. That was my point about your only real shot at good support being a lead developer who actually uses it.
If the publisher had a marketing deal with NVidia for S3D support that was worth real $, I guarantee it'd make the cut, but cost-wise that's probably not viable for NVidia at this point.
[quote name='chiz' post='1109639' date='Aug 27 2010, 02:31 PM']Well, the first step is to implement correct and consistent depth values for all rendered objects and overlays, and for baked-in effects that can't be corrected, to let us disable them completely. Disabling effects is a viable workaround only when controls and options to disable them are exposed. Most games don't implement these controls in-game, but we can use a variety of other methods, such as deleting assets or modifying .INI files. No, we shouldn't have to do this, but that's the reality when games are designed without S3D in mind. Going forward, the best we can hope for is that developers at least consider S3D and don't explicitly break things, before they even consider implementing S3D-specific "wow" features like extra pop-out or adapted shading/dithering effects and color palettes for a better S3D experience.[/quote]
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']No, as I said in my first response, NVidia has no choice: if they want S3D, they're going to have to do the work.
That doesn't make it the right long-term approach. Even if everything were rendered with correct Z by every game, it would still not be the right way of doing it. At a minimum, NVidia would still be redundantly rendering offscreen surfaces in stereo and potentially creating artifacts, which they would then have to fix with special-case driver code. They have no way to tell whether a camera in the scene is the camera, or merely a point of view in the scene for some effect.[/quote]
Right, that's the whole point: they came up with a solution in the absence of one, so I'm not sure how you can say it's the wrong approach when you're not advocating a better solution, you're advocating the problem that needed a solution to begin with: lack of developer support for S3D. This is a simple chicken-and-egg, horse-and-carriage situation. We can claim hypotheticals all we like about which approach is better, but the fact remains that Nvidia's solution is compatible, with some problems, with nearly every "3D" game ever made, while the natively dev-implemented S3D options you're advocating are virtually non-existent. We have what? Avatar and CryEngine 3, compared to virtually every DirectX-compatible 3D game that will at least run in stereo with 3D Vision, albeit not all perfectly or free of problems. I'll take Nvidia's solution over the promise of one any day.
Also, I'm not sure what you mean by the driver not knowing which camera is "the camera". It has the original 2D-rendered camera as a reference; after that it's simply mimicking the position of our eyes relative to each other, which are always going to be side by side, and that's where the separation/convergence offsets come into play.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']Old games didn't do these things because they couldn't afford to; fill rate was too expensive.
None of these developers were targeting S3D, so you get what you get.
Hell, I've done the opposite in renderers: used depth values for nothing but occlusion, i.e. all of the geometry was supposed to be coplanar, but I wrote different fixed depths to sort it correctly.
Z values aren't depth in a renderer; they're just values in a buffer that happens to have comparison operations.
If you're targeting S3D you have to be more careful with them.[/quote]
Yep, and new games don't have to either, because of hardware and API improvements, so problems with games of that era are understandable, as I've already said. But at the same time, we've seen games using those same engines and rendering techniques implement additional workarounds on current hardware for a good 3D Vision experience, or at the very least allow the end user to disable the problematic effects.
I'll give another, similar example that goes back to your first point about Nvidia approaching the problem the wrong way. What's your stance on MSAA, or even more to the point, TrAA? Game engines that use deferred rendering/lighting don't support them natively for the same reason: the hardware and API at the time didn't allow depth values to be stored and read with the corresponding color samples. That changed with DX10-level hardware, and in games that did not natively implement AA, Nvidia wrote driver code that let the end user force AA, at a significant hit to performance and, in some cases, still with problems. Now that code has been worked into many of these engines as DX9 extensions that DO allow MSAA and TrAA to be implemented in-game. Should Nvidia have waited for devs to get around to actually implementing AA in their games instead of writing their own implementation in the driver? Once again, chicken and egg: I would take the imperfect solution today over the hope of one sometime in the future.
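For the record, the DX10-level change I keep referring to is simply that the depth buffer can be created so it's both a depth target and a texture you can sample later, something DX9 never exposed cleanly. A rough sketch in Direct3D 10 terms (error handling omitted):

[code]
#include <d3d10.h>

// Create a depth buffer that can later be sampled as a texture (after it's been
// unbound as the active depth target). Rough sketch; error checks omitted.
ID3D10ShaderResourceView* CreateReadableDepth(ID3D10Device* dev, UINT w, UINT h,
                                              ID3D10DepthStencilView** outDSV)
{
    D3D10_TEXTURE2D_DESC td = {};
    td.Width = w;  td.Height = h;
    td.MipLevels = 1;  td.ArraySize = 1;
    td.Format = DXGI_FORMAT_R24G8_TYPELESS;      // typeless so two views can alias it
    td.SampleDesc.Count = 1;
    td.Usage = D3D10_USAGE_DEFAULT;
    td.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;

    ID3D10Texture2D* tex = nullptr;
    dev->CreateTexture2D(&td, nullptr, &tex);

    D3D10_DEPTH_STENCIL_VIEW_DESC dsv = {};
    dsv.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;  // write depth/stencil through this view
    dsv.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2D;
    dev->CreateDepthStencilView(tex, &dsv, outDSV);

    D3D10_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;  // read the 24-bit depth in a shader
    srv.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
    srv.Texture2D.MipLevels = 1;
    ID3D10ShaderResourceView* view = nullptr;
    dev->CreateShaderResourceView(tex, &srv, &view);
    return view;
}
[/code]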
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']I'd say it's more a lack of priority for S3D support.
I've shipped more than a couple of games, and decisions are rarely made out of laziness. At the end of the day you have a set of features; given time constraints they won't all make it, so you prioritize and make hard decisions. In practice I've never even seen S3D considerations on a feature list; it's just not visible enough. That was my point about your only real shot at good support being a lead developer who actually uses it.
If the publisher had a marketing deal with NVidia for S3D support that was worth real $, I guarantee it'd make the cut, but cost-wise that's probably not viable for NVidia at this point.[/quote]
And again, S3D support isn't even a requirement: simply rendering a 3D game in actual 3D with accurate relative depth values is the only requirement for Nvidia's solution to work. We have numerous examples of games that work almost perfectly and never had 3D support in mind, simply because someone during development decided to render everything in 3D instead of mixing bits and pieces of 2D in with the 3D. I understand some of this comes down to design decisions around icons/overlays and other art assets, but something as simple as a 3D crosshair with a proper depth value is virtually effortless and really inexcusable. We've seen numerous games patch this in very easily after the fact, so it's not all about cost-cutting and design; in most cases it's simply a lack of attention to detail or an oversight.
Nvidia certainly is doing their part, but they really don't have to do anything other than encourage developers to render their 3D games in 3D consistently. Sure, they'll put in the extra work for their "3D Vision Ready" titles, but really they just have to get the word out to either render in 2D or render at the correct object depth; most do the worst thing possible and render in 3D at screen depth. Or, if they can't, then let the end user turn off those effects or disable problematic overlays, or even the crosshair.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']No as I said in my first response, NVidia has no choice, if they want S3D they're going to have to do the work.
That doesn't make it the right long term approach, even if everything were rendered with correct Z by every game it would still not be the right way of doing it. NVidia would still be minimally redundantly rendering offscreen surfaces in stereo and potentially creating artifacts, which they would have to fix with special case driver code. They have no way to tell if a camera in the scene is the camera, or merely a point of view in the scene for some effect.
Right, that's the whole point, they came up with a solution in the absence of one, so I'm not sure how you can say its the wrong approach when you're not advocating a better solution, you're advocating the problem that needed a solution to begin with: lack of developer support for S3D. This is a simple chicken/egg or horse/carriage situation, we can claim hypotheticals all we like about which approach is better but the fact remains, Nvidia's solution is compatible with nearly every "3D" game ever made with some problems while the native dev implemented S3D options you're advocating are virtually non-existent. We have what? Avatar and CryEngine 3 compared to virtually every DirectX compatible 3D game that will at least run in stereo with 3D Vision, albeit not all perfectly or free of problems. I'll take Nvidia's solution over the promise of one anyday.
Also, I'm not sure what you mean by the driver not knowing which camera is "the camera", they have the original 2D rendered camera as a reference, after that its simply mimicking the position of our eyes relative to other which are always going to be side-by-side, which is where the separation/convergence offsets come into play.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']Old games didn't do these things because they couldn't afford to, fill rate was too expensive.
None of these developers were targetting S3D, so you get what you get.
Hell I've done the opposite in renderers used depth values for nothing but occlusion, i.e all of the geometry was supposed to be coplanar, but I wrote different fixed depths to sort them correctly.
Z values aren't depth in a renderer they are just values, in a buffer that happens to have comparison operations.
If you're targetting S3D you have to be more careful with them.
Yep and new games don't have to either because of hardware and API improvements, so problems with games of that era are understandable as I've already stated. But at the same time, we've seen games using those same engines and rendering techniques implementing additional workarounds to avoid this on current hardware for a good 3D Vision experience, or at the very least, allowing the end-user to disable the problematic effects.
I'll give another similar example that would go back to your first point about Nvidia going about the problem the wrong way. What's your stance on MSAA, or even more to the point, TrAA? The game engines that use deferred rendering/lighting don't support them natively for the same reason, the hardware and API at the time didn't allow for depth values to be stored and read with corresponding color samples. That changed with DX10-level hardware and in games that did not natively implement AA, Nvidia wrote driver code that allowed the end-user to force AA at significant hit to performance and in some cases, still had problems. Now that code has been worked into many of these engines as DX9 extensions that DO allow MSAA and TrAA to be implemented in-game. Should Nvidia have waited for Devs to get around to actually implementing AA in their games instead of writing their own implementation via driver? Once again, chicken and egg, I would take the imperfect solution today over the hope of one sometime in the future.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']I'd say it's more a lack of priority for S3D support.
I've shipped more than a couple of games, and decisions are rarely made because of laziness, at the end of a day you have a set of features, given time constaints they won't all make it, you prioritize and make hard decisions. In practice I've never even seen S3D considerations on a feature list, it's just not visible enough, that was my point about your only real shot for good support is finding a developer that uses it.
If the publisher had a marketing deal with NVidia for S3Dsupport that was worth real $ I guarantee it's make the cut list, but cost wise it's probably not viable for NVidia at this point.
And again, S3D support isn't even a requirement, simply rendering a 3D game in actual 3D with accurate relative depth values is the only requirement for Nvidia's solution to work. We have numerous examples of games that do work almost perfectly that never had 3D support in mind simply because someone during the development phase decided to render everything in 3D instead of bits and pieces in 2D along with 3D. I understand some of this comes down to design decisions like icons/overlays and other art assets etc, but something as simple as a 3D crosshair with the proper depth value is virtually effortless and really unexcusable. We've seen numerous games patch this in very easily after the fact, so its not all about cost cutting and design, its simply a lack of attention to detail or an oversight in most cases.
Nvidia certainly is doing their part, but they really don't have to do anything other than encourage developers to render their 3D games in 3D consistently. Sure they'll put in the extra work for their "3D Vision Ready" titles, but really they just have to get the word out to either render in 2D or render at correct object-depth, but most do the worst thing possible and render in 3D at screen-depth. Or if they can't, then allow the end-user to turn off those effects or disable problematic overlays or even crosshair.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']No as I said in my first response, NVidia has no choice, if they want S3D they're going to have to do the work.
That doesn't make it the right long term approach, even if everything were rendered with correct Z by every game it would still not be the right way of doing it. NVidia would still be minimally redundantly rendering offscreen surfaces in stereo and potentially creating artifacts, which they would have to fix with special case driver code. They have no way to tell if a camera in the scene is the camera, or merely a point of view in the scene for some effect.[/quote]
Right, that's the whole point, they came up with a solution in the absence of one, so I'm not sure how you can say its the wrong approach when you're not advocating a better solution, you're advocating the problem that needed a solution to begin with: lack of developer support for S3D. This is a simple chicken/egg or horse/carriage situation, we can claim hypotheticals all we like about which approach is better but the fact remains, Nvidia's solution is compatible with nearly every "3D" game ever made with some problems while the native dev implemented S3D options you're advocating are virtually non-existent. We have what? Avatar and CryEngine 3 compared to virtually every DirectX compatible 3D game that will at least run in stereo with 3D Vision, albeit not all perfectly or free of problems. I'll take Nvidia's solution over the promise of one anyday.
Also, I'm not sure what you mean by the driver not knowing which camera is "the camera", they have the original 2D rendered camera as a reference, after that its simply mimicking the position of our eyes relative to other which are always going to be side-by-side, which is where the separation/convergence offsets come into play.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']Old games didn't do these things because they couldn't afford to, fill rate was too expensive.
None of these developers were targetting S3D, so you get what you get.
Hell I've done the opposite in renderers used depth values for nothing but occlusion, i.e all of the geometry was supposed to be coplanar, but I wrote different fixed depths to sort them correctly.
Z values aren't depth in a renderer they are just values, in a buffer that happens to have comparison operations.
If you're targetting S3D you have to be more careful with them.[/quote]
Yep and new games don't have to either because of hardware and API improvements, so problems with games of that era are understandable as I've already stated. But at the same time, we've seen games using those same engines and rendering techniques implementing additional workarounds to avoid this on current hardware for a good 3D Vision experience, or at the very least, allowing the end-user to disable the problematic effects.
I'll give another similar example that would go back to your first point about Nvidia going about the problem the wrong way. What's your stance on MSAA, or even more to the point, TrAA? The game engines that use deferred rendering/lighting don't support them natively for the same reason, the hardware and API at the time didn't allow for depth values to be stored and read with corresponding color samples. That changed with DX10-level hardware and in games that did not natively implement AA, Nvidia wrote driver code that allowed the end-user to force AA at significant hit to performance and in some cases, still had problems. Now that code has been worked into many of these engines as DX9 extensions that DO allow MSAA and TrAA to be implemented in-game. Should Nvidia have waited for Devs to get around to actually implementing AA in their games instead of writing their own implementation via driver? Once again, chicken and egg, I would take the imperfect solution today over the hope of one sometime in the future.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']I'd say it's more a lack of priority for S3D support.
I've shipped more than a couple of games, and decisions are rarely made because of laziness, at the end of a day you have a set of features, given time constaints they won't all make it, you prioritize and make hard decisions. In practice I've never even seen S3D considerations on a feature list, it's just not visible enough, that was my point about your only real shot for good support is finding a developer that uses it.
If the publisher had a marketing deal with NVidia for S3Dsupport that was worth real $ I guarantee it's make the cut list, but cost wise it's probably not viable for NVidia at this point.[/quote]
And again, S3D support isn't even a requirement, simply rendering a 3D game in actual 3D with accurate relative depth values is the only requirement for Nvidia's solution to work. We have numerous examples of games that do work almost perfectly that never had 3D support in mind simply because someone during the development phase decided to render everything in 3D instead of bits and pieces in 2D along with 3D. I understand some of this comes down to design decisions like icons/overlays and other art assets etc, but something as simple as a 3D crosshair with the proper depth value is virtually effortless and really unexcusable. We've seen numerous games patch this in very easily after the fact, so its not all about cost cutting and design, its simply a lack of attention to detail or an oversight in most cases.
Nvidia certainly is doing their part, but they really don't have to do anything other than encourage developers to render their 3D games in 3D consistently. Sure they'll put in the extra work for their "3D Vision Ready" titles, but really they just have to get the word out to either render in 2D or render at correct object-depth, but most do the worst thing possible and render in 3D at screen-depth. Or if they can't, then allow the end-user to turn off those effects or disable problematic overlays or even crosshair.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']No as I said in my first response, NVidia has no choice, if they want S3D they're going to have to do the work.
That doesn't make it the right long term approach, even if everything were rendered with correct Z by every game it would still not be the right way of doing it. NVidia would still be minimally redundantly rendering offscreen surfaces in stereo and potentially creating artifacts, which they would have to fix with special case driver code. They have no way to tell if a camera in the scene is the camera, or merely a point of view in the scene for some effect.
Right, that's the whole point, they came up with a solution in the absence of one, so I'm not sure how you can say its the wrong approach when you're not advocating a better solution, you're advocating the problem that needed a solution to begin with: lack of developer support for S3D. This is a simple chicken/egg or horse/carriage situation, we can claim hypotheticals all we like about which approach is better but the fact remains, Nvidia's solution is compatible with nearly every "3D" game ever made with some problems while the native dev implemented S3D options you're advocating are virtually non-existent. We have what? Avatar and CryEngine 3 compared to virtually every DirectX compatible 3D game that will at least run in stereo with 3D Vision, albeit not all perfectly or free of problems. I'll take Nvidia's solution over the promise of one anyday.
Also, I'm not sure what you mean by the driver not knowing which camera is "the camera", they have the original 2D rendered camera as a reference, after that its simply mimicking the position of our eyes relative to other which are always going to be side-by-side, which is where the separation/convergence offsets come into play.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']Old games didn't do these things because they couldn't afford to, fill rate was too expensive.
None of these developers were targetting S3D, so you get what you get.
Hell I've done the opposite in renderers used depth values for nothing but occlusion, i.e all of the geometry was supposed to be coplanar, but I wrote different fixed depths to sort them correctly.
Z values aren't depth in a renderer they are just values, in a buffer that happens to have comparison operations.
If you're targetting S3D you have to be more careful with them.
Yep and new games don't have to either because of hardware and API improvements, so problems with games of that era are understandable as I've already stated. But at the same time, we've seen games using those same engines and rendering techniques implementing additional workarounds to avoid this on current hardware for a good 3D Vision experience, or at the very least, allowing the end-user to disable the problematic effects.
I'll give another similar example that would go back to your first point about Nvidia going about the problem the wrong way. What's your stance on MSAA, or even more to the point, TrAA? The game engines that use deferred rendering/lighting don't support them natively for the same reason, the hardware and API at the time didn't allow for depth values to be stored and read with corresponding color samples. That changed with DX10-level hardware and in games that did not natively implement AA, Nvidia wrote driver code that allowed the end-user to force AA at significant hit to performance and in some cases, still had problems. Now that code has been worked into many of these engines as DX9 extensions that DO allow MSAA and TrAA to be implemented in-game. Should Nvidia have waited for Devs to get around to actually implementing AA in their games instead of writing their own implementation via driver? Once again, chicken and egg, I would take the imperfect solution today over the hope of one sometime in the future.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']I'd say it's more a lack of priority for S3D support.
I've shipped more than a couple of games, and decisions are rarely made because of laziness. At the end of the day you have a set of features, and given time constraints they won't all make it; you prioritize and make hard decisions. In practice I've never even seen S3D considerations on a feature list, it's just not visible enough; that was my point about your best shot at good support being a developer who actually uses it.
If the publisher had a marketing deal with NVidia for S3D support that was worth real $, I guarantee it'd make the cut list, but cost-wise it's probably not viable for NVidia at this point.[/quote]
And again, S3D support isn't even a requirement; simply rendering a 3D game in actual 3D with accurate relative depth values is all Nvidia's solution needs to work. We have numerous examples of games that work almost perfectly without ever having 3D support in mind, simply because someone during development decided to render everything in 3D instead of mixing in bits and pieces of 2D. I understand some of this comes down to design decisions around icons/overlays and other art assets, but something as simple as a 3D crosshair with a proper depth value is virtually effortless and really inexcusable to omit. We've seen numerous games patch this in very easily after the fact, so it's not all about cost cutting and design; in most cases it's simply a lack of attention to detail or an oversight.
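And a 3D crosshair really is that cheap: place the reticle in the world along the view ray instead of compositing it at screen depth. A minimal sketch, with hypothetical names and distances; in the dynamic version a raycast would supply hitDistance, while the static version just uses the far fallback:

#include <cstdio>

// Place the crosshair along the camera's forward ray so it carries a real
// depth value and gets correct parallax from the stereo driver.
struct Vec3 { float x, y, z; };

Vec3 crosshairWorldPos(Vec3 camPos, Vec3 camForward, float hitDistance)
{
    // Never closer than a comfortable minimum, never further than a static
    // "far enough" fallback that already avoids the cross-eyed effect.
    const float minDist = 2.0f, maxDist = 100.0f;
    float d = hitDistance;
    if (d < minDist) d = minDist;
    if (d > maxDist) d = maxDist;
    return { camPos.x + camForward.x * d,
             camPos.y + camForward.y * d,
             camPos.z + camForward.z * d };
}

int main()
{
    Vec3 pos = crosshairWorldPos({0, 1.7f, 0}, {0, 0, 1}, 35.0f);
    std::printf("crosshair at (%.1f, %.1f, %.1f)\n", pos.x, pos.y, pos.z);
}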
Nvidia is certainly doing their part, but really they don't have to do anything more than encourage developers to render their 3D games in 3D consistently. Sure, they'll put in extra work for their "3D Vision Ready" titles, but mostly they just have to get the word out: render effects either in 2D or at the correct object depth, because most games do the worst thing possible and render them in 3D at screen depth. And if that can't be fixed, at least let the end-user turn off those effects or disable problematic overlays, or even the crosshair.
[quote name='chiz' post='1109675' date='Aug 27 2010, 06:08 PM']Right, that's the whole point: they came up with a solution in the absence of one... I'll take Nvidia's solution over the promise of one any day.[/quote]
I too was a bit disappointed with the SC2 3D rendering. Definitely not a "good" 3D rating, even with the newest drivers.
[quote name='Callsign_Vega' post='1109688' date='Aug 27 2010, 07:51 PM']I too was a bit disappointed with the SC2 3D rendering. Definitely not a "good" 3D rating, even with the newest drivers.[/quote]
Sounds like you may be doing it wrong; did you check out the screenshots I provided above? SC2 is at least as good in the top-down isometric view as similar RTS titles with 2D cursors/icons like Dawn of War 2 or Supreme Commander, and both of those are rated "Excellent" even though they have far more distracting issues with 2D icons/cursors than SC2.
That doesn't make it the right long term approach, even if everything were rendered with correct Z by every game it would still not be the right way of doing it. NVidia would still be minimally redundantly rendering offscreen surfaces in stereo and potentially creating artifacts, which they would have to fix with special case driver code. They have no way to tell if a camera in the scene is the camera, or merely a point of view in the scene for some effect.[/quote]
Right, that's the whole point, they came up with a solution in the absence of one, so I'm not sure how you can say its the wrong approach when you're not advocating a better solution, you're advocating the problem that needed a solution to begin with: lack of developer support for S3D. This is a simple chicken/egg or horse/carriage situation, we can claim hypotheticals all we like about which approach is better but the fact remains, Nvidia's solution is compatible with nearly every "3D" game ever made with some problems while the native dev implemented S3D options you're advocating are virtually non-existent. We have what? Avatar and CryEngine 3 compared to virtually every DirectX compatible 3D game that will at least run in stereo with 3D Vision, albeit not all perfectly or free of problems. I'll take Nvidia's solution over the promise of one anyday.
Also, I'm not sure what you mean by the driver not knowing which camera is "the camera", they have the original 2D rendered camera as a reference, after that its simply mimicking the position of our eyes relative to other which are always going to be side-by-side, which is where the separation/convergence offsets come into play.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']Old games didn't do these things because they couldn't afford to, fill rate was too expensive.
None of these developers were targetting S3D, so you get what you get.
Hell I've done the opposite in renderers used depth values for nothing but occlusion, i.e all of the geometry was supposed to be coplanar, but I wrote different fixed depths to sort them correctly.
Z values aren't depth in a renderer they are just values, in a buffer that happens to have comparison operations.
If you're targetting S3D you have to be more careful with them.[/quote]
Yep and new games don't have to either because of hardware and API improvements, so problems with games of that era are understandable as I've already stated. But at the same time, we've seen games using those same engines and rendering techniques implementing additional workarounds to avoid this on current hardware for a good 3D Vision experience, or at the very least, allowing the end-user to disable the problematic effects.
I'll give another similar example that would go back to your first point about Nvidia going about the problem the wrong way. What's your stance on MSAA, or even more to the point, TrAA? The game engines that use deferred rendering/lighting don't support them natively for the same reason, the hardware and API at the time didn't allow for depth values to be stored and read with corresponding color samples. That changed with DX10-level hardware and in games that did not natively implement AA, Nvidia wrote driver code that allowed the end-user to force AA at significant hit to performance and in some cases, still had problems. Now that code has been worked into many of these engines as DX9 extensions that DO allow MSAA and TrAA to be implemented in-game. Should Nvidia have waited for Devs to get around to actually implementing AA in their games instead of writing their own implementation via driver? Once again, chicken and egg, I would take the imperfect solution today over the hope of one sometime in the future.
[quote name='ERP' post='1109651' date='Aug 27 2010, 06:13 PM']I'd say it's more a lack of priority for S3D support.
I've shipped more than a couple of games, and decisions are rarely made because of laziness. At the end of the day you have a set of features; given time constraints they won't all make it, so you prioritize and make hard decisions. In practice I've never even seen S3D considerations on a feature list, it's just not visible enough; that was my point about your only real shot for good support being finding a developer that uses it.
If the publisher had a marketing deal with NVidia for S3D support that was worth real $, I guarantee it'd make the cut list, but cost-wise it's probably not viable for NVidia at this point.[/quote]
And again, S3D support isn't even a requirement; simply rendering a 3D game in actual 3D with accurate relative depth values is all Nvidia's solution needs to work. We have numerous examples of games that work almost perfectly and never had 3D support in mind, simply because someone during development decided to render everything in 3D instead of bits and pieces in 2D alongside the 3D. I understand some of this comes down to design decisions like icons/overlays and other art assets, but something as simple as a 3D crosshair with the proper depth value is virtually effortless and really inexcusable to omit. We've seen numerous games patch this in very easily after the fact, so it's not all about cost cutting and design; in most cases it's simply a lack of attention to detail or an oversight.
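Just to show how little is involved, here's a rough sketch of the "crosshair at object depth" idea: cast a ray from the camera through the screen centre and draw the reticle at whatever distance it hits, or simply far away if it hits nothing. The scene_raycast callable and the fallback distance are placeholders I'm assuming for illustration:
[code]
import numpy as np

def crosshair_world_pos(camera_pos, view_dir, scene_raycast, max_dist=1000.0):
    # scene_raycast(origin, direction) is assumed to return the hit distance
    # along the ray, or None if nothing was hit.
    hit = scene_raycast(camera_pos, view_dir)
    dist = hit if hit is not None else max_dist   # nothing hit: push it far away
    # Render the reticle at this world position so both eyes converge on it
    # at the same depth as the object under it, instead of at screen depth.
    return np.asarray(camera_pos) + np.asarray(view_dir) * dist
[/code]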
Nvidia certainly is doing their part, but they really don't have to do anything other than encourage developers to render their 3D games in 3D consistently. Sure, they'll put in the extra work for their "3D Vision Ready" titles, but mostly they just have to get the word out to render effects and overlays either purely in 2D or at the correct object depth; most do the worst thing possible and render them in 3D at screen depth. Or, if they can't, at least allow the end user to turn those effects off and disable problematic overlays, or even the crosshair.
-=HeliX=- Mod 3DV Game Fixes
My 3D Vision Games List Ratings
Intel Core i7 5930K @4.5GHz | Gigabyte X99 Gaming 5 | Win10 x64 Pro | Corsair H105
Nvidia GeForce Titan X SLI Hybrid | ROG Swift PG278Q 144Hz + 3D Vision/G-Sync | 32GB Adata DDR4 2666
Intel Samsung 950Pro SSD | Samsung EVO 4x1 RAID 0 |
Yamaha VX-677 A/V Receiver | Polk Audio RM6880 7.1 | LG Blu-Ray
Auzen X-Fi HT HD | Logitech G710/G502/G27 | Corsair Air 540 | EVGA P2-1200W
I too was a bit disappointed with the SC2 3D rendering. Definitely not a "good" 3D rating, even with the newest drivers.
Sounds like you may be doing it wrong; did you check out the screenshots I provided above? SC2 is at least as good in the top-down isometric view as similar RTS titles with 2D cursors/icons like Dawn of War 2 or Supreme Commander, and both of those are rated "Excellent" even though they have far more disturbing issues with 2D icons/cursors than SC2.
-=HeliX=- Mod 3DV Game Fixes
My 3D Vision Games List Ratings
Intel Core i7 5930K @4.5GHz | Gigabyte X99 Gaming 5 | Win10 x64 Pro | Corsair H105
Nvidia GeForce Titan X SLI Hybrid | ROG Swift PG278Q 144Hz + 3D Vision/G-Sync | 32GB Adata DDR4 2666
Intel Samsung 950Pro SSD | Samsung EVO 4x1 RAID 0 |
Yamaha VX-677 A/V Receiver | Polk Audio RM6880 7.1 | LG Blu-Ray
Auzen X-Fi HT HD | Logitech G710/G502/G27 | Corsair Air 540 | EVGA P2-1200W