I don't really understand the differences between real and fake 3D. Can someone help me by posting some pictures, or listing games with fake 3D?
Thanks
PS: sorry for my bad English
i7 4970k@4.5GHz, SLI GTX1080Ti Aorus Gigabyte Xtreme, 16GB G Skill 2400MHz, 3*PG258Q in 3D surround.
"Real" 3d in a film sense is when the movie is filmed from two angles at the same time, simulating the fact that we have two eyes. In much the same way, "real" 3d in a game uses two virtual cameras to render the scene twice from two different angles, creating an accurate 3d experience with the potential for strong depth, convergence (pop-out) and a sense of volume to the world.
Fake 3d in movies is when films are shot in 2d, then the footage for the second "eye" is generated on a computer (it still involves a team of artists to do so). That's kind of what's happening here, by the looks of things. This new method appears to render the scene once, then use the depth buffer to guess how "deep" objects should be, and push them back accordingly. The end result is a fairly limited range of depth and convergence, as there isn't enough visual information to generate the second eye at enough of an offset. As a consequence, you get something that looks like "layers" of flat paper at different depths, instead of objects with real volume. You also introduce a number of visual anomalies, most significantly "halos" around objects close to the screen (like the player character or weapon model in first person).
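To make the "two virtual cameras" idea concrete, here's a minimal Python/NumPy sketch (all function names are made up for illustration, not from any real engine) of how a true stereo renderer derives two view matrices from one camera by offsetting along the camera's right vector:

```python
# Sketch of "real" stereo rendering: build two view matrices from one
# camera by offsetting it along its right vector. The scene must then
# be rendered once per matrix -- that's the cost of true 3D, and why
# each eye genuinely sees a little further around near objects.
import numpy as np

def look_at(eye, target, up):
    """Standard right-handed look-at view matrix."""
    f = target - eye
    f = f / np.linalg.norm(f)
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)
    u = np.cross(r, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def stereo_views(eye, target, up, separation):
    """Two cameras offset by +/- half the eye separation.
    Shifting the target too keeps both eyes parallel; real renderers
    add convergence on top (e.g. via an asymmetric frustum)."""
    f = target - eye
    f = f / np.linalg.norm(f)
    right = np.cross(f, up)
    right = right / np.linalg.norm(right)
    offset = right * (separation / 2.0)
    left_view = look_at(eye - offset, target - offset, up)
    right_view = look_at(eye + offset, target + offset, up)
    return left_view, right_view
```

The two matrices share an orientation but differ in translation by exactly the eye separation, which is what produces real parallax between the two rendered images.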
"Real" 3d in a film sense is when the movie is filmed from two angles at the same time, simulating the fact that we have two eyes. In much the same way, "real" 3d in a game uses two virtual cameras to render the scene twice from two different angles, creating an accurate 3d experience with the potential for strong depth, convergence (pop-out) and a sense of volume to the world.
Fake 3d in movies are when films are shot in 2d, then the footage for the second "eye" is generated on a computer (it still involves a team of artists to do so). That's kind of what's happening here, by the looks of things. This new method appears to render the scene once, then use the depth buffer to guess how "deep" objects should be, and push them back accordingly. The end result is you get a fairly limited range of depth and convergence, as there isn't enough visual information to generate the second eye at enough of an offset. As a consequence of this, you get something that looks like "layers" of flat paper at different depths, instead of objects with real volume. You also introduce a number of visual anomolies, most significantly "halos" around objects close to the screen (like the player character or weaponmodel in first person).
Good overview, Pirateguybrush. I would just tweak this part:
[quote="Pirateguybrush"]This new method appears to render the scene once, then use the depth buffer to guess how "deep" objects should be, and push them back accordingly.[/quote]
Thanks to the depth buffer, there's no guess work as to how deep things should be in relation to one another. The depth information is available. What's missing is how the scene should look from two unique camera angles. If the engine is only rendering a single viewpoint, it's impossible to generate a second accurately. It can, at best, fake a second view.
Hence, fake 3D. The depth should be fairly accurate though, as opposed to 3D movie conversions, which are pure guesswork. On the other hand, with movies they can take the time to hand-craft every fake 3D frame to make it as indistinguishable from true 3D as possible, provided they have the time and budget. PC games have the luxury of the depth buffer, but not the luxury of time to analyze a series of frames to carefully craft artifact-free 3D images.
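As a rough illustration of what a depth-buffer mode does, here's a toy Python sketch that reprojects one scanline of pixels into a second eye using per-pixel depth. The disparity formula, constants, and names are illustrative only, not NVIDIA's actual implementation:

```python
# Toy depth-buffer reprojection ("fake 3D") on a single 1-D scanline:
# shift each pixel horizontally by a disparity derived from its depth,
# and report where holes (disocclusions) appear.
import numpy as np

def reproject_scanline(colors, depth, separation=8.0, convergence=5.0):
    """colors: 1-D array of non-negative pixel values.
    depth:  eye-space depth per pixel.
    Pixels at the convergence distance get zero disparity (screen
    depth); nearer pixels shift one way, farther pixels the other."""
    n = len(colors)
    out = np.full(n, -1.0)  # -1 marks a hole (no source pixel landed here)
    disparity = separation * (1.0 - convergence / depth)
    for x in range(n):
        tx = int(round(x + disparity[x]))
        if 0 <= tx < n:
            out[tx] = colors[x]  # simplification: later pixels overwrite;
                                 # a real impl. would keep the nearest one
    holes = np.where(out < 0)[0]
    return out, holes
```

The holes it reports appear at depth edges, exactly where the second eye should see background that was never rendered. Real implementations have to fill them by stretching or blurring neighbouring pixels, which is where the "halos" described above come from.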
I would like to thank the two posters. I found this page http://realorfake3d.com/ ...but it is only for movies. Is there something similar for gaming?
Thanks again...
3d vision games are real 3d, with the exception of the Crysis series, and the games listed here:
https://forums.geforce.com/default/topic/679547/3d-vision/official-334-67-driver-thread-for-new-3d-vision-game-support-feedback/
There is some disagreement here but this might help.
Look at the awesome gifs made by tsaebeht. I believe they demonstrate very clearly the difference between true and fake 3D. Notice how differently the two games are rendered: Wargames AA (True) versus Crysis 2 (Fake)
Wargames:
http://img580.imageshack.us/img580/4959/yjxu.gif
Notice how the sync line is horizontal and moves up the scene in a line? Depth increases linearly from near to far. As you would expect.
Crysis 2:
http://img842.imageshack.us/img842/9836/h8ku.gif
Notice how the gun is rendered properly but anything beyond middle distance is rendered differently?
I made a 3D scene of Skyrim where I converted a 2D image + depth image to 3D by selecting sections at various depths and changing the vergence. I would share it, but SpeedyShare lost it. It looked rubbish, but I think the rendering is similar in principle to nVidia's new solution: rendering 2D objects at the appropriate depth.
edit
https://forums.geforce.com/default/topic/573166/3d-vision/i-see-mention-of-quot-fake-3d-quot-ocasionally-on-here-what-is-it-and-what-are-the-differences-/1/
Here is a similar thread where 'Fake' and 'Real' 3D was also discussed.
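One detail behind gifs like those: the hardware depth buffer is non-linear, so any depth-buffer 3D mode has to linearize it before deriving disparity. A hedged sketch using the common D3D-style [0,1] convention (the function names are my own, not from any driver):

```python
# Hardware depth buffers store a non-linear value: a depth-buffer 3D
# mode must convert it back to eye-space distance before computing a
# per-pixel shift. Standard perspective mapping, D3D-style z in [0,1].

def linearize_depth(z_buf, near, far):
    """Recover eye-space distance from a [0,1] perspective depth value."""
    return near * far / (far - z_buf * (far - near))

def depth_to_buffer(z_eye, near, far):
    """Forward mapping, for checking: eye-space depth -> buffer value."""
    return far * (z_eye - near) / (z_eye * (far - near))
```

Note that most of the buffer's precision sits near the camera, which may be one reason the far field in fake 3D tends to collapse into a few flat layers.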
Lord, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
-------------------
Vitals: Windows 7 64bit, i5 2500 @ 4.4ghz, SLI GTX670, 8GB, Viewsonic VX2268WM
Thank you for the explanations. But why would games use fake 3D, given that the graphics engine knows the exact coordinates of all the geometry? In that case, can anyone give us a list of games using this technique?
Thank You
Devs will often render things in 2D, e.g. shadows, crosshairs, and HUD elements. Because of this, workarounds are needed to fix those effects.
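The crosshair is the classic case: drawn once at screen depth, it floats in front of (or behind) whatever you're aiming at. A common style of fix, sketched here in Python with illustrative names and disparity formula, samples the depth buffer under the aim point and offsets the crosshair per eye to match:

```python
# Sketch of a crosshair depth fix: read eye-space depth at the screen
# centre, then shift the crosshair horizontally in opposite directions
# for each eye so it appears on the aimed-at surface.
import numpy as np

def crosshair_offsets(depth_buffer, separation, convergence):
    """depth_buffer: 2-D array of eye-space depths (already linearized).
    Returns the horizontal pixel offset for the (left, right) eye."""
    h, w = depth_buffer.shape
    d = depth_buffer[h // 2, w // 2]          # depth under the aim point
    disparity = separation * (1.0 - convergence / d)
    return -disparity / 2.0, +disparity / 2.0  # left eye, right eye
```

A surface at the convergence distance yields zero offset (crosshair at screen depth); a farther surface pushes the crosshair into the screen.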
Only CryEngine implements it in their games natively, as far as I know.
Most developers would either not bother, or opt to implement the depth buffer approach, tbh. Here are the games that I know of:
Crysis 2/3
"some" Cryengine based games - MechWarrior Online / Panzar
Renderers similar to 3D Vision that use it are TriDef and VorpX [Oculus Rift].
---------------------------------------
Proper 3D implementation can be time consuming/expensive.
Depth buffer is relatively simple.
Co-founder of helixmod.blog.com
If you like one of my helixmod patches and want to donate. Can send to me through paypal - eqzitara@yahoo.com
The thing to think about with real, geometric 3D is that with two actually different viewing angles, you can actually see more of the scene. Especially for things that are up close, one eye can see further around a given object than the other. Just like if you panned around an object to see another side, your two eyes do this by getting slightly different angles on the same object.
Now in fake-3D, that extra information does not exist, because the scene is generated from a single viewpoint. This is why some items look like cardboard cutouts: they don't have that subtle but important wrap-around to get more data. Typically fake-3D implementations fake this missing data by blurring the edges or making a transparent halo around things.
It's not always terrible for everyone. Some people do not notice or care about the missing data, some people find it extremely annoying and fake. It is also sensitive to screen size. If you use a projector, you don't need as much depth to make a convincing scene, and so fake-3D works better for projector users.
The primary reason for developers to go this route is for performance reasons. It's much cheaper to generate a single deep image, then fake up the 2nd view point, than it is to render the entire image twice. And for true 3D, you do have to render it fully twice, because of that subtle ability to see around the edges of things.
In NVidia's case, it's not exactly for performance reasons. They are doing this mode as a way to side-step some of the problems with shaders, like broken shadows, or broken fire effects, things that are typically only done in 2D in games.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607 Latest 3Dmigoto Release Bo3b's School for ShaderHackers
Here's some 'differences' for you. :) I took a couple of screenshots of Far Cry 3, one in real 3D Vision in all its Helix-fixed glory, and another in NVIDIA 2D Vision. First, the difference layer shifts (my depth translates to approximately 50% on a 27" monitor). These had to be resized (1014p source) as the 3D Vision one weighed in at about 25MB (19 frames) and the 2D Vision one was <12MB (9 frames), and I went with URLs so as not to slow down the page. The shifts were done from the ground in front of 'you' to 'infinity'.
3D Vision:
[url]https://dl.dropboxusercontent.com/s/n6f24ewis67tkdd/farcry3001_Real3D_DifSm.gif[/url]
2D Vision:
[url]https://dl.dropboxusercontent.com/s/iklhy930gem44ub/farcry3_d3d11001_Fake3D_DifSm.gif[/url]
These last two are 'wiggle 3D' images of the furthest parts of the image, the parts with the least amount of depth. These I found the most fascinating; there's still enough of a depth shift for the 3D Vision one to look fairly decent ... the 2D Vision one, on the other hand ... well, let's just say a picture is worth a thousand words ...
3D Vision:
[img]https://dl.dropboxusercontent.com/s/9bkdpetwmv3lg13/farcry3001_Real3D_Crop.gif[/img]
2D Vision:
[img]https://dl.dropboxusercontent.com/s/4ta4lbq1s12oeev/farcry3_d3d11001_Fake3D_Crop.gif[/img]
... or was that a thousand lols? :D
Source PNSs:
[url]https://www.dropbox.com/s/aw8rjaxjoxiym50/farcry3001_Real3D.pns[/url]
[url]https://www.dropbox.com/s/zrfs45q53npuziq/farcry3_d3d11001_Fake3D.pns[/url]
I aligned infinity, can you spot the differences? :)
[url]https://dl.dropboxusercontent.com/s/2x7uh14kyi4xlu9/farcry3001_Real3D.gif[/url]
[url]https://dl.dropboxusercontent.com/s/2v1ock4p05re2cm/farcry3_d3d11001_Fake3D.gif[/url]
BioShock Infinite (Ultra, Diffusion DoF, no FXAA)
3D Vision Automatic + helixmod vs 3D Vision Reprojection
[url=http://abload.de/image.php?img=bioshockinfinite3dvis0ash7.png][img]http://abload.de/thumb/bioshockinfinite3dvis0ash7.png[/img][/url] [url=http://abload.de/image.php?img=bioshockinfinite3dvisf3s5j.png][img]http://abload.de/thumb/bioshockinfinite3dvisf3s5j.png[/img][/url]
NVIDIA TITAN X (Pascal), Intel Core i7-6900K, Win 10 Pro,
ASUS ROG Rampage V Edition 10, G.Skill RipJaws V 4x 8GB DDR4-3200 CL14-14-14-34,
ASUS ROG Swift PG258Q, ASUS ROG Swift PG278Q, Acer Predator XB280HK, BenQ W710ST
[quote="Kingping1"]BioShockInfinite 3D Vision Automatic vs Reprojection[/quote]Whose 'reprojection'? TriDef's or NVIDIA's? I haven't had a chance to try BI with the new beta drivers, nor with TriDef's Power 3D ... and is that 'vs' just plain 3D Vision Automatic without Helix's fix?