New VR performance increase rendering technique - does this sound like depth buffer / fake 3D to you...
Apparently Oculus' new rendering technique gains some 20% increase in performance.

When I read the article, I read "depth buffer" and see an image of a sphere with a halo <-- depth buffer / fake 3D?

https://www.roadtovr.com/oculus-new-stereo-shading-reprojection-brings-big-performance-gains-certain-vr-scenes/

I hope not! I am one of the scummy 3D snobs who don't get on with fake 3D :/

What do you think?

Lord, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
-------------------
Vitals: Windows 7 64bit, i5 2500 @ 4.4ghz, SLI GTX670, 8GB, Viewsonic VX2268WM

Handy Driver Discussion
Helix Mod - community fixes
Bo3b's Shaderhacker School - How to fix 3D in games
3dsolutionsgaming.com - videos, reviews and 3D fixes

#1
Posted 08/07/2017 08:28 PM   
Sure sounds like fake 3D to me... left eye = left eye, right eye = left eye + depth...
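
A minimal sketch of what "right eye = left eye + depth" reprojection boils down to, assuming a plain per-pixel depth buffer (Python/numpy; the function name and numbers are my own illustration, not anyone's actual driver code):

[code]
import numpy as np

def reproject_left_to_right(left_rgb, left_depth, ipd_px=30.0):
    """Synthesize a right-eye image by shifting left-eye pixels horizontally
    by a depth-derived disparity. Purely illustrative; a real renderer would
    reproject in clip space with the actual eye matrices."""
    h, w, _ = left_rgb.shape
    right = np.zeros_like(left_rgb)
    filled = np.zeros((h, w), dtype=bool)
    # Disparity is inversely proportional to depth: near pixels shift more.
    disparity = (ipd_px / np.maximum(left_depth, 1e-3)).astype(int)
    xs = np.arange(w)
    for y in range(h):
        new_x = xs - disparity[y]                 # shift towards the right eye
        ok = (new_x >= 0) & (new_x < w)
        right[y, new_x[ok]] = left_rgb[y, xs[ok]]
        filled[y, new_x[ok]] = True
    # Unfilled pixels are the disocclusions the left eye never saw; classic
    # fake 3D guesses them from neighbours, which is where the halos come from.
    return right, filled
[/code]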

Seems pretty sad to only get a 20% performance gain over real 3D... what the hell ever happened to Single Pass Stereo?

I was really hoping to see it find its way into 3D Vision, you know, where we really could use it... :/

https://developer.nvidia.com/vrworks/graphics/singlepassstereo
#2
Posted 08/08/2017 11:34 AM   
Ha ha ha... I'm still waiting for DX12 to take over the world with its doglike processing power and
doubled frame rates... muahaha...

CoreX9, custom watercooling (Volkswagen Polo radiator)
I7-8700k@stock
TitanX pascal with shitty stock cooler
Win7/10
Video: Passive 3D full HD 3D@60Hz/channel, Denon X1200W / HC5 x 2, Geobox 501 -> eeColor boxes -> polarizers/Omega filters, custom-made silver screen
Occupation: Entrepreneur. Painting/surfacing/construction
Interests/skills:
3D gaming, 3D movies, 3D printing, drums, bass and guitar.
Suomi - FINLAND - perkele

#3
Posted 08/08/2017 12:24 PM   
It's more advanced. You won't get the same artifacts you get with Nvidia fake 3D; that's why it's only a 20 percent performance increase. That's what they say, at least. I'm confident that if this shows any obvious artifacts, it won't be implemented in many games. Could be more useful for mobile VR.

https://developer.oculus.com/blog/introducing-stereo-shading-reprojection-for-unity/

Few interesting points.

article said:

Occluded Area Detection

Due to something called “binocular parallax,” our left and right eyes see objects from slightly different positions, which results in some pixels the right eye can see that are simply not available in the left eye’s framebuffer. These pixels are occluded from the left eye’s point of view, which is what causes the edge artifacts. To fix these artifacts, the problematic areas can be re-drawn with correct pixels from the normal right eye camera rendering.

To do this, a reprojected pixel needs to be identified as either a valid-reprojection or a false-reprojection. By masking out the valid-reprojection areas, we know which part the reprojection works well and which part needs to be re-rendered.

Pixel Culling Mask

Now that we can identify which parts of the image are valid-reprojections, we can mask out these areas to avoid re-rendering them. Most materials work well with reprojection. Mirror or very shiny materials, however, can look wrong since their appearance is very view-dependent. To solve this, our solution gives content creators the ability to disable reprojection on a per-material basis.....
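
If I read that right, the validity test amounts to comparing the depth carried over from the left eye against the right eye's own depth buffer. A rough sketch under that assumption (Python; the function name and threshold are my own, not Oculus' actual API):

[code]
import numpy as np

def reprojection_validity_mask(reprojected_depth, right_depth, tol=0.01):
    """Classify each right-eye pixel as a valid reprojection (left-eye colour
    can be reused) or a false reprojection (occluded from the left eye and in
    need of re-rendering). The tolerance is made up for illustration."""
    # Valid where the depth reprojected from the left eye agrees with the
    # right eye's own depth buffer within a relative tolerance.
    valid = np.abs(reprojected_depth - right_depth) < tol * right_depth
    # Everything else goes into the pixel culling mask and gets re-drawn by
    # the normal right-eye pass, per the article's description.
    needs_rerender = ~valid
    return valid, needs_rerender
[/code]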

#4
Posted 08/08/2017 01:23 PM   
This article made me wonder if Nvidia or Oculus has thought about taking advantage of the fact that stereo vision only works up to a certain distance, I think 240 meters. Beyond that you don't have to render those objects twice, just somehow get their position right. 240 meters is with very high "resolution" human vision, able to see minute details. With the Rift or even with a 3D Vision setup, that distance is probably even lower. For games like Metro with its close-quarters environments, it might not help much, but for games like Skyrim or jet combat games I guess it would help a lot.
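
A quick back-of-the-envelope for that cutoff distance (my numbers, purely illustrative): the disparity of a point at distance d is roughly IPD / d, so the distance where stereo stops helping is the IPD divided by the smallest disparity angle the eye (or the display) can resolve.

[code]
import math

IPD = 0.065  # assumed average interpupillary distance in metres

def stereo_cutoff_distance(min_disparity_arcsec):
    """Distance beyond which binocular disparity drops below the given
    angular threshold, using the small-angle approximation IPD / distance."""
    theta = math.radians(min_disparity_arcsec / 3600.0)
    return IPD / theta

# ~55 arcsec of stereo acuity gives roughly the 240 m figure above; a display
# that can only resolve ~2 arcmin of disparity per pixel cuts off much sooner.
print(stereo_cutoff_distance(55))    # ≈ 244 m
print(stereo_cutoff_distance(120))   # ≈ 112 m
[/code]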

46" Samsung ES7500 3DTV (checkerboard, high FOV as desktop monitor, highly recommend!) - Metro 2033 3D PNG screens - Metro LL filter realism mod - Flugan's Deus Ex:HR Depth changers - Nvidia tech support online form - Nvidia support: 1-800-797-6530

#5
Posted 08/08/2017 05:00 PM   
It works similarly to fake 3D approaches, except that instead of duplicating pixels for occluded portions using nearby pixels (like I did in my BOTW fix), it masks them and renders the masked portion using the correct eye.
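
For contrast, the "duplicate pixels from nearby pixels" approach amounts to a hole fill along the scanline, something like this (a sketch in the same spirit, not the actual BOTW fix code):

[code]
import numpy as np

def fill_holes_from_neighbours(right_rgb, filled):
    """Fill disoccluded pixels by smearing the nearest valid pixel to their
    left across the hole. Oculus' technique instead masks these pixels and
    re-renders them with the real right-eye camera."""
    out = right_rgb.copy()
    h, w, _ = out.shape
    for y in range(h):
        last = None
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            elif last is not None:
                out[y, x] = last   # stretch the neighbouring colour into the hole
    return out
[/code]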

IMO this approach isn't very promising, mainly because it's completely unusable if single pass stereo is used. Single pass stereo has bigger performance gains, especially in geometry heavy scenes.

This also only affects the very edges of the rendered region, when there is movement that causes some of the pixels to be reprojected to the other eye. Multi-Res Shading is a better solution to cut down on computation costs for edge cases.

I also doubt that this approach is usable with stuff like lens matched shading that is supported in hardware on newer gpus.

Like my work? You can send a donation via Paypal to sgs.rules@gmail.com

Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z

#6
Posted 08/08/2017 10:52 PM   
Of course it still is true stereo 3D rendering. From Oculus:

Typical virtual reality apps render the scene twice, once from the left eye’s view, and once from the right eye’s view. The two rendered images usually look very similar. Intuitively, one would think that maybe we can share some pixel rendering work between both eyes, so we implemented a tech called Stereo Shading Reprojection to make pixel sharing possible. Below we’ll provide an overview of this solution, different scenarios for optimization, and integration best practices.

This is about calculating some pixels once instead of twice because of stereo 3D, and so gaining performance under certain circumstances.
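
A rough back-of-the-envelope for why the headline gain lands around 20% (my fractions, purely illustrative, not Oculus' numbers): only the right eye's pixel shading can be skipped, and only for the pixels that reproject cleanly.

[code]
# Illustrative arithmetic only; all three fractions are assumptions.
pixel_shading_share = 0.5   # fraction of frame time spent on pixel shading
right_eye_share = 0.5       # the right eye is roughly half of that work
reusable_fraction = 0.8     # right-eye pixels that reproject cleanly
saving = pixel_shading_share * right_eye_share * reusable_fraction
print(f"~{saving:.0%} of frame time saved")   # ~20%
[/code]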

#7
Posted 08/09/2017 12:28 PM   
[quote="Libertine"]This article made me wonder if Nvidia or Oculus has thought about taking advantage of the fact that stereo vision only works up to a certain distance, I think 240 meters. Beyond that you don't have to render those objects twice, just somehow get their position right. 240 meters is with very high "resolution" human vision, able to see minute details. With the Rift or even with a 3D Vision setup, that distance is probably even lower. For games like Metro with its close quarters environments, it might not help much, but for games like Skyrim or jet combat games I guess it would a lot.[/quote] The problem with that graph is that it doesn't consider all the biomechanics involved. Sure the graph shows minute gains past the 100m mark but most people have been to events or locations where you can see much greater than 200m. Even when stationary, I for one can certainly can tell the difference between distance of static objects, especially larger objects. Totally agree about resolution, the objects tend to be billboards so rather large so I guess that actually follows the curve then! Larger the object the greater the range your vision can rely on stereoscopy. I hope Oculus don't blanket depth buffer reprojection everything beyond the 100 or so metre mark. I know some people disagreed with me here but I think the reason Crysis 2 and 3 3D was so poor was because past a certain distance it did seem to render as 2D at depth. It made my eyes feel funny anyway!
Libertine said: This article made me wonder if Nvidia or Oculus has thought about taking advantage of the fact that stereo vision only works up to a certain distance, I think 240 meters. Beyond that you don't have to render those objects twice, just somehow get their position right. 240 meters is with very high "resolution" human vision, able to see minute details. With the Rift or even with a 3D Vision setup, that distance is probably even lower. For games like Metro with its close quarters environments, it might not help much, but for games like Skyrim or jet combat games I guess it would a lot.


The problem with that graph is that it doesn't consider all the biomechanics involved. Sure, the graph shows minute gains past the 100m mark, but most people have been to events or locations where you can see much further than 200m. Even when stationary, I for one can certainly tell the difference in distance between static objects, especially larger ones.

Totally agree about resolution; the objects tend to be billboards, so rather large, so I guess that actually follows the curve then! The larger the object, the greater the range at which your vision can rely on stereoscopy.

I hope Oculus doesn't blanket depth-buffer reproject everything beyond the 100 or so metre mark. I know some people disagreed with me here, but I think the reason Crysis 2 and 3's 3D was so poor was because past a certain distance it did seem to render as 2D at depth. It made my eyes feel funny anyway!

Lord, grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.
-------------------
Vitals: Windows 7 64bit, i5 2500 @ 4.4ghz, SLI GTX670, 8GB, Viewsonic VX2268WM

Handy Driver Discussion
Helix Mod - community fixes
Bo3b's Shaderhacker School - How to fix 3D in games
3dsolutionsgaming.com - videos, reviews and 3D fixes

#8
Posted 08/09/2017 03:27 PM   