Where are the GTX 1080 3D Vision reviews? What about CPU bottlenecking for high FPS?
So why can't this SMP be reverse engineered to work with 3D Vision? I mean, VR and stereoscopic rendering are basically the same thing, right - a left and a right output with enough separation to create the illusion of depth along the z axis - so I fail to understand why VR and 3D Vision aren't fully compatible.

#16
Posted 06/12/2016 02:40 AM   
Single Pass Stereo could in fact be adapted to work with 3D Vision; it is roughly the same as VR. But, and it's a big but, it's going to require developers to do some work. I don't think there is any way for the driver to do an automatic injection like we get with 3D Vision Automatic. And given that it requires developer intervention, I have to assume it's dead on arrival. Developers have never shown any interest in even SLI, let alone something more unusual like this.

I can't find the document I read earlier, but the basic premise was to do the vertex shader, geometry shader, and tessellation only once, and then do multiple pixel shader passes. They were banking on some characteristics of the geometry shaders, and on the lack of use of geometry shaders by developers.
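
As a rough sketch of what that would look like on the shader side (illustrative only, not NVidia's published API - the NV_X_RIGHT semantic and the constant buffer layout here are assumptions based on the VRWorks material), the vertex shader runs once but emits positions for both eyes, and the hardware splits them into the two views:

// Hypothetical single pass stereo vertex shader (HLSL sketch).
// The geometry stages run once; only the per-eye clip-space X differs,
// so the right eye needs just one extra output value.
// NV_X_RIGHT is the semantic name used in NVidia's VRWorks material;
// treat it and the cbuffer layout as assumptions, not a working recipe.
struct VSOut
{
    float4 posLeft : SV_Position;  // left eye clip-space position
    float  xRight  : NV_X_RIGHT;   // right eye clip-space X (assumed semantic)
    float2 uv      : TEXCOORD0;
};

cbuffer EyeMatrices : register(b0) // hypothetical per-eye view-projection matrices
{
    float4x4 viewProjLeft;
    float4x4 viewProjRight;
};

VSOut main(float3 pos : POSITION, float2 uv : TEXCOORD0)
{
    VSOut o;
    o.posLeft = mul(float4(pos, 1.0), viewProjLeft);
    o.xRight  = mul(float4(pos, 1.0), viewProjRight).x;
    o.uv      = uv;
    return o;
}

Everything up to and including this stage is shared between eyes; only the pixel work downstream runs per eye.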

There was a technical discussion of how they implemented it, but I can't seem to dig that up now.


Overall, my take is that it's another NVidia tech that will see very limited adoption because of the console-specific nature of game development.

Even implementing this as part of something like the Oculus SDK or OpenVR would require rearchitecting their display model, and I really doubt that is going to happen.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#17
Posted 06/12/2016 10:24 AM   
OpenGL has a new extension, OVR_multiview, that is a work in progress. I'm unsure of its implications.

GL_OVR_multiview

status: incomplete

https://www.opengl.org/registry/specs/OVR/multiview.txt

Overview


The method of stereo rendering supported in OpenGL is currently achieved by
rendering to the two eye buffers sequentially. This typically incurs double
the application and driver overhead, despite the fact that the command
streams and render states are almost identical.

This extension seeks to address the inefficiency of sequential multiview
rendering by adding a means to render to multiple elements of a 2D texture
array simultaneously. In multiview rendering, draw calls are instanced into
each corresponding element of the texture array. The vertex program uses a
new ViewID variable to compute per-view values, typically the vertex
position and view-dependent variables like reflection.

The formulation of this extension is high level in order to allow
implementation freedom. On existing hardware, applications and drivers can
realize the benefits of a single scene traversal, even if all GPU work is
fully duplicated per-view. But future support could enable simultaneous
rendering via multi-GPU, tile-based architectures could sort geometry into
tiles for multiple views in a single pass, and the implementation could even
choose to interleave at the fragment level for better texture cache
utilization and more coherent fragment shader branching.

The most obvious use case in this model is to support two simultaneous
views: one view for each eye. However, we also anticipate a usage where two
views are rendered per eye, where one has a wide field of view and the other
has a narrow one. The nature of wide field of view planar projection is
that the sample density can become unacceptably low in the view direction.
By rendering two inset eye views per eye, we can get the required sample
density in the center of projection without wasting samples, memory, and
time by oversampling in the periphery.

Contributors

John Carmack, Oculus
Tom Forsyth, Oculus
Maurice Ribble, Qualcomm
James Dolan, NVIDIA Corporation
Mark Kilgard, NVIDIA Corporation
Michael Songy, NVIDIA Corporation
Yury Uralsky, NVIDIA Corporation
Jesse Hall, Google
Timothy Lottes, Epic
Jan-Harald Fredriksen, ARM
Jonas Gustavsson, Sony Mobile
Sam Holmes, Qualcomm

#18
Posted 06/12/2016 10:55 AM   
Also in a previous PDF they mention that Maxwell supports this, if I understand it correctly.

But it seems that it's exclusive to Pascal now?

https://developer.nvidia.com/sites/default/files/akamai/gameworks/vr/GameWorks_VR_2015_Final_handouts.pdf

"The key thing that makes this technique a performance win is the multi-projection
hardware we have on NVIDIA’s Maxwell architecture – in other words, the GeForce GTX
900 series and Titan X.
Ordinarily, replicating all scene geometry to a number of viewports would be prohibitively
expensive. There are a variety of ways you can do it, such as resubmitting draw calls,
instancing, and geometry shader expansion – but all of those will add enough overhead to
eat up any gains you got from reducing the pixel count.
With Maxwell, we have the ability to very efficiently broadcast the geometry to many
viewports in hardware, while only running the GPU geometry pipeline once per eye."
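
For contrast, the "geometry shader expansion" route mentioned in that quote looks roughly like the HLSL sketch below - a geometry shader instanced once per eye that routes each copy of the triangle to its own viewport. The structure names and constant buffer layout are illustrative assumptions; this is the software approach the Maxwell/Pascal multi-projection hardware is meant to replace.

// Illustrative geometry shader expansion for two eye viewports (D3D11, SM5).
// [instance(2)] runs the shader once per eye; SV_ViewportArrayIndex sends
// each copy to the matching viewport. This duplicates geometry work on the
// GPU, which is the overhead the slide says eats the pixel-count savings.
struct GSIn
{
    float4 worldPos : POSITION;   // world-space position from the vertex shader
    float2 uv       : TEXCOORD0;
};

struct GSOut
{
    float4 pos      : SV_Position;
    float2 uv       : TEXCOORD0;
    uint   viewport : SV_ViewportArrayIndex; // 0 = left eye, 1 = right eye
};

cbuffer EyeMatrices : register(b0) // assumed layout: one view-projection per eye
{
    float4x4 viewProj[2];
};

[instance(2)]
[maxvertexcount(3)]
void main(triangle GSIn tri[3], uint eye : SV_GSInstanceID,
          inout TriangleStream<GSOut> stream)
{
    [unroll]
    for (int v = 0; v < 3; v++)
    {
        GSOut o;
        o.pos      = mul(tri[v].worldPos, viewProj[eye]);
        o.uv       = tri[v].uv;
        o.viewport = eye;
        stream.Append(o);
    }
    stream.RestartStrip();
}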

#19
Posted 06/12/2016 11:02 AM   
Thanks D-Man11, really interesting stuff and I'll keep my eye on it;)
It can't be Pascal exclusive, as it is an API and no specific hardware requirements exist. Of course, Nvidia could release this API only for a certain hardware generation (an artificial software lock), but we will see ;)

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

#20
Posted 06/12/2016 11:13 AM   
An interesting article from a few days ago about single pass stereo.


https://developer.nvidia.com/pascal-vr-tech

#21
Posted 06/29/2016 08:17 AM   
Is it just me, or does single pass stereo sound like it would potentially resolve the issue of effects being broken by deferred rendering? By having the vertex shader calculate the two different X coordinates and pass them to the pixel shader, rather than having the correction hacked in by the 3D Vision driver in between, hopefully the pixel shader would have the correct coordinates when performing those calculations. Then again, I could also interpret that as essentially doing the same thing as the 3D Vision driver, and therefore not resolving that issue...

I choose to remain hopeful, but considering the Nvidia we all know and (hate to) love, I'm certainly not going to hold my breath for that (not to mention we'll be lucky to see it implemented in 3D Vision, and not just VR. The best we can hope for would be a consolidation between the 3D Vision and VR drivers, provided that doesn't break things even further for us).
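
For reference, the correction that currently gets "hacked in" is what 3D Vision fixes apply by hand in vertex shaders today, along these lines (a 3Dmigoto-style sketch; the t125 slot and the .x/.y layout of StereoParams follow common fixes and should be treated as assumptions):

// Typical manually-applied stereo correction in a vertex shader (3Dmigoto-style sketch).
// StereoParams is the injected texture exposing the driver's separation (.x)
// and convergence (.y); the slot and layout shown are the usual fix conventions.
Texture1D<float4> StereoParams : register(t125);

float4 ApplyStereoCorrection(float4 clipPos)
{
    float4 stereo = StereoParams.Load(0);
    // Shift clip-space X by separation, scaled by depth relative to convergence.
    clipPos.x += stereo.x * (clipPos.w - stereo.y);
    return clipPos;
}

If the vertex shader produced both eyes' coordinates itself, as single pass stereo does, the hope is that deferred pixel shaders would start from consistent positions rather than a correction bolted on between stages.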

3D Gaming Rig: CPU: i7 7700K @ 4.9Ghz | Mobo: Asus Maximus Hero VIII | RAM: Corsair Dominator 16GB | GPU: 2 x GTX 1080 Ti SLI | 3xSSDs for OS and Apps, 2 x HDD's for 11GB storage | PSU: Seasonic X-1250 M2| Case: Corsair C70 | Cooling: Corsair H115i Hydro cooler | Displays: Asus PG278QR, BenQ XL2420TX & BenQ HT1075 | OS: Windows 10 Pro + Windows 7 dual boot

Like my fixes? Donations can be made to: www.paypal.me/DShanz or rshannonca@gmail.com
Like electronic music? Check out: www.soundcloud.com/dj-ryan-king

#22
Posted 06/30/2016 12:37 AM   
Sounds like Titan P may be emerging. For example, see http://vrworld.com/2016/07/05/nvidia-gp100-titan-faster-geforce-1080/.

#23
Posted 07/06/2016 02:14 PM   
[quote="DJ-RK"]Is it just me, or does the single pass stereo sound like it would potentially resolve the issue with effects being broken by deferred rendering? By having the vertex shader calculate the 2 different X coordinates and pass them to the pixel shader, rather than having it hacked in by the 3D Vision driver in between, hopefully that would result in the pixel shader having the correct coordinates when performing those calculations. Then again, I could also interpret that as essentially doing the same thing as the 3D Vision driver, and therefore not resolving that issue... I choose to remain hopeful, but considering the Nvidia we all know and (hate to) love, I'm certainly not going to hold my breath for that (not to mention we'll be lucky to see that implemented in 3D Vision, and not just VR. Best we can hope for would be a consolidation between 3DVision and VR drivers, provided that doesn't break things even further for us).[/quote] It looks like it can definitely help, but the problem is that it requires intervention by the game developer. It can't be easily hacked in like the 3D Vision Automatic does. It requires new API calls, and replacing things like CreateVertexShader with their single-pass version with the dual output. Game developers have a poor track record of picking up on NVidia technologies, not least because they cannot ignore all the old hardware out there. I can't exactly tell, but at present this is a Pascal only feature- which means no one can afford to adopt it for 3-5 years.
DJ-RK said:Is it just me, or does the single pass stereo sound like it would potentially resolve the issue with effects being broken by deferred rendering? By having the vertex shader calculate the 2 different X coordinates and pass them to the pixel shader, rather than having it hacked in by the 3D Vision driver in between, hopefully that would result in the pixel shader having the correct coordinates when performing those calculations. Then again, I could also interpret that as essentially doing the same thing as the 3D Vision driver, and therefore not resolving that issue... I choose to remain hopeful, but considering the Nvidia we all know and (hate to) love, I'm certainly not going to hold my breath for that (not to mention we'll be lucky to see that implemented in 3D Vision, and not just VR. Best we can hope for would be a consolidation between 3DVision and VR drivers, provided that doesn't break things even further for us).

It looks like it can definitely help, but the problem is that it requires intervention by the game developer. It can't be easily hacked in the way 3D Vision Automatic does it. It requires new API calls, and replacing things like CreateVertexShader with a single-pass version that has the dual output.

Game developers have a poor track record of picking up on NVidia technologies, not least because they cannot ignore all the old hardware out there. I can't tell exactly, but at present this is a Pascal-only feature - which means no one can afford to adopt it for 3-5 years.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#24
Posted 07/07/2016 02:35 AM   
[quote="masterotaku"][quote="whyme466"]With my 4K 3D display (LG passive EDID mod), Dark Souls 3 was the first game I have played that REQUIRED 980 Ti SLI to get acceptable frame rates (above 25 fps).[/quote] I wonder if and overclocked 1080 would be enough for Dark Souls 3 at 2560x1440 at 60fps per eye. Assuming you get 30fps as the average, that SLI scaling is perfect and that the 1080 is 25% more powerful than a 980Ti: 30*0.5*((3840*2160)/(2560*1440))*1.25 = 42.1875fps With OC it could be higher. Although a 980Ti gets 37fps average in 2D at 4k, and with SLI 60fps average (hitting the cap, so it should be a bit more) according to this: http://gamegpu.com/images/stories/Test_GPU/MMO/DARK_SOULS_III/test/ds3_3840.jpg And that must be at stock clocks. Well, I guess I can use a lower custom resolution with black bars. [/quote] Just for the record, I am currently playing Dark souls 2 at 2560x1440 in 3dvision and the rekindled reshader. With Titan X 2xSLI I am getting average 60fps but it can drop down to about 50. Not sure about how this compares to a single 1080 though. I can't seem to find any comparison. For me frame rate is king and I would rather have 60fps any day over any resolution increase as I feel it does much for for visual fidelity and responsiveness of game-play. But I'm one of those people who can't tolerate anything under 45 fps.
masterotaku said:
whyme466 said:With my 4K 3D display (LG passive EDID mod), Dark Souls 3 was the first game I have played that REQUIRED 980 Ti SLI to get acceptable frame rates (above 25 fps).


I wonder if an overclocked 1080 would be enough for Dark Souls 3 at 2560x1440 at 60fps per eye. Assuming you get 30fps as the average, that SLI scaling is perfect, and that the 1080 is 25% more powerful than a 980Ti:

30*0.5*((3840*2160)/(2560*1440))*1.25 = 42.1875fps

With OC it could be higher. Although a 980Ti gets 37fps average in 2D at 4k, and with SLI 60fps average (hitting the cap, so it should be a bit more) according to this: http://gamegpu.com/images/stories/Test_GPU/MMO/DARK_SOULS_III/test/ds3_3840.jpg

And that must be at stock clocks.

Well, I guess I can use a lower custom resolution with black bars.


Just for the record, I am currently playing Dark Souls 2 at 2560x1440 in 3D Vision with the Rekindled ReShade preset.

With Titan X 2-way SLI I am getting an average of 60fps, but it can drop down to about 50.
Not sure about how this compares to a single 1080 though. I can't seem to find any comparison.

For me frame rate is king, and I would rather have 60fps any day over any resolution increase, as I feel it does much more for visual fidelity and responsiveness of gameplay. But I'm one of those people who can't tolerate anything under 45fps.

i7-4790K CPU 4.8Ghz stable overclock.
16 GB RAM Corsair
EVGA 1080TI SLI
Samsung SSD 840Pro
ASUS Z97-WS
3D Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)

#25
Posted 07/07/2016 03:46 AM   
[quote="whyme466"]With my 4K 3D display (LG passive EDID mod), Dark Souls 3 was the first game I have played that REQUIRED 980 Ti SLI to get acceptable frame rates (above 25 fps). Note that the 4K display processing load is more than Surround setups, even though half the display pixels are thrown away during interlace formatting. The EDID mod forces 3840x2160 Desktop (60 Hz), and in-game resolution changes do not produce good 3D for any other resolution in some games like Dark Souls 3.[/quote]Received and modified Pascal Titan X (added Arctic Accelero Hybrid III-120 cooler). Briefly ran Dark Souls 3 to compare Titan X performance with 980 Ti SLI - the single Pascal Titan X outperformed the SLI pair, with frame rates averaging 30-35 fps (using same graphics settings). Note that card set (via MSI Afterburner) to use 120% power and slight core overclock (+100).
whyme466 said:With my 4K 3D display (LG passive EDID mod), Dark Souls 3 was the first game I have played that REQUIRED 980 Ti SLI to get acceptable frame rates (above 25 fps). Note that the 4K display processing load is more than Surround setups, even though half the display pixels are thrown away during interlace formatting. The EDID mod forces 3840x2160 Desktop (60 Hz), and in-game resolution changes do not produce good 3D for any other resolution in some games like Dark Souls 3.
Received and modified a Pascal Titan X (added an Arctic Accelero Hybrid III-120 cooler). Briefly ran Dark Souls 3 to compare Titan X performance with 980 Ti SLI - the single Pascal Titan X outperformed the SLI pair, with frame rates averaging 30-35 fps (using the same graphics settings). Note that the card was set (via MSI Afterburner) to 120% power with a slight core overclock (+100).

#26
Posted 08/07/2016 02:08 PM   