Native 3D Vision support in Unity Games via Injector
@bo3d

I saw this in my bookmarks; I had forgotten about it. No idea if there's any truth to it or whether it's useful.

https://sites.google.com/site/snippetsanddriblits/DirectXWindowed


"What We've Found

As of driver version 270 (or thereabouts), nVidia started supporting the windowed mode for their 3D Vision drivers. There are two important caveats though, that we've found with this.

You must use the DirectX 9 feature level. You can either use the DirectX 9 SDK or the DirectX 11 SDK, but if you use the DirectX 11 SDK, you must ensure that you limit the feature level flag to 9.3. Otherwise, no workie.
Your application must be named a name that matches nVidia's list of approved 3D Vision applications. For full screen, it doesn't seem to matter whether you're on that list or not. But for windowed mode, for some reason, it does. We're guessing that it's part of nVidia's quality assurance, so that they can maintain a good image for their product (and who could really blame them - their driver is doing quite a bit in the background). Anyways, if your application is not part of the approved list, try naming it googleearth.exe or play.exe."

This might explain why, in WoW, I could play in 3D in windowed mode with DX11.
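For reference, limiting a DX11 app to the 9_3 feature level (the first caveat in the quoted page) only takes restricting the feature-level array passed at device creation. A minimal sketch, Windows-only boilerplate with error handling omitted; the function name is mine:

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Create a D3D11 device restricted to feature level 9_3, per the quoted
// advice: the driver then sees a DX9-class device even through the DX11 API.
HRESULT CreateFL93Device(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_9_3 };
    return D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr, 0,
        levels, ARRAYSIZE(levels),  // only 9_3 offered, so 9_3 is all you get
        D3D11_SDK_VERSION,
        device, nullptr, context);
}
```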

#31
Posted 05/07/2017 01:59 AM   
@D-Man11: Thanks for that info. I tried to get it working in DX11 windowed, but did not have any success, even after adding the GoogleEarth profile. I'll see if playing with the feature level will work.


My code is up on GitHub for anyone interested.

https://github.com/bo3b/3D-Vision-Direct

I'll be improving this a bit more; it's more complicated than it needs to be, and some comments are wrong. But it's fully functional: DX11 code (C++) using 3D Vision Direct.


@sgrules: Do you think we need a DX9 version? Now that I know how it all works, I'm sure I can make a DX9 variant if that would be necessary for older Unity games.

For Unreal games, I like that idea, but don't know enough about Unreal to know how feasible this will be. If they have a plug-in architecture where we can hook into the engine, we can very likely make it work there too.


Like a lot of this stuff, once you figure out the problems, the actual code is really small and not very hard. (Of course it takes like 10 days to figure out the problems though. :-)

So adding this Direct Mode should be relatively easy, and can very likely be done directly from C#, by using about 8 nvapi helper routines.
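The handful of nvapi routines involved can be sketched roughly like this. This is only an outline of the Direct Mode call sequence (the entry points are from nvapi.h, but the exact order and the D3D11 setup are simplified; see the GitHub repo for the working version, and note SetDriverMode must run before the device exists):

```cpp
#include "nvapi.h"   // NVIDIA's NVAPI SDK header
#pragma comment(lib, "nvapi64.lib")

// Rough 3D Vision Direct Mode setup (error handling omitted).
StereoHandle SetUpDirectMode(IUnknown* d3d11Device)
{
    NvAPI_Initialize();
    // Must happen BEFORE the D3D11 device is created.
    NvAPI_Stereo_SetDriverMode(NVAPI_STEREO_DRIVER_MODE_DIRECT);

    // ... create the D3D11 device and the 2x-width swap chain here ...

    StereoHandle stereo = nullptr;
    NvAPI_Stereo_CreateHandleFromIUnknown(d3d11Device, &stereo);
    NvAPI_Stereo_Activate(stereo);
    return stereo;
}

// Per frame: select an eye, draw the scene for it, repeat, then Present.
void RenderEye(StereoHandle stereo, bool left)
{
    NvAPI_Stereo_SetActiveEye(stereo,
        left ? NVAPI_STEREO_EYE_LEFT : NVAPI_STEREO_EYE_RIGHT);
    // ... draw the scene for this eye ...
}
```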

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

#32
Posted 05/08/2017 08:23 AM   
[quote="bo3b"]For Unreal games, I like that idea, but don't know enough about Unreal to know how feasible this will be. If they have a plug-in architecture where we can hook into the engine, we can very likely make it work there too.[/quote] Maybe this? https://ue4arch.com/forum/forum/main-boards/general-discussion/108-cool-vr-plugin-for-unreal Or what about that directX to OpenGL that Nvidia did. It's on github. They must have used a hook early in the draw calls, perhaps you can leverage it? [url=https://developer.nvidia.com/sites/default/files/akamai/gameworks/events/gdc14/GDC_14_Bringing%20Unreal%20Engine%204%20to%20OpenGL.pdf] PDF[/url] The video is a good [url=http://www.gdcvault.com/play/1020663/Bringing-Unreal-Engine-4-to]watch[/url] as well
bo3b said:For Unreal games, I like that idea, but don't know enough about Unreal to know how feasible this will be. If they have a plug-in architecture where we can hook into the engine, we can very likely make it work there too.


Maybe this?
https://ue4arch.com/forum/forum/main-boards/general-discussion/108-cool-vr-plugin-for-unreal

Or what about that directX to OpenGL that Nvidia did. It's on github. They must have used a hook early in the draw calls, perhaps you can leverage it? PDF

The video is a good watch as well

#33
Posted 05/08/2017 09:57 AM   
[quote="bo3b"] My code is up on GitHub for anyone interested. [url]https://github.com/bo3b/3D-Vision-Direct[/url] I'll be improving this a bit more, it's more complicated than it needs to be, and some comments are wrong. But, it's fully functional. DX11 code (C++) using 3D Vision Direct. [/quote] Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in. Btw I'm getting linking errors when trying to build it in VS. Am i missing a library? I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras. [quote="bo3b"] @sgrules: Do you think we need a DX9 version? Now that I know how it all works, I'm sure I can make a DX9 variant if that would be necessary for older Unity games. [/quote] I'd say we keep things simple for now and focus on DX11. I think most of the notable DX9 unity games have already been fixed. [quote="bo3b"] For Unreal games, I like that idea, but don't know enough about Unreal to know how feasible this will be. If they have a plug-in architecture where we can hook into the engine, we can very likely make it work there too. [/quote] I don't really know much about UE either, i've done some quick searching and came up empty. I imagine this could be done by building a plugin for it but the issue is getting UE to actually load it. Maybe an existing plugin could be wrapped and then the custom one would be piggy backed on to it. [quote="D-Man11"] Maybe this? https://ue4arch.com/forum/forum/main-boards/general-discussion/108-cool-vr-plugin-for-unreal Or what about that directX to OpenGL that Nvidia did. It's on github. They must have used a hook early in the draw calls, perhaps you can leverage it? PDF [/quote] That could be useful info. 
I think the main hurdle is getting UE to load a plugin or inject code into it. [quote="bo3b"] So adding this Direct Mode should be relatively easy, and can very likely be done directly from C#, by using about 8 nvapi helper routines.[/quote] This all seems pretty doable. If it's ok with you once we finish the fix via the injector I would like to create a native unity plugin and release it for free in the Asset store. I'm hoping that a simple solution will entice more developers to use 3d vision and keep the tech alive. Another idea I've had was to create a open source alternative to nvidia's CM mode or Tridef 3d. This way we wouldn't have to muck about with profiles on games that we can't fix. I've already added native CM support to Helifax's OpenGL wrapper, I'm currently cleaning up the code. I've been using it to add 3d to Zelda, Breath of the Wild. I think my implementation looks better than any other CM solution i'v seen. This is getting a bit off topic so when i have a bit more time (I'm at work right not ;) ) I'll start a new thread. Thanks again for your MANY contributions to the 3dVision scene Bo3b. Cheers!
bo3b said:

My code is up on GitHub for anyone interested.

https://github.com/bo3b/3D-Vision-Direct

I'll be improving this a bit more, it's more complicated than it needs to be, and some comments are wrong. But, it's fully functional. DX11 code (C++) using 3D Vision Direct.

Great stuff. I quickly browsed through the code and it mostly makes sense; I'll let you know if I have any questions. I'll wait for you to finish up before I dive in.

Btw, I'm getting linking errors when trying to build it in VS. Am I missing a library?

I also spotted some code to modify the projection matrix for stereo use. I normally just create off-center matrices from scratch, but this could potentially resolve the issues I'm having with INSIDE's reflection cameras.

bo3b said:
@sgrules: Do you think we need a DX9 version? Now that I know how it all works, I'm sure I can make a DX9 variant if that would be necessary for older Unity games.

I'd say we keep things simple for now and focus on DX11. I think most of the notable DX9 Unity games have already been fixed.
bo3b said:
For Unreal games, I like that idea, but don't know enough about Unreal to know how feasible this will be. If they have a plug-in architecture where we can hook into the engine, we can very likely make it work there too.

I don't really know much about UE either; I've done some quick searching and came up empty. I imagine this could be done by building a plugin for it, but the issue is getting UE to actually load it. Maybe an existing plugin could be wrapped, and the custom one piggybacked onto it.

D-Man11 said:
Maybe this?
https://ue4arch.com/forum/forum/main-boards/general-discussion/108-cool-vr-plugin-for-unreal

Or what about that directX to OpenGL that Nvidia did. It's on github. They must have used a hook early in the draw calls, perhaps you can leverage it? PDF

That could be useful info. I think the main hurdle is getting UE to load a plugin or inject code into it.

bo3b said:
So adding this Direct Mode should be relatively easy, and can very likely be done directly from C#, by using about 8 nvapi helper routines.


This all seems pretty doable. If it's ok with you, once we finish the fix via the injector, I would like to create a native Unity plugin and release it for free on the Asset Store. I'm hoping that a simple solution will entice more developers to use 3D Vision and keep the tech alive.

Another idea I've had is to create an open-source alternative to Nvidia's CM mode or TriDef 3D. This way we wouldn't have to muck about with profiles on games that we can't fix. I've already added native CM support to Helifax's OpenGL wrapper; I'm currently cleaning up the code. I've been using it to add 3D to Zelda: Breath of the Wild, and I think my implementation looks better than any other CM solution I've seen. This is getting a bit off topic, so when I have a bit more time (I'm at work right now ;) ) I'll start a new thread.

Thanks again for your MANY contributions to the 3D Vision scene, Bo3b.

Cheers!

Like my work? You can send a donation via Paypal to sgs.rules@gmail.com

Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z

#34
Posted 05/08/2017 08:04 PM   
[quote="sgsrules"]Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in. Btw I'm getting linking errors when trying to build it in VS. Am i missing a library? I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras. [/quote]Are you trying to build for x32? I didn't add the nvapi library, under the assumption that we probably don't need an x32 target for DX11 games. Does that seem reasonable, or are there Unity games that can benefit from x32? If it's not missing nvapi, let me know what error you are seeing. I used VS2013 for this, because I have a full license for it. I don't think this is dependent on anything funny like an old DirectX SDK, and would expect it to compile and build with VS2015. I don't think the projection matrix modification will solve the reflection problem directly, but if we can narrow down the actual matrix (named or even just a raw cb(n) buffer), then we can apply that formula to it. The projection matrix modification is doing the same thing your off-center matrix is doing, just modifying an existing matrix instead. It's equivalent to multiplying the starting matrix by an x/y/z/w translation matrix with only x=separation. However, if we can catch their starting matrix early enough and modify it to suit, then that modification would presumably percolate to the reflection matrix. [quote]I'd say we keep things simple for now and focus on DX11. I think most of the notable DX9 unity games have already been fixed.[/quote]Sounds good, no DX9 variant until we need it. [quote]This all seems pretty doable. If it's ok with you once we finish the fix via the injector I would like to create a native unity plugin and release it for free in the Asset store. 
I'm hoping that a simple solution will entice more developers to use 3d vision and keep the tech alive.[/quote]That sounds great. Please feel free to build and release anything you like with it. I set it to a very permissive MIT license because that's what the Microsoft code used, and also to support people doing anything they like. Please let me know if you'd like my help with other pieces, or the setup for the nvapi. I'm pretty stoked that we'll have the ability to do 3D Vision Direct.
sgsrules said:Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in.

Btw I'm getting linking errors when trying to build it in VS. Am i missing a library?

I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras.
Are you trying to build for x32? I didn't add the nvapi library, under the assumption that we probably don't need an x32 target for DX11 games.

Does that seem reasonable, or are there Unity games that can benefit from x32?

If it's not missing nvapi, let me know what error you are seeing. I used VS2013 for this, because I have a full license for it. I don't think this is dependent on anything funny like an old DirectX SDK, and would expect it to compile and build with VS2015.


I don't think the projection matrix modification will solve the reflection problem directly, but if we can narrow down the actual matrix (named or even just a raw cb(n) buffer), then we can apply that formula to it.

The projection matrix modification is doing the same thing your off-center matrix is doing, just modifying an existing matrix instead. It's equivalent to multiplying the starting matrix by an x/y/z/w translation matrix with only x=separation.

However, if we can catch their starting matrix early enough and modify it to suit, then that modification would presumably percolate to the reflection matrix.
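That equivalence can be checked numerically in a few lines of standalone C++ (my own minimal helpers; column-vector convention with w_clip = z_view, and the separation/convergence/point values are made up). Adding separation to _13 and -separation * convergence to _14 reproduces the standard stereo transform clip.x' = clip.x + separation * (clip.w - convergence):

```cpp
#include <cassert>
#include <cmath>

// Apply just the x and w rows of a projection matrix to a view-space
// point (x, y, z, 1); column-vector convention, w row = [0, 0, 1, 0].
struct ClipXW { double x, w; };

static ClipXW project(double a, double m13, double m14, double x, double z) {
    // x row: [a, 0, m13, m14]
    return { a * x + m13 * z + m14, z };
}

// Difference between the _13/_14-modified projection and the stereo
// formula clip.x' = clip.x + separation * (clip.w - convergence).
double stereoShiftError() {
    const double a = 1.5;                 // arbitrary x scale
    const double sep = 0.06, conv = 10.0; // hypothetical separation/convergence
    const double px = 2.0, pz = 25.0;     // some view-space point

    ClipXW mono   = project(a, 0.0, 0.0, px, pz);
    // _13 += separation; _14 = -separation * convergence
    ClipXW stereo = project(a, sep, -sep * conv, px, pz);

    double expected = mono.x + sep * (mono.w - conv);
    return std::fabs(stereo.x - expected);
}
```

The _14 term is exactly the x-translation scaled by w, which is why this matches multiplying the original matrix by a translation with only x set.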


I'd say we keep things simple for now and focus on DX11. I think most of the notable DX9 unity games have already been fixed.
Sounds good, no DX9 variant until we need it.


This all seems pretty doable. If it's ok with you once we finish the fix via the injector I would like to create a native unity plugin and release it for free in the Asset store. I'm hoping that a simple solution will entice more developers to use 3d vision and keep the tech alive.
That sounds great. Please feel free to build and release anything you like with it. I set it to a very permissive MIT license because that's what the Microsoft code used, and also to support people doing anything they like.


Please let me know if you'd like my help with other pieces, or the setup for the nvapi. I'm pretty stoked that we'll have the ability to do 3D Vision Direct.


#35
Posted 05/09/2017 05:55 AM   
[quote="bo3b"][quote="sgsrules"]Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in. Btw I'm getting linking errors when trying to build it in VS. Am i missing a library? I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras. [/quote]Are you trying to build for x32? I didn't add the nvapi library, under the assumption that we probably don't need an x32 target for DX11 games. Does that seem reasonable, or are there Unity games that can benefit from x32? If it's not missing nvapi, let me know what error you are seeing. I used VS2013 for this, because I have a full license for it. I don't think this is dependent on anything funny like an old DirectX SDK, and would expect it to compile and build with VS2015. [/quote] oh crap, i didn't check the build target so that's probably it, now i feel a little sheepish. I'll check tomorrow. [quote="bo3b"] I don't think the projection matrix modification will solve the reflection problem directly, but if we can narrow down the actual matrix (named or even just a raw cb(n) buffer), then we can apply that formula to it. The projection matrix modification is doing the same thing your off-center matrix is doing, just modifying an existing matrix instead. It's equivalent to multiplying the starting matrix by an x/y/z/w translation matrix with only x=separation. However, if we can catch their starting matrix early enough and modify it to suit, then that modification would presumably percolate to the reflection matrix. [/quote] I already know exactly which matrix it is. The issue is that the game is calculating an oblique matrix for the reflections. 
when it calculates the oblique matrix it recalculates the projection matrix first only using the fov value and some default values for the near and far clip planes, it completely over writes the off center projection I've made. After it creates this standard projection matrix it calculates a clipping plane and generates an oblique matrix with it and the new projection matrix. If i knew what the clipping plane values were i could apply the oblique transformation to my off center projection, but unfortunately i don't. I also can't just use the regular off center projection since it doesn't properly clip the objects at the waters surface. That's why i figured that it would maybe be possible to simply modify the oblique projection that the game is calculating. Does that make sense? [quote="bo3b"] Please let me know if you'd like my help with other pieces, or the setup for the nvapi. I'm pretty stoked that we'll have the ability to do 3D Vision Direct.[/quote] stoked as well, thanks for the help!
bo3b said:
sgsrules said:Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in.

Btw I'm getting linking errors when trying to build it in VS. Am i missing a library?

I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras.
Are you trying to build for x32? I didn't add the nvapi library, under the assumption that we probably don't need an x32 target for DX11 games.

Does that seem reasonable, or are there Unity games that can benefit from x32?

If it's not missing nvapi, let me know what error you are seeing. I used VS2013 for this, because I have a full license for it. I don't think this is dependent on anything funny like an old DirectX SDK, and would expect it to compile and build with VS2015.

Oh crap, I didn't check the build target, so that's probably it; now I feel a little sheepish. I'll check tomorrow.
bo3b said:
I don't think the projection matrix modification will solve the reflection problem directly, but if we can narrow down the actual matrix (named or even just a raw cb(n) buffer), then we can apply that formula to it.

The projection matrix modification is doing the same thing your off-center matrix is doing, just modifying an existing matrix instead. It's equivalent to multiplying the starting matrix by an x/y/z/w translation matrix with only x=separation.

However, if we can catch their starting matrix early enough and modify it to suit, then that modification would presumably percolate to the reflection matrix.

I already know exactly which matrix it is. The issue is that the game is calculating an oblique matrix for the reflections. When it calculates the oblique matrix, it first recalculates the projection matrix using only the FOV value and some default values for the near and far clip planes, which completely overwrites the off-center projection I've made. After it creates this standard projection matrix, it calculates a clipping plane and generates an oblique matrix from it and the new projection matrix. If I knew what the clipping-plane values were, I could apply the oblique transformation to my off-center projection, but unfortunately I don't. I also can't just use the regular off-center projection, since it doesn't properly clip the objects at the water's surface. That's why I figured it might be possible to simply modify the oblique projection that the game is calculating. Does that make sense?

bo3b said:
Please let me know if you'd like my help with other pieces, or the setup for the nvapi. I'm pretty stoked that we'll have the ability to do 3D Vision Direct.


Stoked as well; thanks for the help!


#36
Posted 05/09/2017 06:36 AM   
[quote="sgsrules"][quote="bo3b"][quote="sgsrules"]Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in. Btw I'm getting linking errors when trying to build it in VS. Am i missing a library? I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras. [/quote]Are you trying to build for x32? I didn't add the nvapi library, under the assumption that we probably don't need an x32 target for DX11 games. Does that seem reasonable, or are there Unity games that can benefit from x32? If it's not missing nvapi, let me know what error you are seeing. I used VS2013 for this, because I have a full license for it. I don't think this is dependent on anything funny like an old DirectX SDK, and would expect it to compile and build with VS2015.[/quote]oh crap, i didn't check the build target so that's probably it, now i feel a little sheepish. I'll check tomorrow.[/quote]I'll go ahead and make it build for x32 as well, since it's an example program. Other people might need that build target. [quote="sgsrules"][quote="bo3b"]I don't think the projection matrix modification will solve the reflection problem directly, but if we can narrow down the actual matrix (named or even just a raw cb(n) buffer), then we can apply that formula to it. The projection matrix modification is doing the same thing your off-center matrix is doing, just modifying an existing matrix instead. It's equivalent to multiplying the starting matrix by an x/y/z/w translation matrix with only x=separation. However, if we can catch their starting matrix early enough and modify it to suit, then that modification would presumably percolate to the reflection matrix. [/quote]I already know exactly which matrix it is. 
The issue is that the game is calculating an oblique matrix for the reflections. when it calculates the oblique matrix it recalculates the projection matrix first only using the fov value and some default values for the near and far clip planes, it completely over writes the off center projection I've made. After it creates this standard projection matrix it calculates a clipping plane and generates an oblique matrix with it and the new projection matrix. If i knew what the clipping plane values were i could apply the oblique transformation to my off center projection, but unfortunately i don't. I also can't just use the regular off center projection since it doesn't properly clip the objects at the waters surface. That's why i figured that it would maybe be possible to simply modify the oblique projection that the game is calculating. Does that make sense?[/quote]For this case, it does sound like using the _31 style += will solve the problem. Because that's a simple translation, the original matrix will still have the fov, clipping, and oblique transformation. (not oblique after translated, but you know.) Should work to use this approach instead of rebuilding the matrix from scratch. Because the off-center translation would be the same for both the original projection matrix and these oblique reflection matrices. If you can get access to the reflection matrix after it's built, but before it's used, I think this will work. Left/right clipping might still be a problem, because we are making it non-oblique, and we can get scissor clipping effects. Might need a tweak of some form for this.
sgsrules said:
bo3b said:
sgsrules said:Great stuff. I quickly browsed through the code and it mostly makes sense, i'll let you know if i have any questions. I'll wait for you to finish up before i dive in.

Btw I'm getting linking errors when trying to build it in VS. Am i missing a library?

I also spotted some code to modify the projection matrix for stereo use. I normally just create off center matrices from scratch, but this could potentially resolve the issues i'm having with INSIDE's reflection cameras.
Are you trying to build for x32? I didn't add the nvapi library, under the assumption that we probably don't need an x32 target for DX11 games.

Does that seem reasonable, or are there Unity games that can benefit from x32?

If it's not missing nvapi, let me know what error you are seeing. I used VS2013 for this, because I have a full license for it. I don't think this is dependent on anything funny like an old DirectX SDK, and would expect it to compile and build with VS2015.
oh crap, i didn't check the build target so that's probably it, now i feel a little sheepish. I'll check tomorrow.
I'll go ahead and make it build for x32 as well, since it's an example program. Other people might need that build target.


sgsrules said:
bo3b said:I don't think the projection matrix modification will solve the reflection problem directly, but if we can narrow down the actual matrix (named or even just a raw cb(n) buffer), then we can apply that formula to it.

The projection matrix modification is doing the same thing your off-center matrix is doing, just modifying an existing matrix instead. It's equivalent to multiplying the starting matrix by an x/y/z/w translation matrix with only x=separation.

However, if we can catch their starting matrix early enough and modify it to suit, then that modification would presumably percolate to the reflection matrix.
I already know exactly which matrix it is. The issue is that the game is calculating an oblique matrix for the reflections. when it calculates the oblique matrix it recalculates the projection matrix first only using the fov value and some default values for the near and far clip planes, it completely over writes the off center projection I've made. After it creates this standard projection matrix it calculates a clipping plane and generates an oblique matrix with it and the new projection matrix. If i knew what the clipping plane values were i could apply the oblique transformation to my off center projection, but unfortunately i don't. I also can't just use the regular off center projection since it doesn't properly clip the objects at the waters surface. That's why i figured that it would maybe be possible to simply modify the oblique projection that the game is calculating. Does that make sense?
For this case, it does sound like using the _31 style += will solve the problem. Because that's a simple translation, the original matrix will still have the fov, clipping, and oblique transformation. (not oblique after translated, but you know.)

It should work to use this approach instead of rebuilding the matrix from scratch, because the off-center translation would be the same for both the original projection matrix and these oblique reflection matrices. If you can get access to the reflection matrix after it's built, but before it's used, I think this will work.

Left/right clipping might still be a problem, because we are making it non-oblique, and we can get scissor clipping effects. Might need a tweak of some form for this.


#37
Posted 05/10/2017 03:49 AM   
@Bo3b, I've gone over the code and got it running... yep, I had forgotten to change the build target. I don't see any issues calling most of these functions from within Unity via its native plugin interface. The initialization stuff that needs to be called before device creation, e.g. NvAPI_Initialize and NvAPI_Stereo_SetDriverMode, would be a problem, since Unity doesn't expose any of that. The other issue is setting the swap-chain resolution to be 2x wide; as far as I know, Unity doesn't allow non-standard resolutions. I'm sure these two issues could be addressed by using some sort of hook, but that is beyond my current skills.
@Bo3b, I've gone over the code and got it running... yep, i had forgotten to change the build target. I don't see any issues calling most of these functions from within Unity via it's native plugin interface. The initialization stuff that needs to be called before device creation e.g. NvAPI_Initialize and NvAPI_Stereo_SetDriverMode would be a problem since Unity doesn't expose any of that stuff. The other issue is setting the swap chain resolution to be 2x Wide. As far as i know, Unity doesn't allow non-standard resolutions. I'm sure these two issues could be addressed by using some sort of hook but that is beyond my current skills.

Like my work? You can send a donation via Paypal to sgs.rules@gmail.com

Windows 7 Pro 64x - Nvidia Driver 398.82 - EVGA 980Ti SC - Optoma HD26 with Edid override - 3D Vision 2 - i7-8700K CPU at 5.0Ghz - ASROCK Z370 Ext 4 Motherboard - 32 GB RAM Corsair Vengeance - 512 GB Samsung SSD 850 Pro - Creative Sound Blaster Z

#38
Posted 05/10/2017 05:22 AM   
@Bo3b, I tried fixing the reflections by modifying the projection. It didn't work, but at least now I know exactly what the issue is. I posted on the INSIDE thread since it's more relevant there.


#39
Posted 05/10/2017 09:59 PM   
Hey Bo3b, I was looking over your code and I noticed that you don't modify the view matrix at all. All your stereo calculations are done using just the projection matrix: the separation is set by shifting the projection._13 value, and the convergence is applied by setting projection._14 to -separation * convergence. This is what tripped me up. I've always set up my stereo pairs by adjusting both my view matrix and projection matrix. I offset my view matrix along its x axis by half the separation value for each eye, and then do a separate calculation for my projection matrix:
[code]
public void GetStereoProjectionMatrix(bool isRightEye, Camera camera)
{
    float fovRadians = Mathf.Deg2Rad * camera.fieldOfView;
    float w = camera.nearClipPlane * Mathf.Tan(fovRadians / 2);

    float dist = Separation * .5f;
    if (!isRightEye) dist *= -1;

    float top = w;
    float bottom = -w;

    float left = -camera.aspect * w - dist * camera.nearClipPlane / FocalDistance;
    float right = camera.aspect * w - dist * camera.nearClipPlane / FocalDistance;

    camera.projectionMatrix = PerspectiveOffCenter(left, right, bottom, top, camera.nearClipPlane, camera.farClipPlane);
}

static Matrix4x4 PerspectiveOffCenter(float left, float right, float bottom, float top, float near, float far)
{
    Matrix4x4 m = new Matrix4x4();
    m[0, 0] = 2f * near / (right - left);
    m[0, 1] = 0;
    m[0, 2] = (right + left) / (right - left);
    m[0, 3] = 0;
    m[1, 0] = 0;
    m[1, 1] = 2f * near / (top - bottom);
    m[1, 2] = (top + bottom) / (top - bottom);
    m[1, 3] = 0;
    m[2, 0] = 0;
    m[2, 1] = 0;
    m[2, 2] = -(far + near) / (far - near);
    m[2, 3] = -(2f * far * near) / (far - near);
    m[3, 0] = 0;
    m[3, 1] = 0;
    m[3, 2] = -1f;
    m[3, 3] = 0;
    return m;
}
[/code]

So when I was trying to shift the projection matrix I was applying extra separation, since this had already been done in the view matrix.
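For what it's worth, the two formulations can be related by a small identity: with left and right both shifted by d = dist * near / FocalDistance as in the code above, the resulting matrix is just the symmetric projection with an extra skew of -(dist / FocalDistance) * m[0,0] added in the [0,2] slot. A quick numeric check (plain C++ rather than Unity C#; names are illustrative, conventions match the snippet above):

```cpp
#include <cassert>
#include <cmath>

// Only the two entries that differ between the eyes matter here
// (column-vector convention, as in the Unity snippet above):
// m00 = 2n/(r-l), m02 = (r+l)/(r-l).
struct Proj { float m00, m02; };

Proj PerspectiveOffCenterX(float left, float right, float near_)
{
    return { 2.0f * near_ / (right - left),
             (right + left) / (right - left) };
}

// Stereo frustum for one eye: both planes shifted by d = dist*near/focal.
Proj StereoEye(float aspect, float w, float near_, float dist, float focal)
{
    float d = dist * near_ / focal;
    return PerspectiveOffCenterX(-aspect * w - d, aspect * w - d, near_);
}
```

Checking the identity numerically shows m00 is unchanged between the eyes and the whole per-eye difference lives in the m02 skew term, which is why applying it twice (view and projection) doubles the separation.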

I used your method in INSIDE, which is definitely simpler, and the reflections work straight off the bat, except that all the lighting and shadows get broken. Unity's view matrix needs to be updated for a lot of its effects to render properly, so simply changing the projection matrix isn't an option.


#40
Posted 05/12/2017 01:30 AM   
Very interesting. Also good to know that just applying the _31/_41 correction works for reflections; I couldn't explain why it didn't work before.


For doing the correction, I think it doesn't matter where in the pipeline we do this.

In fact, I just tried the sample program, and moved the correction code to use the cbNeverChanges which holds the mView instead. When I change the code to be:

[code]
...
cbNever.mView = g_View;
cbNever.mView._31 -= separation;
cbNever.mView._41 = convergence;
cbNever.mView = XMMatrixTranspose(cbNever.mView);
g_pImmediateContext->UpdateSubresource(g_pCBNeverChanges, 0, nullptr, &cbNever, 0, 0);
[/code]


This still works. The cube is drawn in the right location, and varies with convergence and separation like usual.

It's also worth noting that I tried the same experiment with the cbAlways.mWorld matrix, and it also works correctly. So we can modify this at Model, View, or Projection.


So for the INSIDE Unity case, I'd say it looks like you can put the correction where it makes the most sense for you. If I'm reading this correctly, that would be on the View matrix, which would work for both the shadows and reflections, because the starting view matrix for projection would already be adjusted, and hence the reflection projection should still work.

It seems to me that you'd be better off to just modify the view matrix that Unity is providing, instead of creating a new one from scratch. If you create it from scratch without applying the convergence, then convergence changes won't modify the shadows, which would take away the ability to do toyification.

Does that seem right?


#41
Posted 05/12/2017 05:55 AM   
Bo3b, unfortunately applying the changes to the view matrix instead of the projection matrix does not work. It works in your test case as is, but try changing the camera's view vector: shift the target to the left or right. Then when you apply the same offset to the view matrix instead of the projection matrix you'll get completely different results. The offset only works properly once you have things in view space (or if everything is nicely aligned).

I would really like to understand how these values change the matrix. Then I could apply the same fix to the projection matrix while taking into account the lateral shift that I'm applying to the view matrix.

I'm also applying these changes in different coordinate spaces. Using the method in your code, I think the offset values are being measured in clip space, so they're going to be universal. I'm applying the offset in view space to shift the view matrix; the same separation value is then converted to clip space using the equation I posted and then used to shift the projection matrix.

I think what would end up working is to adjust the view matrix and projection matrix like I did before for the regular cameras; this way lighting, shadows, and other effects would still work. For the reflective cameras, since they're copying the values from the modified stereo camera, I would shift them back to the original state and then apply the _31 and _41 (in my case _13 and _14, since I don't need to transpose the matrix) projection offsets, which I would have to convert from view-space to clip-space values.

I'll try this later tonight. The toe-in method works, but the reflections are still slightly off and cause a bit of eye strain... or at least I think they do... it's getting hard to tell at this point.


#42
Posted 05/12/2017 06:04 PM   
OK, good to know. I kinda figured there would be something wrong with that in general, because our fixes for world-space corrections are different from view-space corrections.

That also makes more sense from the math standpoint. Doing a change at the wrong spot in the matrix, or in the wrong order, always changes the end result (unless some parameters are zero, of course). A translation after a rotation is different from a rotation before a translation.
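That order dependence is easy to check with a toy 2D example (plain C++, a 90-degree rotation and an x translation; purely illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// 90-degree counter-clockwise rotation about the origin.
Vec2 rotate90(Vec2 p)            { return { -p.y, p.x }; }
// Translation along x.
Vec2 translate(Vec2 p, float tx) { return { p.x + tx, p.y }; }
```

Applying translate-then-rotate and rotate-then-translate to the same point lands in two different places, which is exactly why the _31/_41 fix behaves differently depending on where in the Model/View/Projection chain it is applied.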


For the first parameter, that is simply a translation in X. There is no difference between that code and doing a normal translation. If I do this:

[code]
...
cbChangesOnResize.mProjection = cbChangesOnResize.mProjection * XMMatrixTranslation(-separation, 0, 0);
//cbChangesOnResize.mProjection._31 -= separation;
[/code]


I get the same results.
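One way to see why those two lines agree (a sketch in plain C++ with bare arrays instead of DirectXMath, row-major row-vector convention; names are illustrative): post-multiplying by a translation adds tx to the first column weighted by each row's fourth entry, and in a perspective projection only _34 is nonzero there (with _44 = 0), so only _31 picks up the offset.

```cpp
#include <cassert>
#include <cmath>

// Row-major 4x4 multiply: out = a * b (row-vector convention).
void Mul(const float a[4][4], const float b[4][4], float out[4][4])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            out[r][c] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[r][c] += a[r][k] * b[k][c];
        }
}

// Identity with a translation in the fourth row, like XMMatrixTranslation.
void Translation(float tx, float out[4][4])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r][c] = (r == c) ? 1.0f : 0.0f;
    out[3][0] = tx;
}
```

Multiplying a projection-shaped matrix by Translation(-separation, ...) reproduces the direct `_31 -= separation` edit entry for entry.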


The second parameter of _41, I don't understand. Any idea what that parameter is normally used for in matrices? I'd much rather do more clear matrix math than this hacky direct setting.

I think that in principle we should be able to use normal matrix manipulations, not this direct slamming of a specific entry. But I don't know what that _41 entry is used for normally, or what the normal math would be.


That sounds right to de-shift the reflective cameras, and then apply the separation and convergence there.

The only complexity I see there is that your Separation value is a fixed value, not the actual NVidia separation. Unless I'm missing something, that would =1, because we set separation as low as it can be, to disable Automatic. You would presumably use your fixed Separation value, but then I'm not clear on what convergence to use.

Having it at a fixed separation is OK, but will be problematic for projector users. We don't presently have a good way of sharing any keypress settings to other code like this injector. We can share with shaders, but not other code.


#43
Posted 05/12/2017 11:44 PM   
bo3b said:

The second parameter of _41, I don't understand. Any idea what that parameter is normally used for in matrices? I'd much rather do more clear matrix math than this hacky direct setting.


I think that in principle we should be able to use normal matrix manipulations, not this direct slamming of a specific entry. But I don't know what that _41 entry is used for normally, or what the normal math would be.

When multiplying a point by a matrix, the x value is calculated like:
[code]
float newX = x * m.11 + y * m.12 + z * m.13 + w * m.14; // column major, column vectors
[/code]

In a normal matrix the _41 value (or its transpose _14 if column major) is used to translate a point along the x axis, so it's applying an x translation that accounts for the convergence. I know that much, but I don't really know how it applies to the rest of the projection matrix. I've done all my code using normal matrix manipulations, since it's easier to wrap my head around.
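The role of each entry falls straight out of that multiply: _13 contributes an x offset proportional to z (depth-dependent separation), while _14 contributes a constant offset scaled by w (the convergence shift). A minimal sketch (plain C++, column-major column-vector convention to match the _13/_14 indices used here; names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };

// x component of m * p, column-major with column vectors:
// newX = m[0][0]*x + m[0][1]*y + m[0][2]*z + m[0][3]*w
float TransformX(const float m[4][4], Vec4 p)
{
    return m[0][0] * p.x + m[0][1] * p.y + m[0][2] * p.z + m[0][3] * p.w;
}
```

Subtracting separation from _13 and writing -separation * convergence into _14 therefore shifts clip-space x by -separation * z - separation * convergence before the perspective divide, which is the whole stereo correction.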

A lot of the code I'm using is from stuff I used in an OpenGL engine I made. I could never get 3D Vision to work with it, but I was able to render videos to disk to view later in a stereoscopic player. I mostly made abstract art and trippy stuff like 3D fractals. I wanted to get "perfect" 3D in my app, so I would input my IPD, screen size, and viewing distance, as well as a world scaling value, and it would render everything perfectly for my man cave. Since then I've ported a lot of it over to Unity, since I could get 3D Vision to work with it. It would be amazing if 3D Vision Direct would work as well, but the current workaround of 3Dmigoto and setting the separation in Automatic to 0 works for now.
bo3b said:

That sounds right to de-shift the reflective cameras, and then apply the separation and convergence there.

The only complexity I see there is that your Separation value is a fixed value, not the actual NVidia separation. Unless I'm missing something, that would =1, because we set separation as low as it can be, to disable Automatic. You would presumably use your fixed Separation value, but then I'm not clear on what convergence to use.

Having it at a fixed separation is OK, but will be problematic for projector users. We don't presently have a good way of sharing any keypress settings to other code like this injector. We can share with shaders, but not other code.


I'm actually a projector user, I've got an H5360 as well :) I've already added hotkeys to adjust convergence and separation. My values are in no way related to nvidia's.


#44
Posted 05/13/2017 01:00 AM   
sgsrules said:
bo3b said:The second parameter of _41, I don't understand. Any idea what that parameter is normally used for in matrices? I'd much rather do more clear matrix math than this hacky direct setting.


I think that in principle we should be able to use normal matrix manipulations, not this direct slamming of a specific entry. But I don't know what that _41 entry is used for normally, or what the normal math would be.
when multiplying a point by a matrix the x value is calculated like:
float newX = x * m.11 + y * m.12 + z * m.13 + w * m.14; // column major, column vectors

In a normal matrix the _41 value (or its transpose _14 if column major) is used to translate a point along the x axis, so it's applying a x translation as well that accounts for the convergence. I know that much but i don't really know how it applies to the rest of the projection matrix. I've done all my code using normal matrix manipulations, since it's easier to wrap my head around.
Ah, very interesting. I'll dig into this a bit because I want to understand this _41 thing.


sgsrules said:A lot of the code i'm using is from stuff i used in a a Opengl engine i made. I could never get 3d vision to work with it but i was able to render videos to disk to view later in a stereoscopic player, I mostly made abstract art and trippy stuff like 3d fractals. I wanted to get "perfect" 3d in my app, so I would input my IPD, screen size, and viewing distance, as well as a world scaling value and it would render everything perfectly for my man cave. Since then I've ported a lot of it over to Unity since i could get 3dVision to work with it. it would be amazing if 3d vision direct would work as well but the current workaround of 3dmigoto and setting the separation in automatic to 0 works for now.
I'll have to check out your fractal drawing. A very long time ago, I made some of the first fractal and Mandelbrot apps for Macs. On a 640x480 screen, it would take only 25 minutes to calculate a Mandelbrot set, because I wrote the inner loop in assembly. I also made a wallpaper variant, that would output to a color printer ($3/page) to do wall sized paper. Something like full 3 weeks of computation.

Having that in 3D sounds killer.


sgsrules said:
bo3b said:That sounds right to de-shift the reflective cameras, and then apply the separation and convergence there.

The only complexity I see there is that your Separation value is a fixed value, not the actual NVidia separation. Unless I'm missing something, that would =1, because we set separation as low as it can be, to disable Automatic. You would presumably use your fixed Separation value, but then I'm not clear on what convergence to use.

Having it at a fixed separation is OK, but will be problematic for projector users. We don't presently have a good way of sharing any keypress settings to other code like this injector. We can share with shaders, but not other code.
I'm actually a projector user, I've got an H5360 as well :) I've already added hotkeys to adjust convergence and separation. My values are in no way related to nvidia's.
Ah right-o, forgot that you are embedded into Unity, so you can do whatever it supports as well.

If you want to set nvidia separation to exactly zero, one trick I've used is to map a normal game key, like left/right in this game, to a key override and set it there. It will get set at each user action, but won't cause any problems. This is nice because it makes it transparent to the user, without having to hit anything specific to start.


However, if you think it makes sense to add this to 3Dmigoto, I think we can possibly do that. Changing the d3dx.ini setting of force_stereo=2 would be mostly reasonable for this.

Setting the driver mode to Direct is easy, and doubling the backbuffer is also easy.

Doubling the other pieces like the ViewPorts, Stencil, and RenderTargetViews is not as clear to me, but might be as simple as just doubling whatever we see passed in. Then any shaders still get their normal sizes, so nothing else needs to change.


#45
Posted 05/13/2017 01:49 AM   