[OpenGL] 3D Vision Wrapper - Enabling 3D Vision in OpenGL apps

What game should I fix next?

The Chronicles of Riddick: Assault on Dark Athena & Escape from Butcher Bay
Neverwinter Nights
Penumbra: Requiem
Penumbra: Overture
Penumbra: Black Plague
Return to Castle Wolfenstein
Rage
Star Wars Knights of the Old Republic (Kotor 1 & 2)
Half Life 1 (Series)
[quote="murilladas"][quote="dekgol"]Hi, Any thoughts of making a wrapper for THE EVIL WITHIN ? someone? Second question: Can we use Wolfenstein wrapper in order to activate The evil within in 3D Vision (same engine , IDtech 5) Thanks,[/quote] I hate this engine, I do not understand how people can play in 3D with no eyes-synchronized. I get seasick in 10 seconds :([/quote] Luckily not everyone is you:)) and technically you can't get SEAsick...but rather 3D-sick hahaha ^_^
murilladas said:
dekgol said:Hi,
Any thoughts of making a wrapper for THE EVIL WITHIN ? someone?

Second question: Can we use Wolfenstein wrapper in order to activate The evil within in 3D Vision (same engine , IDtech 5)

Thanks,


I hate this engine, I do not understand how people can play in 3D with no eyes-synchronized. I get seasick in 10 seconds :(


Luckily not everyone is you:)) and technically you can't get SEAsick...but rather 3D-sick hahaha ^_^

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/15/2014 06:09 PM   
[quote="helifax"] Luckily not everyone is you:)) and technically you can't get SEAsick...but rather 3D-sick hahaha ^_^ [/quote] xD sorry for bad english!! (is fault of google translator :)) ) PD: With this game i will make a sacrifice and if you can fix it, i will play, however :)....i will send you the hospital invoice xDDDD.
helifax said: Luckily not everyone is you:)) and technically you can't get SEAsick...but rather 3D-sick hahaha ^_^


xD sorry for my bad English!! (it's the fault of Google Translate :)) )
PS: With this game I will make a sacrifice, and if you can fix it, I will play. However :)... I will send you the hospital invoice xDDDD.

i7 4970k@4.5Ghz, SLI GTX1080Ti Aorus Gigabyte Xtreme, 16GB G Skill 2400hrz, 3*PG258Q in 3D surround.

Posted 09/15/2014 07:09 PM   
[quote="murilladas"][quote="helifax"] Luckily not everyone is you:)) and technically you can't get SEAsick...but rather 3D-sick hahaha ^_^ [/quote] xD sorry for bad english!! (is fault of google translator :)) ) PD: With this game i will make a sacrifice and if you can fix it, i will play, however :)....i will send you the hospital invoice xDDDD. [/quote] Hahahaha:))
murilladas said:
helifax said: Luckily not everyone is you:)) and technically you can't get SEAsick...but rather 3D-sick hahaha ^_^


xD sorry for my bad English!! (it's the fault of Google Translate :)) )
PS: With this game I will make a sacrifice, and if you can fix it, I will play. However :)... I will send you the hospital invoice xDDDD.


Hahahaha:))

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/15/2014 08:33 PM   
Just a thought, no idea if this makes any sense or not.

The Vireio driver was rewritten from scratch for this very same problem. The original version made by CyberReality would alternate eyes. Vireio is open-source, and is now doing both eyes at once. It's presently Rift SDK specific, but at the end of the chain it's gotta draw a side-by-side buffer.

Maybe, maybe, you can snag that and make an OpenGL version or interface to your wrapper so that you can get the double-draw calls.

At a minimum, maybe it's worth taking a look at their code to see if you get some ideas.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 09/16/2014 01:24 AM   
[quote="bo3b"]Just a thought, no idea if this makes any sense or not. The Vireio driver was rewritten from scratch for this very same problem. The original version made by CyberReality would alternate eyes. Vireio is open-source, and is now doing both eyes at once. It's presently Rift SDK specific, but at the end of the chain it's gotta draw a side-by-side buffer. Maybe, maybe, you can snag that and make an OpenGL version or interface to your wrapper so that you can get the double-draw calls. At a minimum, maybe it's worth taking a look at their code to see if you get some ideas.[/quote] I did look at the Vireio before and couldn't find anything related to the duplication of the draw calls. I expect the draw calls duplications is done in the DK2 SDK ? Not a clue ... I sent an e-mail to the guy who wrote it... Hopefully he will come back and give a hand...
bo3b said:Just a thought, no idea if this makes any sense or not.

The Vireio driver was rewritten from scratch for this very same problem. The original version made by CyberReality would alternate eyes. Vireio is open-source, and is now doing both eyes at once. It's presently Rift SDK specific, but at the end of the chain it's gotta draw a side-by-side buffer.

Maybe, maybe, you can snag that and make an OpenGL version or interface to your wrapper so that you can get the double-draw calls.

At a minimum, maybe it's worth taking a look at their code to see if you get some ideas.



I did look at Vireio before and couldn't find anything related to the duplication of the draw calls. I expect the draw call duplication is done in the DK2 SDK? Not a clue... I sent an e-mail to the guy who wrote it... Hopefully he will come back and lend a hand...

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/16/2014 06:07 PM   
Here's the piece I was sort of looking for.

I looked at this a bit previously because in the very worst-case scenario, we can always create our own stereoscopic driver. It's not nearly as complicated as I originally thought, but of course I've taken some good beatings while getting 3Dmigoto running.

https://github.com/cybereality/Perception/blob/70a42b1ce9ebaa6d1ea2248ad2814f73f821bbdf/DxProxy/DxProxy/D3DProxyDevice.cpp

/**
* Applies all dirty shader registers, draws both stereo sides if switchDrawingSide() agrees.
* @see switchDrawingSide()
***/
HRESULT WINAPI D3DProxyDevice::DrawPrimitive(D3DPRIMITIVETYPE PrimitiveType,UINT StartVertex,UINT PrimitiveCount)
{
#ifdef SHOW_CALLS
OutputDebugString("called DrawPrimitive");
#endif
m_spManagedShaderRegisters->ApplyAllDirty(m_currentRenderingSide);

HRESULT result;
if (SUCCEEDED(result = BaseDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount))) {
if (switchDrawingSide())
BaseDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount);
}

return result;
}


I think that the basic premise that automatic mode uses is that it creates a second buffer for both front and back buffers, for left and right. So, instead of two buffers, we have four buffers instead.

Then, like this code, when a given Draw call is hit,
  1. it shifts the view left,
  2. draws once in the left eye,
  3. shifts the view to right,
  4. swaps to right eye buffer,
  5. then calls the draw again.

Since Draw calls cannot save state of any form, this doesn't hurt anything.

So, the only complexity is wrapping every possible Draw call (lots of variants like DrawIndexed). I think. Not sure without trying it of course.
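
To make the "every variant" point concrete, here is roughly what the same pattern would look like for DrawIndexedPrimitive. This is just a sketch along the lines of the snippet above (it reuses the same proxy class, m_spManagedShaderRegisters and switchDrawingSide() helper), not code copied from the Perception repo:

/**
* Sketch only: the same double-draw pattern applied to DrawIndexedPrimitive.
***/
HRESULT WINAPI D3DProxyDevice::DrawIndexedPrimitive(D3DPRIMITIVETYPE PrimitiveType, INT BaseVertexIndex,
    UINT MinVertexIndex, UINT NumVertices, UINT StartIndex, UINT PrimCount)
{
    m_spManagedShaderRegisters->ApplyAllDirty(m_currentRenderingSide);

    HRESULT result;
    if (SUCCEEDED(result = BaseDirect3DDevice9::DrawIndexedPrimitive(PrimitiveType, BaseVertexIndex,
            MinVertexIndex, NumVertices, StartIndex, PrimCount))) {
        // switchDrawingSide() flips the render target to the other eye and
        // returns true if the call should be repeated for that eye.
        if (switchDrawingSide())
            BaseDirect3DDevice9::DrawIndexedPrimitive(PrimitiveType, BaseVertexIndex,
                MinVertexIndex, NumVertices, StartIndex, PrimCount);
    }

    return result;
}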


BTW: I don't think the Rift SDK does any sort of automatic mode at all. Their basic premise is that games need to be built up with VR in mind, and don't like the idea of retrofitting old stuff.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 09/17/2014 01:44 AM   
So for the noobs like me, is it more complicated than the regular ones or not?

CPU: Intel Core i7 4770 @ 3.4GHz (8 CPUs) 8MB cache, Motherboard: Gigabyte GA-B85M, Memory: 8GB DDR3 1600MHz RAM, Operating System: Windows 7 Ultimate 64-bit English, Video card: GeForce GTX 680 Zotac 2048MB, Monitor: Alienware AW2310, Power supply: Seasonic SSP-650RT 650W Active PFC

Posted 09/17/2014 06:55 AM   
[quote="bo3b"]Here's the piece I was sort of looking for. I looked at this a bit previously because in the very worst-case scenario, we can always create our own stereoscopic driver. It's not nearly as complicated as I originally thought, but of course I've take some good beatings while getting 3Dmigoto running. https://github.com/cybereality/Perception/blob/70a42b1ce9ebaa6d1ea2248ad2814f73f821bbdf/DxProxy/DxProxy/D3DProxyDevice.cpp [code] /** * Applies all dirty shader registers, draws both stereo sides if switchDrawingSide() agrees. * @see switchDrawingSide() ***/ HRESULT WINAPI D3DProxyDevice::DrawPrimitive(D3DPRIMITIVETYPE PrimitiveType,UINT StartVertex,UINT PrimitiveCount) { #ifdef SHOW_CALLS OutputDebugString("called DrawPrimitive"); #endif m_spManagedShaderRegisters->ApplyAllDirty(m_currentRenderingSide); HRESULT result; if (SUCCEEDED(result = BaseDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount))) { if (switchDrawingSide()) BaseDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount); } return result; } [/code] I think that the basic premise that automatic mode uses is that it creates a second buffer for both front and back buffers, for left and right. So, instead of two buffers, we have four buffers instead. Then, like this code, when a given Draw call is hit, [olist] [.]it shifts the view left, [/.] [.]draws once in the left eye, [/.] [.]shifts the view to right, [/.] [.]swaps to right eye buffer, [/.] [.]then calls the draw again. [/.] [/olist] Since Draw calls cannot save state of any form, this doesn't hurt anything. So, the only complexity is wrapping every possible Draw call (lots of variants like DrawIndexed). I think. Not sure without trying it of course. BTW: I don't think the Rift SDK does any sort of automatic mode at all. Their basic premise is that games need to be built up with VR in mind, and don't like the idea of retrofitting old stuff.[/quote] I understand exactly what you mean and I did look through the code quite a bit before. Before I get to the actual topic I want to tell you what I've tried so far: CASE1. - Create a stack mechanism that contains the pointers to all the Draw Functions (and store the parameters for each function). - Before glSwapBuffers() (which shows the end of a frame and swaps the back & front buffers) RUN all the stored functions (from the stack) with the saved parameters. - In theory this should work except one MAJOR flaw...The content of the Vertex Array Objects, Vertex Buffer Objects etc will be changed during the execution of the draw loop the 1st time. - In practice I managed to get some very basic decent results and most of the times it will just crash (trying to access a vertex array object that is no longer present or so on) CASE2. Is exactly what you are saying. Make a list of ALL THE DRAW functions, hook them and DUPLICATE them for each eye. The only problem is that OpenGL has hundreds of draw call functions and variants... [url=http://www.opengl.org/sdk/docs/man/]http://www.opengl.org/sdk/docs/man/[/url] - just look for functions that start with gl (and ignore the shader ones) to see what I am talking about. And this is only the CORE without the _EXT and _ARB variants... (which are sometimes used if the engine uses something proprietary to a vendor like nvidia) I don't say is impossible, but it will definitely require a big TIME investment... 
I always hoped that somebody would jump the boat at some point and help me out with this aspect (and also with the ARB Shader support for legacy games) but so far no one said anything:( So back to the topic, yes I agree duplicating the draw call when they happen is the way to go and insures you generate the exact frame two times. I was looking for a somehow automatic way of doing this but apparently you cannot...(or I don't know about it). The D3D9 part of the wrapper is already created in the CORRECT format (with 4 buffers) + 3D texture + nvidia flag for stereo (that's why 3D Vision kicks in and so on). The eye latency doesn't come from the D3D9 rendering but rather from the OpenGL rendering. Is there that things needs to be fixed;)) I plan to prototype one duplications of the draw calls on a very simple app and see what results I can obtain;))
bo3b said:Here's the piece I was sort of looking for.

I looked at this a bit previously because in the very worst-case scenario, we can always create our own stereoscopic driver. It's not nearly as complicated as I originally thought, but of course I've taken some good beatings while getting 3Dmigoto running.

https://github.com/cybereality/Perception/blob/70a42b1ce9ebaa6d1ea2248ad2814f73f821bbdf/DxProxy/DxProxy/D3DProxyDevice.cpp

/**
* Applies all dirty shader registers, draws both stereo sides if switchDrawingSide() agrees.
* @see switchDrawingSide()
***/
HRESULT WINAPI D3DProxyDevice::DrawPrimitive(D3DPRIMITIVETYPE PrimitiveType,UINT StartVertex,UINT PrimitiveCount)
{
#ifdef SHOW_CALLS
OutputDebugString("called DrawPrimitive");
#endif
m_spManagedShaderRegisters->ApplyAllDirty(m_currentRenderingSide);

HRESULT result;
if (SUCCEEDED(result = BaseDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount))) {
if (switchDrawingSide())
BaseDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount);
}

return result;
}


I think that the basic premise that automatic mode uses is that it creates a second buffer for both front and back buffers, for left and right. So, instead of two buffers, we have four buffers instead.

Then, like this code, when a given Draw call is hit,
  1. it shifts the view left,
  2. draws once in the left eye,
  3. shifts the view to right,
  4. swaps to right eye buffer,
  5. then calls the draw again.

Since Draw calls cannot save state of any form, this doesn't hurt anything.

So, the only complexity is wrapping every possible Draw call (lots of variants like DrawIndexed). I think. Not sure without trying it of course.


BTW: I don't think the Rift SDK does any sort of automatic mode at all. Their basic premise is that games need to be built up with VR in mind, and don't like the idea of retrofitting old stuff.



I understand exactly what you mean and I did look through the code quite a bit before.
Before I get to the actual topic I want to tell you what I've tried so far:

CASE1.
- Create a stack mechanism that holds pointers to all the draw functions (and stores the parameters of each call).
- Before glSwapBuffers() (which marks the end of a frame and swaps the back & front buffers), RUN all the stored functions (from the stack) with the saved parameters (see the sketch below).
- In theory this should work, except for one MAJOR flaw... the contents of the Vertex Array Objects, Vertex Buffer Objects, etc. can be changed during the first execution of the draw loop.
- In practice I managed to get some basic but decent results, and most of the time it will just crash (trying to access a vertex array object that is no longer present, and so on).
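
Roughly what that record-and-replay idea looks like (a simplified sketch, not the actual wrapper code; real_glDrawArrays, the hook installation and switchToOtherEye() stand in for whatever detouring mechanism is really used):

#include <windows.h>      // APIENTRY
#include <GL/gl.h>
#include <vector>
#include <functional>

// Pointer to the real driver entry point, filled in by the hooking code (assumed).
static void (APIENTRY *real_glDrawArrays)(GLenum, GLint, GLsizei) = nullptr;

// Every draw issued during the frame is also recorded here together with its parameters.
static std::vector<std::function<void()>> g_frameDraws;

// The replacement the game ends up calling instead of glDrawArrays.
void APIENTRY hooked_glDrawArrays(GLenum mode, GLint first, GLsizei count)
{
    real_glDrawArrays(mode, first, count);                                   // draw for the current eye
    g_frameDraws.push_back([=] { real_glDrawArrays(mode, first, count); });  // remember it for the replay
}

// Called just before the buffer swap: replay the whole frame for the other eye.
// This is exactly where the MAJOR flaw above bites: any VAO/VBO contents changed
// (or objects deleted) since the original call make the replay draw garbage, or crash.
void ReplayFrameForOtherEye()
{
    // switchToOtherEye();   // bind the second eye's buffers / projection (assumed helper)
    for (auto &draw : g_frameDraws)
        draw();
    g_frameDraws.clear();
}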

CASE2.
This is exactly what you are saying: make a list of ALL THE DRAW functions, hook them and DUPLICATE them for each eye (a minimal sketch follows).
The only problem is that OpenGL has hundreds of draw call functions and variants...
http://www.opengl.org/sdk/docs/man/ - just look for the functions that start with gl (and ignore the shader ones) to see what I am talking about. And this is only the CORE, without the _EXT and _ARB variants... (which are sometimes used if the engine relies on something proprietary to a vendor like Nvidia)
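
For comparison, a minimal sketch of the CASE2 per-call duplication for a single hooked entry point. leftEyeFBO, rightEyeFBO, SetEyeProjection() and BindEyeFramebuffer() are placeholder helpers, not real names from the wrapper, and the real thing would need an equivalent wrapper for every draw variant the engine actually uses:

#include <windows.h>   // APIENTRY
#include <GL/gl.h>

typedef void (APIENTRY *PFN_glDrawElements)(GLenum, GLsizei, GLenum, const void *);
static PFN_glDrawElements real_glDrawElements = nullptr;   // filled in by the hook (assumed)

// Hypothetical per-eye state owned by the wrapper.
extern GLuint leftEyeFBO, rightEyeFBO;
extern void SetEyeProjection(int eye);        // uploads the view shifted for that eye (assumed helper)
extern void BindEyeFramebuffer(GLuint fbo);   // wraps glBindFramebuffer (assumed helper)

void APIENTRY hooked_glDrawElements(GLenum mode, GLsizei count, GLenum type, const void *indices)
{
    // Left eye: shift the view, draw.
    BindEyeFramebuffer(leftEyeFBO);
    SetEyeProjection(0);
    real_glDrawElements(mode, count, type, indices);

    // Right eye: the exact same call and state, just the other buffer and projection.
    BindEyeFramebuffer(rightEyeFBO);
    SetEyeProjection(1);
    real_glDrawElements(mode, count, type, indices);
}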

I'm not saying it's impossible, but it will definitely require a big TIME investment...
I always hoped that somebody would jump on board at some point and help me out with this aspect (and also with the ARB shader support for legacy games), but so far no one has said anything :(

So back to the topic: yes, I agree that duplicating the draw calls when they happen is the way to go, and it ensures you generate the exact same frame twice. I was looking for a somewhat automatic way of doing this, but apparently you cannot... (or I don't know about it).
The D3D9 part of the wrapper is already created in the CORRECT format (with 4 buffers) + 3D texture + the Nvidia flag for stereo (that's why 3D Vision kicks in and so on).

The eye latency doesn't come from the D3D9 rendering but rather from the OpenGL rendering. That's where things need to be fixed ;))

I plan to prototype the duplication of the draw calls on a very simple app and see what results I can obtain ;))

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/17/2014 10:00 AM   
Well, keeping in mind that I don't know jack about OpenGL....


Looking at the documentation you referenced there, it doesn't look to be that large a list to me. There is a lot of gl* stuff there that you would not want or need to wrap, especially all the getters glGet*, none of the bind stuff glBind*, no glProgram*, no glSampler*, no glTex*, that sort of stuff.

A lot of that stuff is just setting up the pipeline to draw, by setting all the data required. The pipeline would still be valid for a 2nd buffer.

Only the actual draw or buffer erase instructions would be necessary to wrap. Also probably more to the point, when I watch DX11 games go by with all the calls wrapped, there is always a small subset of calls that are used by everybody, and a whole list of others that are never used. In the spec, but not in the wild if you will.

So, it seems like it might be worth some experiments to wrap just a few select calls that you know for sure are used, and do the double calls on that small subset.

If there wind up still being a lot of calls to wrap, it might make sense to use regular expressions or write a small text parser to auto-generate some code around the calls that is the same. Sort of like that example above, that sequence is duplicated for all the ::Draw* calls. The only thing that changes is the function name and the input parameters.
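
As a rough illustration of the auto-generation idea: an X-macro (or a small script that emits the same thing) can stamp out the identical per-eye body for each draw variant. Just a sketch with placeholder names (the real_* pointers and per-eye helpers are assumed, same as the OpenGL sketch earlier in the thread), and the table would list whichever calls the game actually uses:

// One entry per draw variant: name, parameter list, argument list.
#define DRAW_CALLS \
    X(glDrawArrays,   (GLenum mode, GLint first, GLsizei count),                      (mode, first, count)) \
    X(glDrawElements, (GLenum mode, GLsizei count, GLenum type, const void *indices), (mode, count, type, indices))

// Stamp out an identical "draw once per eye" wrapper for each entry in the table.
#define X(name, params, args)                                                    \
    void APIENTRY hooked_##name params                                            \
    {                                                                              \
        BindEyeFramebuffer(leftEyeFBO);  SetEyeProjection(0); real_##name args;    \
        BindEyeFramebuffer(rightEyeFBO); SetEyeProjection(1); real_##name args;    \
    }
DRAW_CALLS
#undef X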

For what it's worth, in DX11 there are only about 6 ::Draw* call variants, of which I've only ever seen two called.


Don't get me wrong though, none of this stuff is that easy to do. I'm not suggesting it's not a ton of work, regardless.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 09/17/2014 01:54 PM   
[quote="bo3b"]Well, keeping in mind that I don't know jack about OpenGL.... Looking at the documentation you referenced there, it doesn't look to be that large a list to me. There is a lot of gl* stuff there that you would not want or need to wrap, especially all the getters glGet*, none of the bind stuff glBind*, no glProgram*, no glSampler*, no glTex*, that sort of stuff. A lot of that stuff is just setting up the pipeline to draw, by setting all the data required. The pipeline would still be valid for a 2nd buffer. Only the actual draw or buffer erase instructions would be necessary to wrap. Also probably more to the point, when I watch DX11 games go by with all the calls wrapped, there is always a small subset of calls that are used by everybody, and a whole list of others that are never used. In the spec, but not in the wild if you will. So, it seems like it might be worth some experiments to wrap just a few select calls that you know for sure are used, and do the double calls on that small subset. If there wind up still being a lot of calls to wrap, it might make sense to use regular expressions or write a small text parser to auto-generate some code around the calls that is the same. Sort of like that example above, that sequence is duplicated for all the ::Draw* calls. The only thing that changes is the function name and the input parameters. For what it's worth, in DX11 there are only about 6 ::Draw* call variants, of which I've only ever seen two called. Don't get me wrong though, none of this stuff is that easy to do. I'm not suggesting it's not a ton of work, regardless.[/quote] Just discard what I said in the previous post:)) You will soon find out what I talk about:)) I just missed one obvious LITTLE thing.... God I swear to God... fixing the HARD part is actually easier than fixing the EASY part....I sometimes tend to COMPLICATE things so much that from a simple lock design I build an Enterprise StarShip:)))
bo3b said:Well, keeping in mind that I don't know jack about OpenGL....


Looking at the documentation you referenced there, it doesn't look to be that large a list to me. There is a lot of gl* stuff there that you would not want or need to wrap, especially all the getters glGet*, none of the bind stuff glBind*, no glProgram*, no glSampler*, no glTex*, that sort of stuff.

A lot of that stuff is just setting up the pipeline to draw, by setting all the data required. The pipeline would still be valid for a 2nd buffer.

Only the actual draw or buffer erase instructions would be necessary to wrap. Also probably more to the point, when I watch DX11 games go by with all the calls wrapped, there is always a small subset of calls that are used by everybody, and a whole list of others that are never used. In the spec, but not in the wild if you will.

So, it seems like it might be worth some experiments to wrap just a few select calls that you know for sure are used, and do the double calls on that small subset.

If there wind up still being a lot of calls to wrap, it might make sense to use regular expressions or write a small text parser to auto-generate some code around the calls that is the same. Sort of like that example above, that sequence is duplicated for all the ::Draw* calls. The only thing that changes is the function name and the input parameters.

For what it's worth, in DX11 there are only about 6 ::Draw* call variants, of which I've only ever seen two called.


Don't get me wrong though, none of this stuff is that easy to do. I'm not suggesting it's not a ton of work, regardless.



Just discard what I said in the previous post :)) You will soon find out what I'm talking about :))
I just missed one obvious LITTLE thing....
I swear to God... fixing the HARD part is actually easier than fixing the EASY part.... I sometimes tend to COMPLICATE things so much that from a simple lock design I end up building the Starship Enterprise :)))

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/17/2014 06:11 PM   
[NEW ALPHA STAGE]

So back to the actual topic.

I managed to completely fix the eye sync by duplicating the draw calls (just as described above).
The only game that works with the wrapper is Broken Age. I chose this game since it has a simplistic engine with just a few draw calls and doesn't deal with multiple frame objects and frame buffers.
So no ID5 engine JUST yet... The idea is that I will update the wrapper to support all the current game fixes that I released and future ones as well:).

So, if anyone has Broken Age and wants to give it a try here is the ALPHA Stage of the wrapper.
One thing to notice (besides the eyes being in sync) is the framerate, which most of the time sticks around 60fps == the game's fps lock, which is what you normally expect from 3D Vision.

If anyone tests this let me know what you find.
PS: install the original wrapper from http://3dsurroundgaming.com/OpenGL3DVisionGames.html
Then download this archive and overwrite the installed files with the ones from it: http://3dsurroundgaming.com/OGL3DVision/Releases/Alpha_BrokenAge.rar

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/17/2014 06:50 PM   
Ohhh, this sounds good... Is Minecraft the next least complicated one before iD Tech5....?

Rig: Intel i7-8700K @4.7GHz, 16Gb Ram, SSD, GTX 1080Ti, Win10x64, Asus VG278

Posted 09/17/2014 07:03 PM   
[quote="helifax"][NEW ALPHA STAGE] So back to the actual topic. I managed to remove the eye-sync completely by duplicating the draw calls. (Just as described above). The only game that works with the wrapper is Broken Age. I chose this game since it has a simplistic engine with just a few draw calls and doesn't deal with multiple frame objects and frame buffers. So no ID5 engine JUST yet... The idea is that I will update the wrapper to support all the current game fixes that I released and future ones as well:). So, if anyone has Broken Age and wants to give it a try here is the ALPHA Stage of the wrapper. One thing to notice (except that the eyes are in synch is the framerate which most of the times sticks around 60fps == game fps lock, which is what you normally expect from 3D Vision). If anyone tests this let me know what you find. PS: install the original wrapper from http://3dsurroundgaming.com/OpenGL3DVisionGames.html Download this archive and overwrite the files from the archive: http://3dsurroundgaming.com/OGL3DVision/Releases/Alpha_BrokenAge.rar[/quote] OMG!!! helifax sounds promising
helifax said:[NEW ALPHA STAGE]

So back to the actual topic.

I managed to completely fix the eye sync by duplicating the draw calls (just as described above).
The only game that works with the wrapper is Broken Age. I chose this game since it has a simplistic engine with just a few draw calls and doesn't deal with multiple frame objects and frame buffers.
So no ID5 engine JUST yet... The idea is that I will update the wrapper to support all the current game fixes that I released and future ones as well:).

So, if anyone has Broken Age and wants to give it a try here is the ALPHA Stage of the wrapper.
One thing to notice (besides the eyes being in sync) is the framerate, which most of the time sticks around 60fps == the game's fps lock, which is what you normally expect from 3D Vision.

If anyone tests this let me know what you find.
PS: install the original wrapper from http://3dsurroundgaming.com/OpenGL3DVisionGames.html

Then download this archive and overwrite the installed files with the ones from it: http://3dsurroundgaming.com/OGL3DVision/Releases/Alpha_BrokenAge.rar



OMG!!! helifax, this sounds promising

i7 4970k@4.5Ghz, SLI GTX1080Ti Aorus Gigabyte Xtreme, 16GB G Skill 2400hrz, 3*PG258Q in 3D surround.

Posted 09/17/2014 07:25 PM   
[quote="mike_ar69"]Ohhh, this sounds good... Is Minecraft the next least complicated one before iD Tech5....?[/quote] Hmm I don't really know what to say:)) Honestly:)) At the moment I am not wrapping all the variants nor all the drawing functions just yet;)) I remember MineCraft uses an ARB extension variant for the shaders. It might be possible it uses ARB variants for the draw functions as well... But this is not the problem... Both Minecraft and ID5 use multiple frameBuffer objects and buffers so I need to duplicate these as well I guess... and this is the next big step;)) I need to think on how I am going to tackle this (code-wise) so is optimum and hassle-free and easy maintainable;)) Once this is done basically supporting different extensions is not hard. So, if an ID5 game or minecraft will work it will be pretty much an universal thingy;)) But rest assured they are all UP on the priority list;)) I also want to see Minecraft working with that Seus Pack in perfect 3D Vision heehehe ^_^. Problem is to find the time to actually work on it (today I was @ home sick with a damn tooth infection so I had time to dig up/in on this ^_^ ).
mike_ar69 said:Ohhh, this sounds good... Is Minecraft the next least complicated one before iD Tech5....?


Hmm, I don't really know what to say :)) Honestly :)) At the moment I am not wrapping all the variants nor all the drawing functions just yet ;)) I remember Minecraft uses an ARB extension variant for the shaders. It might be possible that it uses ARB variants for the draw functions as well...
But this is not the problem...
Both Minecraft and ID5 use multiple framebuffer objects and buffers, so I need to duplicate these as well, I guess... and this is the next big step ;)) I need to think about how I am going to tackle this (code-wise) so it is optimal, hassle-free and easily maintainable ;))
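
Roughly what I have in mind for that framebuffer duplication (just a sketch, not actual wrapper code yet; the attachment duplication and the right-eye toggle are left out / assumed): intercept FBO creation, create a right-eye twin for each game FBO, and transparently redirect binds to the twin during the right-eye pass.

#include <windows.h>   // APIENTRY
#include <GL/gl.h>
#include <unordered_map>

typedef void (APIENTRY *PFN_glGenFramebuffers)(GLsizei, GLuint *);
typedef void (APIENTRY *PFN_glBindFramebuffer)(GLenum, GLuint);
static PFN_glGenFramebuffers real_glGenFramebuffers = nullptr;   // filled in by the hook (assumed)
static PFN_glBindFramebuffer real_glBindFramebuffer = nullptr;

static std::unordered_map<GLuint, GLuint> g_rightEyeTwin;   // game FBO -> right-eye copy
static bool g_renderingRightEye = false;                     // toggled by the per-eye draw logic (assumed)

void APIENTRY hooked_glGenFramebuffers(GLsizei n, GLuint *ids)
{
    real_glGenFramebuffers(n, ids);
    for (GLsizei i = 0; i < n; ++i) {
        GLuint twin = 0;
        real_glGenFramebuffers(1, &twin);    // the twin still needs matching attachments (not shown)
        g_rightEyeTwin[ids[i]] = twin;
    }
}

void APIENTRY hooked_glBindFramebuffer(GLenum target, GLuint fbo)
{
    // During the right-eye pass, transparently bind the twin instead of the game's FBO.
    if (g_renderingRightEye && g_rightEyeTwin.count(fbo))
        fbo = g_rightEyeTwin[fbo];
    real_glBindFramebuffer(target, fbo);
}
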
Once this is done, supporting different extensions is basically not hard. So, if an ID5 game or Minecraft works, it will pretty much be a universal thingy ;))
But rest assured, they are all UP on the priority list ;)) I also want to see Minecraft working with that SEUS pack in perfect 3D Vision hehehe ^_^.
The problem is finding the time to actually work on it (today I was at home sick with a damn tooth infection, so I had time to dig into this ^_^).

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/17/2014 07:34 PM   
Good News: I managed to make ID5 Engine (Rage) render in perfect 3D Vision. (Since I don't want to pay hospital bills ^_^)
Bad News: I get 1 frame per second :)))))) Need to do some profiling to see what is what....

Edit: Well, not the whole game just yet... just some parts of it are rendering properly... the rest is still black hehehe

1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc


My website with my fixes and OpenGL to 3D Vision wrapper:
http://3dsurroundgaming.com

(If you like some of the stuff that I've done and want to donate something, you can do it with PayPal at tavyhome@gmail.com)

Posted 09/17/2014 09:58 PM   