Basically it is visible to some degree in every game. In FPS-type games like Amnesia, or fast-paced games in general, it is more noticeable. The tearing can really rip your eyes apart :|
I am still investigating. However, in all the papers that I read (including the NVIDIA presentations, the NVIDIA Best Practices guide, etc.) the way to do it is to DUPLICATE the draw calls. They don't say exactly how they do it, except that it is done by driver heuristics.
So I guess they are doing exactly what I started to do: create a database in which you store all the draw calls and all the arguments passed to each one (rough sketch after the outline below).
Left Eye:
- Activate Left Buffer
- Let the game call all the draw calls
- Capture in the database all the draw calls
Swap buffers 1st time:
- Activate Right Buffer
- Render again using the stored callbacks + argv[]
- Once finished clear the database
- JUMP to Swap buffers
Swap buffers 2nd time:
- Do the actual SWAP (using the NVIDIA DX9 mechanism)
- Activate Left buffer and resume.
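Roughly, the database part could look something like this (just a sketch with made-up names like DrawCallDB and Hooked_glDrawElements; the real thing also has to capture state changes, bound objects and uniforms, and deep-copy any client-side pointers, not just the draw calls themselves):
[code]
// Sketch only: record every hooked draw call as a replayable closure.
#include <windows.h>
#include <GL/gl.h>
#include <functional>
#include <utility>
#include <vector>

class DrawCallDB {
public:
    void Record(std::function<void()> call) { calls_.push_back(std::move(call)); }
    void Replay() const { for (const auto& c : calls_) c(); }  // re-issue everything for the right eye
    void Clear() { calls_.clear(); }
private:
    std::vector<std::function<void()>> calls_;
};

static DrawCallDB g_drawDB;

// Example hook: issue the call for the left eye and remember it for the right eye.
void APIENTRY Hooked_glDrawElements(GLenum mode, GLsizei count, GLenum type, const void* indices)
{
    glDrawElements(mode, count, type, indices);                           // left eye, now
    g_drawDB.Record([=] { glDrawElements(mode, count, type, indices); }); // right eye, later
}

// Runs on the 1st SwapBuffers call from the game.
void OnFirstSwap()
{
    // activate the right buffer here (e.g. glDrawBuffer(GL_BACK_RIGHT) on a
    // quad-buffered context, or whatever stereo mechanism is used), then:
    g_drawDB.Replay();
    g_drawDB.Clear();
}
[/code]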
The problem with this approach is that I have to RECORD/HOOK/PUT into the draw-call database all the draw calls, including the standard OpenGL ones plus the EXT, ARB and NV-specific variants, and there are quite a few of them (here I still need to investigate how to do that automatically, because doing it manually... I will grow old).
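To avoid writing hundreds of wrappers by hand, one option could be to keep a single table of entry-point names and hand out hooks from an intercepted wglGetProcAddress (core GL 1.1 calls are exported straight from opengl32.dll, so those need a proxy DLL or import patching instead). Hypothetical sketch, with only one entry point shown:
[code]
#include <windows.h>
#include <GL/gl.h>
#include <cstring>

// glDrawRangeElements is newer than GL 1.1, so on Windows the game must fetch
// it through wglGetProcAddress -- which is exactly where it can be swapped.
typedef void (APIENTRY* PFN_glDrawRangeElements)(GLenum mode, GLuint start, GLuint end,
                                                 GLsizei count, GLenum type, const void* indices);
static PFN_glDrawRangeElements real_glDrawRangeElements = nullptr;

static void APIENTRY Hooked_glDrawRangeElements(GLenum mode, GLuint start, GLuint end,
                                                GLsizei count, GLenum type, const void* indices)
{
    // record into the draw-call database here, then forward to the driver
    real_glDrawRangeElements(mode, start, end, count, type, indices);
}

// Exported by the wrapper in place of the real wglGetProcAddress. In practice the
// name->hook mapping would be one big table generated from the Khronos gl.xml
// registry (core + EXT + ARB + NV draw variants), not typed out by hand.
PROC WINAPI Hooked_wglGetProcAddress(LPCSTR name)
{
    PROC real = wglGetProcAddress(name);
    if (real && std::strcmp(name, "glDrawRangeElements") == 0) {
        real_glDrawRangeElements = reinterpret_cast<PFN_glDrawRangeElements>(real);
        return reinterpret_cast<PROC>(&Hooked_glDrawRangeElements);
    }
    return real;
}
[/code]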
Freezing the time between frame render calls is another idea, but based on some testing it still renders 2 consecutive frames, with an even BIGGER difference between them. Another downside is that it is game specific (you have to find the internal tick delta time and modify that).
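For reference, the general shape of the time-freeze idea would be something like the sketch below (a hypothetical detour on QueryPerformanceCounter; installing the detour, and handling games that use timeGetTime/GetTickCount or their own tick counter, is the game-specific part that is not shown):
[code]
#include <windows.h>

static bool          g_freezeTime  = false; // true while re-rendering for the right eye
static LARGE_INTEGER g_frozenCount = {};    // last timestamp handed out for the left eye

static BOOL (WINAPI* Real_QueryPerformanceCounter)(LARGE_INTEGER*) = QueryPerformanceCounter;

BOOL WINAPI Hooked_QueryPerformanceCounter(LARGE_INTEGER* out)
{
    if (g_freezeTime) {        // right eye: report the left eye's time again -> delta of zero
        *out = g_frozenCount;
        return TRUE;
    }
    BOOL ok = Real_QueryPerformanceCounter(out);
    g_frozenCount = *out;      // remember the latest real time for the next right-eye pass
    return ok;
}
[/code]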
For now I have started with the 1st approach and have been able to get some results.
I'll keep you posted.
1x Palit RTX 2080Ti Pro Gaming OC(watercooled and overclocked to hell)
3x 3D Vision Ready Asus VG278HE monitors (5760x1080).
Intel i9 9900K (overclocked to 5.3 and watercooled ofc).
Asus Maximus XI Hero Mobo.
16 GB Team Group T-Force Dark Pro DDR4 @ 3600.
Lots of Disks:
- Raid 0 - 256GB Sandisk Extreme SSD.
- Raid 0 - WD Black - 2TB.
- SanDisk SSD PLUS 480 GB.
- Intel 760p 256GB M.2 PCIe NVMe SSD.
Creative Sound Blaster Z.
Windows 10 x64 Pro.
etc
Maybe there is a way to render the two viewports side by side in one big single frame, and split it afterwards into the left and right buffers?
Just an idea. I have no knowledge about those things... :(
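(For reference, the mechanical "split" step of that idea is simple in GL; the hard part is still getting the game to draw both viewports in one frame. A rough sketch, assuming both eyes were rendered into a double-width FBO, a quad-buffered stereo context is available, and GLEW provides the GL 3.0 entry points:)
[code]
#include <GL/glew.h>

void SplitSideBySide(GLuint sbsFbo, int eyeWidth, int eyeHeight)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sbsFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // default (window) framebuffer

    glDrawBuffer(GL_BACK_LEFT);                  // left half -> left back buffer
    glBlitFramebuffer(0, 0, eyeWidth, eyeHeight,
                      0, 0, eyeWidth, eyeHeight,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glDrawBuffer(GL_BACK_RIGHT);                 // right half -> right back buffer
    glBlitFramebuffer(eyeWidth, 0, eyeWidth * 2, eyeHeight,
                      0, 0, eyeWidth, eyeHeight,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
[/code]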
Desktop-PC
i7 870 @ 3.8GHz + MSI GTX1070 Gaming X + 16GB RAM + Win10 64Bit Home + AW2310+3D-Vision
I'm not sure I understand all the details. Are you trying to replace what the 3D Vision driver does, or augment it? If augment, it seems like 3D Vision will still do the vertex correction for stereo, and do a double pass of drawing for the 2nd eye. In the formula, it does the +/- on the .x parameter.
If you are trying to replace what it does, you might be able to use 3Dmigoto, because it has the ability to disable automatic, and also add that eye-correction code to every Vertex Shader seen.
Lastly, if you are trying to replace the 3D driver, you might look at Vireio. It's open source, and while it's VR centric now, you should be able to leverage its new ability to draw two eyes at the same moment.
I wouldn't expect you to need to record all parameters, but maybe that's the only spot you can hook.
I would expect it to be more like:
[code]patched drawcall (input params):
Left eye: - on .x
Set left buffer
Draw(input params)
Right eye: + on .x
Set right buffer
Draw(input params)
[/code]
(Page 15 of: [url]http://developer.download.nvidia.com/whitepapers/2010/3D_Vision_Best_Practices_Guide.pdf[/url])
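In concrete terms the per-draw duplication could look roughly like this (a sketch only, with hypothetical names; it assumes the wrapper injects an eye-sign uniform into every vertex shader, used there as clip.x += eyeSign * separation * (clip.w - convergence), glosses over tracking the uniform location per program, and uses the quad-buffer targets just to make the "set left/right buffer" step concrete):
[code]
#include <GL/glew.h>

// Hypothetical: location of the "eyeSign" uniform injected into every vertex shader.
static GLint g_eyeSignLocation = -1;

void PatchedDrawElements(GLenum mode, GLsizei count, GLenum type, const void* indices)
{
    // Left eye: negative offset on clip-space .x
    glDrawBuffer(GL_BACK_LEFT);
    glUniform1f(g_eyeSignLocation, -1.0f);
    glDrawElements(mode, count, type, indices);

    // Right eye: positive offset on clip-space .x
    glDrawBuffer(GL_BACK_RIGHT);
    glUniform1f(g_eyeSignLocation, +1.0f);
    glDrawElements(mode, count, type, indices);
}
[/code]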
[quote="BazzaLB"]I run dolphin in stereoscopc 3D so why can't you? DK Country returns and many others look marvellous in 3D.[/quote]
I'm not using an old version :P
[quote="bo3b"]I'm not sure I understand all the details. Are you trying to replace what the 3D Vision driver does, or augment it? If augment, it seems like 3D Vision will still do the vertex correction for stereo, and do a double pass of drawing for the 2nd eye. In the formula, it does the +/- on the .x parameter.
If you are trying to replace what it does, you might be able to use 3Dmigoto, because it has the ability to disable automatic, and also add that eye-correction code to every Vertex Shader seen.
Lastly, if you are trying to replace the 3D driver, you might look at Vireio. It's open source, and while it's VR centric now, you should be able to leverage its new ability to draw two eyes at the same moment.
I wouldn't expect you to need to record all parameters, but maybe that's the only spot you can hook.
I would expect it to be more like:
[code]patched drawcall (input params):
Left eye: - on .x
Set left buffer
Draw(input params)
Right eye: + on .x
Set right buffer
Draw(input params)
[/code]
(Page 15 of: [url]http://developer.download.nvidia.com/whitepapers/2010/3D_Vision_Best_Practices_Guide.pdf[/url])[/quote]
Hey,
I apply the stereo correction in every vertex. That part works as expected. The problem I am facing is replicating the same frame in the right eye.
Basically, I need to render the same game state (time) in both eyes: once in the left and once in the right.
Currently I am rendering left: frame1, right: frame2, left: frame3, right: frame4...
What I need to do is: left: frame1, right: frame1, left: frame2, right: frame2, and so on.
The way I do it now gives proper stereoscopy, but on fast movement the eyes get out of sync. The lower the framerate, the more visible it gets.
I was able to replicate all the draw calls; however, that alone doesn't work, since the program thinks it has finished rendering the current frame and waits to start the next one.
If you look in the paper, NVIDIA says to duplicate the draw calls and gives a small pseudocode example; however, 3D Vision Automatic already does that (I expect they do some kind of hooking in the driver or something).
The other approach would be to find the internal tick clock of the application: render left, save the delta time, activate the right buffer, set the delta back to the value from the left eye, render the same scene again into the right buffer, and then resume the normal course.
Another approach I tried was to save all the uniform values (like MVP matrices and positions) that are sent to the shaders between the left and right rendering. This worked to some extent, but I am also sending my own uniforms to perform the stereoscopy, and it is hard to say exactly which uniforms I need to alter and which not...
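The uniform-caching idea could be sketched like this (made-up names, only the mat4 path shown, and it assumes the wrapper can tell which uniform locations are its own stereo ones):
[code]
#include <GL/glew.h>
#include <array>
#include <map>
#include <set>
#include <utility>

using UniformKey = std::pair<GLuint, GLint>;                  // (program, location)
static std::map<UniformKey, std::array<GLfloat, 16>> g_mat4Cache;
static std::set<UniformKey> g_stereoUniforms;                 // locations owned by the wrapper itself

void Hooked_glUniformMatrix4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)
{
    GLint program = 0;
    glGetIntegerv(GL_CURRENT_PROGRAM, &program);

    UniformKey key{ static_cast<GLuint>(program), location };
    if (count == 1 && transpose == GL_FALSE && !g_stereoUniforms.count(key)) {
        std::array<GLfloat, 16> m;                            // cache the game's own matrices only
        for (int i = 0; i < 16; ++i) m[i] = value[i];
        g_mat4Cache[key] = m;
    }
    glUniformMatrix4fv(location, count, transpose, value);    // forward unchanged
}

// Just before replaying the draw calls for the right eye.
void ReapplyCachedUniforms()
{
    for (const auto& [key, m] : g_mat4Cache) {
        glUseProgram(key.first);
        glUniformMatrix4fv(key.second, 1, GL_FALSE, m.data());
    }
}
[/code]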
I'll take a look at Vireio ;)) thanks for the tip :D
[quote="Shinra358"][quote="BazzaLB"]I run dolphin in stereoscopc 3D so why can't you? DK Country returns and many others look marvellous in 3D.[/quote]
I'm not using an old version :P[/quote]
I'm using the latest version (4.0.2) so I'm still confused tbh. Oh well, works for me.there's a thread about it somewhere around here. Just to clarify, I'm using Chiri's version for a couple of games but latest version for everything else with 3D running fine via 3DTV Play.
That version is old. Chiri's version is still broken, just like the original, as far as improper shutdowns and whatnot. The latest version is dolphin-master-4.0-862, not the one they have on the main site. The newest version no longer has DX9, only OpenGL and DX11. OpenCL is also gone.
Model: Clevo P570WM Laptop
GPU: GeForce GTX 980M ~8GB GDDR5
CPU: Intel Core i7-4960X CPU +4.2GHz (12 CPUs)
Memory: 32GB Corsair Vengeance DDR3L 1600MHz, 4x8gb
OS: Microsoft Windows 7 Ultimate
[quote="Shinra358"]That version is old. Chiri's version is still broken just like the original version as far as improper shutdowns and what not. The latest version is dolphin-master-4.0-862. Not the one they have on the main site. Newest version no longer has dx9. Only opengl and dx11. Also opencl is gone.[/quote]
Ah, right you are. My apologies :)
Right,
Time for [b]BROKEN AGE[/b] - 3D Vision Patch release !!!
[url=http://www.iforce.co.nz/View.aspx?i=sqnjlez4.vpd.png][img]http://iforce.co.nz/i/sqnjlez4.vpd.png[/img][/url]
Fix at: [url=http://3dsurroundgaming.com/Xmas/3DVisionFixes/BrokenAge-3D%20Vision.rar]Broken Age - 3D Vision Patch[/url]
Please read the Readme.txt that is in the archive :)
Like I said there, this fix uses some code injection (in order to maximize the rendering efficiency), so do not even attempt to use it on another game, as it will crash it :)
Hope you enjoy the game and the fix ;))
Wow, that's amazing! Did you solve the timing issue?
I'm still holding out for Act 2 before playing, but I'm so happy to see I'll be able to play it in 3d!
Thank you.
There appears to be something weird when it runs on a single-GPU machine in fullscreen: a big "Out of memory" error appears and 3D Vision doesn't kick in, although it works perfectly fine on SLI configurations...
I am investigating this. I managed to solve the sync issue to some degree by playing with some internal timers of the app, so as long as your machine can maintain a good FPS you will see nothing wrong. Then again, people who use 3D Vision have good GPUs, and the game is not really GPU hungry :)
Another thing I forgot to mention: in NVIDIA Inspector, add BrokenAge.exe to a profile like World of Warcraft or Chrome :)
I can't wait to play some of those OpenGL games that I passed on due to the lack of Stereoscopic support.