3D Vision CPU Bottleneck: Gathering Information thread.
Got an awesome update on performance in Call of Pripyat: 56fps in the scene where I got under 30fps with my 2500K Sandy Bridge... if only I could get the fix properly working with Windows 10.

Does anyone know what causes the profile not to save? When hitting "save settings" you get a "saved" message, but at the same time the values revert to defaults.

CoreX9 custom watercooling (Volkswagen Polo radiator)
i7-8700K @ stock
Titan X Pascal with shitty stock cooler
Win7/10
Video: Passive 3D, full HD 3D @ 60Hz/channel, Denon X1200W, 2x HC5, GeoBox 501 -> eeColor boxes -> polarizers/Omega filters, custom-made silver screen
Occupation: Entrepreneur. Painting/surfacing/construction.
Interests/skills:
3D gaming, 3D movies, 3D printing, drums, bass and guitar.
Suomi - FINLAND - perkele

Posted 12/12/2017 11:11 PM   
Set "StereoProfile" to "Yes" in the game profile, or "StereoProfile = 1" in "d3dx.ini". That's what makes games load non default values.
Set "StereoProfile" to "Yes" in the game profile, or "StereoProfile = 1" in "d3dx.ini".

That's what makes games load non-default values.
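
For reference, the relevant d3dx.ini fragment might look like this (a sketch only: the [Profile] section name and exact syntax may differ between 3Dmigoto versions, so check your own file):

[code]
; d3dx.ini (3Dmigoto)
; Section name below is an assumption -- verify against your 3Dmigoto version.
[Profile]
; Tell the driver to load the game's stereo profile so saved
; convergence/separation values are used instead of defaults.
StereoProfile = 1
[/code]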

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

Posted 12/13/2017 05:42 AM   
[quote="masterotaku"]Set "StereoProfile" to "Yes" in the game profile, or "StereoProfile = 1" in "d3dx.ini". That's what makes games load non default values.[/quote] y-es it worked. thanks masterotaku :) Do you know what causes the other eye is darker like clipped black error with call of pribyat fix? why in the inspector there is like 3 or 4 different profiles for stalker games, once you delete one of them you cant import it back as the inspector says you can´t import this profile as some other profile already has the exe associated.
masterotaku said: Set "StereoProfile" to "Yes" in the game profile, or "StereoProfile = 1" in "d3dx.ini". That's what makes games load non-default values.


Yes, it worked. Thanks, masterotaku :)

Do you know what causes the "other eye darker, like clipped blacks" error with the Call of Pripyat fix?

And why are there 3 or 4 different profiles for the STALKER games in Inspector? Once you delete one of them, you can't import it back: Inspector says you can't import the profile because another profile already has the exe associated.

CoreX9 custom watercooling (Volkswagen Polo radiator)
i7-8700K @ stock
Titan X Pascal with shitty stock cooler
Win7/10
Video: Passive 3D, full HD 3D @ 60Hz/channel, Denon X1200W, 2x HC5, GeoBox 501 -> eeColor boxes -> polarizers/Omega filters, custom-made silver screen
Occupation: Entrepreneur. Painting/surfacing/construction.
Interests/skills:
3D gaming, 3D movies, 3D printing, drums, bass and guitar.
Suomi - FINLAND - perkele

Posted 12/13/2017 10:55 AM   
Update from our good man Ray:

Hi Shahzad,

Sorry for the delay. I did get an update from the development manager and I will try to convey his message as best I can. First of all, he wants to clarify that the recent fix wasn't meant to address all issues; in fact, as you know, it was meant to cover a specific case. I now have more details on the recent fix, so I thought it would be helpful to share some details of what we actually fixed.

In the case of SLI with stereo enabled, we do cross-GPU data transfers. With DSR enabled (say we are rendering at 4K, but scan-out happens only at 1080p), before fixing the issue we were transferring data at the rendering resolution (i.e. 4K), even though we finally scan out at the actual display resolution (i.e. 1080p). We fixed the issue: instead of transferring data at the full rendering resolution, we now transfer only at the scan-out resolution, by doing a scale-down operation before the transfer. This fix wasn't meant as a fix for all scenarios. I should have made this clearer when I informed you the bug was fixed; it wasn't apparent to me at that time, so sorry for the confusion.

According to development, the recent fix is what we consider low-hanging fruit: something we realized could be improved quickly after analyzing the data. While we believe we can offer more improvement in the future, this problem is unfortunately not a simple bug fix. We need a considerable amount of effort to go through the complete stereo code, and also outside of the stereo code, to understand the reasons why CPU/GPU usage is low. There are valid reasons why CPU/GPU usage becomes lower (like extra syncs which are required only in the stereo case). Before concluding whether we can or cannot improve further, we will need to analyze and list all the reasons for this reduction in CPU/GPU usage, and whether there is any possibility to remove or reduce them, so that we can improve the FPS.

Having said that, the stereo team is working against various priorities at the moment, so they may not get to this right away. This new effort is being tracked in a separate bug that will need to be prioritized against other stereo work. While this is not something they can start immediately, the development manager assures me he will look into it based on the prioritization of this and their other work. I realize this doesn't offer much in terms of schedule, but I don't think we will know more until development can actually start the effort of analyzing the stereo code.


Best regards,
Ray
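
To put rough numbers on the saving Ray describes above (my own back-of-envelope sketch, assuming 32-bit RGBA colour buffers; actual traffic will include other overheads):

[code]
# Per-eye, per-frame transfer size in the SLI + DSR case described above.
# The 4 bytes per pixel (RGBA8) figure is an assumption, not from Ray's email.
BYTES_PER_PIXEL = 4

def frame_bytes(width, height):
    return width * height * BYTES_PER_PIXEL

render  = frame_bytes(3840, 2160)   # DSR rendering resolution (4K)
scanout = frame_bytes(1920, 1080)   # actual scan-out resolution (1080p)

print(f"Before fix: {render / 2**20:.1f} MiB per eye per frame")   # ~31.6 MiB
print(f"After fix:  {scanout / 2**20:.1f} MiB per eye per frame")  # ~7.9 MiB
print(f"Reduction:  {render / scanout:.0f}x")                      # 4x
[/code]

So scaling down before the cross-GPU transfer cuts the per-frame traffic by roughly 4x in that specific case, which is why the fix only helps when SLI + DSR transfers are the actual bottleneck.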




Our reply:



Hi Ray,

Thank you for the email. It is good to see that this is being tracked as a separate bug now; I must admit I had assumed it already was, and I was disheartened to learn that the issue seemed to have been closed. We all hope that it shall not be prioritised so low that there is little to no progress :)

Did development find out why we saw no improvement? Going from a ~60% performance drop to a ~40% performance drop would be a substantial improvement if it were seen in the fix, and we would appreciate that a great deal. Unfortunately, we saw no such change (or any change at all).
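
For concreteness, the difference between those two drops is larger than it may first appear (a quick sketch, with an assumed 100fps 2D baseline):

[code]
# FPS after each performance drop, for an assumed 100fps 2D baseline.
base_fps = 100
fps_after_60_drop = base_fps * (1 - 0.60)  # 40 fps
fps_after_40_drop = base_fps * (1 - 0.40)  # 60 fps

# Going from a 60% drop to a 40% drop is a 50% frame rate gain:
print((fps_after_40_drop - fps_after_60_drop) / fps_after_60_drop)  # 0.5
[/code]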

I would also like to point out that you seem to have debunked the old myth that 3D Vision reduces performance by 50% (and by extension the myth that a 50% performance drop is acceptable): nVidia's own testing apparently showed that even with this preliminary "fix", the performance drop was only ~40%. I hope this goes a long way towards showing that even a 50% drop in performance is not something we all ought to expect as natural with nVidia's Stereo3D technologies. The myth that double the processing power is required because the observer sees from two perspectives simply does not hold true, nor should it.

Intriguingly, during the Pascal launch and PR campaign, nVidia showed something called "single pass stereo" for VR, which had little to no performance impact compared to the high GPU requirements of both VR and Stereo3D (3D Vision) as traditionally processed. Could single pass stereo be incorporated into 3D Vision, thereby reducing the GPU workload significantly as well as bringing the CPU workload down to nominal values? nVidia already seems to have the technology for this; it would be a shame for it to go unutilised. As you say, an investigation and fix might take some time, whereas some resources invested into single pass stereo for 3D Vision ought to yield far more significant, and dare I say it, more future-proof results?


https://developer.nvidia.com/vrworks/graphics/singlepassstereo


Kind regards,
-- Shahzad

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/13/2017 06:56 PM   
This has become a bit massive now; thank you very much for staying at the forefront.

Posted 12/13/2017 09:01 PM   
Yes indeed, thanks for keeping after this, and letting us know.


I'd caution against some of your statements, however. The bottom line is that the test case/scenario is extremely important. It's not possible to make generalized conclusions that are always true.

1) Their fix was for a PCIe bandwidth bottleneck in the SLI case. That scenario can easily show an underutilized GPU and CPU at the same time, because the data transfer blocks further work. If your/our test scenario was not that same scenario, then we would not see it improve our results. That doesn't mean they are wrong; it's just a different test case. Our own tools like 3Dmigoto can introduce this: DarkStarSword fixed a specific SLI transfer problem that would show it.

2) Frame rate drops in half when using 3D. This can definitely be the case in some scenarios: if you are GPU bound and you double your GPU load, you will halve your frame rate. It seems to me that more nuance is required before stating unequivocally that the drop is less than 50%. Longer ago, when the GPU was always the bottleneck, this was likely true. Today, when CPU performance has stalled, there are clear scenarios that are not GPU bound.
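
In frame-time terms, that GPU-bound case looks like this (a sketch with assumed numbers, not measurements):

[code]
# GPU-bound case: frame time is set entirely by GPU work, so rendering
# a second eye (double the GPU load) halves the frame rate.
gpu_ms_2d = 16.7              # assumed GPU time per frame in 2D (~60fps)
gpu_ms_3d = gpu_ms_2d * 2     # two eyes rendered per presented frame

print(1000 / gpu_ms_2d)       # ~60 fps in 2D
print(1000 / gpu_ms_3d)       # ~30 fps in 3D: the full 50% drop
[/code]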


Test case always matters. For them to reproduce the problem and care about it, the test case is critical.

I'm not enthusiastic about the GTA 5 test case, because it also includes potential R* madness in the coding. They do their own Direct3D, not using Automatic mode. They could be blowing it in ways that NVidia has no control over.

For CPU bottleneck testing we should be using low resolutions (even 720p is not unreasonable) to ensure that the GPU is not a bottleneck.

I don't have a good test case in mind for this; I've lost track of which games demonstrate 3-core use. I think you could get more traction on this if we had a more compelling/clear test case.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 12/13/2017 11:47 PM   
Hi bo3b,

If you would kindly consider reading through the benchmarking results in this thread (and others), I hope you will be confident that we all go out of our way to ensure that we are not GPU bound. This is why we are careful not to use built-in 'benchmarks', because there GPU load varies depending on the scene and is uncontrolled. Rather, we carefully choose static locations in games such as GTA5 and TW3, where GPU usage can be controlled to ensure there are no GPU bottlenecks.

Indeed, you will note from our screenshots that Metal-O-Holic and I have been careful to benchmark at 720p to ensure GPU power has no impact on our testing.

We as a community have 3 main points:

1. In all scenarios where there is no GPU power bottleneck (i.e. GPU usage is well below 90%; the lower the better), the CPU bottleneck should not be producing a 50% drop in performance, and this should never be seen as an "acceptable" inherent side-effect of stereo technologies. Ideally, it shouldn't produce any CPU-related performance degradation at all, though we understand and appreciate that there will be some, due to 3D Vision sync etc.

2. Granted, GPU power requirements double in 3D Vision (this is a separate issue; see point 3 below), and we WANT them to double, because this positively indicates that the GPUs are being properly utilised. The whole problem is that 3D Vision causes a reduction in CPU utilisation, thereby also preventing the GPUs from being utilised to their full potential, thus causing an FPS reduction while 3D Vision is active.

Example:

3D Vision OFF:
CPU Usage: 50% (4 core CPU, 2 threaded game, fully utilising and maxing out 2 out of 4 cores)
GPU usage: 40%
FPS = 60


Ideal scenario in which 3D Vision works perfectly:

3D Vision ON:
CPU Usage: 50% <-- No decrease in CPU usage
GPU usage: 80% <-- Double GPU usage for 3D Vision requiring double the processing power
FPS = 60 <-- Perfect


In reality, the current CPU-bottlenecked 3D Vision scenario:

3D Vision ON:
CPU Usage: 35% <-- If anything, CPU usage should increase to >50%, not decrease.
GPU usage: 60% <-- Curtailed GPU usage
FPS = 40 <-- Severe drop in performance


3. Using single pass stereo, if it can indeed be implemented for 3D Vision, a fixed 3D Vision driver might look like this (much closer to the Compatibility Mode performance hit, where the 3D Vision CPU bottleneck does not manifest):


3D Vision ON:
CPU Usage: 50%
GPU usage: 40% <-- Single Pass Stereo causing no significant GPU usage increase
FPS = 60 <-- Perfect


Granted, the ideal scenarios above are absolutely best-case, but they hopefully get the point across in this somewhat complex problem.
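
Those three scenarios can be summarised with a simple bottleneck model in which frame time is whichever of the CPU and GPU paths is slower (a sketch; the numbers are the illustrative ones above, not measurements):

[code]
# Frame rate is limited by the slower of the per-frame CPU and GPU work.
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

print(fps(16.7, 8.3))    # 3D OFF: CPU-limited at ~60fps, GPU has headroom
print(fps(16.7, 16.6))   # Ideal 3D ON: GPU work doubles, CPU still limits -> ~60fps
print(fps(25.0, 16.6))   # Actual 3D ON: driver stretches the CPU path -> ~40fps
[/code]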

==

Re: the '3-core bottleneck'. The problem isn't that the 3D Vision driver makes the game become locked to 3 cores. That is a gross over-simplification of how the problem manifests in GTA5 specifically.

A more accurate universal description is that the 3D Vision driver causes a significant reduction in CPU usage (a CPU bottleneck), which leads to a significant reduction in GPU usage (assuming no GPU bottleneck), which in turn leads to a severe reduction in 3D Vision FPS. The number of cores used depends entirely on the game and how well it was designed to handle multi-threading.

This is proven by Metal-O-Holic's The Witcher 3 3D Vision tests, which show that 3D Vision is even able to utilise 6 cores / 12 threads: there is no "3 core limit". Furthermore, the problem also occurs in games which use fewer than 3 threads/cores, further proving that there is no 3-core limit per se.
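
For anyone wanting to verify this on their own machine, per-core usage is easy to log while a game runs (a sketch using the third-party psutil package):

[code]
# Log per-core CPU usage once a second while the game runs.
# A 3D Vision CPU bottleneck shows up as the busy cores all dropping
# in usage together when 3D is enabled, not as a fixed "3 core" cap.
import psutil

for _ in range(30):
    print(psutil.cpu_percent(interval=1.0, percpu=True))
[/code]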

If you would kindly care to read through the many games benchmarked here, you will note that they all suffer from the CPU bottleneck problem to some significant degree (all tested after first ensuring there is no GPU bottleneck), not just GTA5.

GTA5 is a good case to work with, as it is popular and makes good use of the CPU, and we can be more confident in R*'s coding capability than in most other dev houses'. If we fix GTA5, chances are that we fix other games too. Indeed, we have been asking nVidia to kindly look into general fixes, not ones concerning GTA5 only, as explained in my email on the previous page.

I think you might grow to understand and appreciate the problem and possible solutions in more depth than anyone else once you maybe get 2x SLI Volta 1180 Tis in your system next year :)

All the best.

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/14/2017 01:40 AM   
Thanks for persisting with this everyone, Rage in particular.

Posted 12/14/2017 11:19 AM   
Thanks again RAGEdemon, the chance of a possible fix looks more realistic now.

Asus Deluxe Gen3, Core i7 2700K @ 4.5GHz, GTX 1080 Ti, 16 GB RAM, Win 7 64-bit
Samsung Pro 250 GB SSD, 4 TB WD Black (games)
Benq XL2720Z

Posted 12/14/2017 12:39 PM   
Hi Guys,

Thank you to everyone helping by doing tests, giving insight, opinions, and encouragement. This is a community effort spanning many people and many threads/messages/emails; we can only do this together.

Thank you rustyk21 for doing the tests regarding the "fixed" driver that our man Ray mentions below in his latest message...


Hi Shahzad,

The development manager had committed to investing more effort into further improvements, and while it was on his project roadmap, he didn't submit a bug to track the effort until he could better scope out the work needed and which release to target. The delay in actually submitting the bug won't have any impact in terms of priority or scheduling. I just wanted to make clear this wasn't something they only decided to do now: it was always on his roadmap when I first communicated that they plan to invest more effort in looking for further improvement.

I also requested more clarification on the recent fix, and why customers in the field are not seeing the benefit we measured internally. I provided the benchmark results from "rustyk21" with SLI + DSR for development to review, but have not heard back yet. I have also requested more details of our internal benchmark and any guidelines that may help. I'll update once I hear back.


Best regards,
Ray

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/18/2017 12:20 AM   
[quote="RAGEdemon"] I do not believe he has those games. I can compare 4 core 7700K to my old 6 core Xeon x5660/i7-980X: [color="green"] [b]Re: The Witcher 3:[/b][/color] Ultra preset Resolution:720p VSync OFF All Hairworks OFF. Novigrad Square signpost, look to the right, up the road. [color="green"]4-core[/color] 7700K @5GHz, 4.7GHz Cache/Uncore. AVX Offset 0. MCE ON. 3600MHz 15-15-15-35 DDR4 Memory: 3D Vision ON: FPS [color="green"]73fps[/color] [color="green"]6-core[/color] i7-980X / x5660 @ 4.4GHz, 1600MHz DDR3 memory: 3D Vision ON: FPS [color="green"]41fps[/color] [color="green"]Conclusion:[/color] x5660 @4.4 > 7700K @5GHz IPC x Clock performance increase (Cinebench Single Thread): [color="green"]35%[/color] 3D Vision performance increase: (73-41)/41 = [color="green"]78%[/color] [/quote] Remember these tests you made, where I also got 73fps? Well, I'm trying again and with the latest 3Dmigoto I'm getting 89fps there in 3D at 720p. With the 3Dmigoto build of the fix, I get 2 or 3 less fps. Without 3Dmigoto, around 102fps. With the latest 3Dmigoto (it shows 89.7fps): [img]http://i.cubeupload.com/46tJ1M.jpg[/img] Either I was testing in the wrong place that other time, or something has improved. And I'm using the recent Intel fix for the recently discovered security bug. Can you test that place again, RAGEdemon? Just to be sure. I'm using the 388.31 drivers, my CPU at 4.9GHz, and RAM at 3866MHz. Note that this isn't the most demanding view of the game. In that same plaza I can get barely 60fps when I look at all the people there, with the latest 3Dmigoto (a bit under 60fps with the build of the fix). Edit: with grass distance at maximum and shadows at ultra, I get 80-82fps. With just shadows at ultra and grass distance at high, I get 86fps. But sometimes it varies per run and I get around 86fps with my usual graphics configuration. Still better than those 73fps I got before.
RAGEdemon said:
I do not believe he has those games. I can compare my 4-core 7700K to my old 6-core Xeon X5660/i7-980X:

Re: The Witcher 3:

Ultra preset
Resolution:720p
VSync OFF
All Hairworks OFF.
Novigrad Square signpost, look to the right, up the road.


4-core 7700K @5GHz, 4.7GHz Cache/Uncore. AVX Offset 0. MCE ON. 3600MHz 15-15-15-35 DDR4 Memory:
3D Vision ON:
FPS 73fps

6-core i7-980X / x5660 @ 4.4GHz, 1600MHz DDR3 memory:
3D Vision ON:
FPS 41fps

Conclusion:
X5660 @ 4.4GHz -> 7700K @ 5GHz IPC x clock performance increase (Cinebench single thread): 35%

3D Vision performance increase: (73-41)/41 = 78%



Remember these tests you made, where I also got 73fps? Well, I'm trying again, and with the latest 3Dmigoto I'm getting 89fps there in 3D at 720p. With the 3Dmigoto build that ships with the fix, I get 2 or 3 fewer fps. Without 3Dmigoto, around 102fps.

With the latest 3Dmigoto (it shows 89.7fps):

[img]http://i.cubeupload.com/46tJ1M.jpg[/img]

Either I was testing in the wrong place that other time, or something has improved. And I'm using the recent Intel fix for the recently discovered security bug.


Can you test that place again, RAGEdemon? Just to be sure. I'm using the 388.31 drivers, my CPU at 4.9GHz, and RAM at 3866MHz.

Note that this isn't the most demanding view in the game. In that same plaza I can barely get 60fps when I look at all the people there, with the latest 3Dmigoto (a bit under 60fps with the build that ships with the fix).


Edit: with grass distance at maximum and shadows at ultra, I get 80-82fps. With just shadows at ultra and grass distance at high, I get 86fps. But sometimes it varies per run and I get around 86fps with my usual graphics configuration. Still better than those 73fps I got before.

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

Posted 01/07/2018 09:52 AM   
That's very interesting, masterotaku. Forgive me, but when you talk about the latest 3Dmigoto, I don't know what you mean. I am unfortunately not a shader fixer (though ever grateful to those who are) and don't tend to be able to follow developments.

I downloaded the latest Witcher 3 fix from HelixMod, v1.33, to try now.

Driver: 388.59.

Here are my results:
3D Vision OFF in the control panel: 149 fps
3D Vision ON in the control panel but toggled OFF: 142 fps
3D Vision ON: 79 fps

Indeed, performance has increased from 73 to 79 fps.

If you can guide me on installing the latest 3Dmigoto, I shall gladly give you the results you require.

Kind regards,
-- Shahzad

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 01/08/2018 02:31 PM   
[quote="RAGEdemon"]Forgive me but when you talk about the latest 3Dmigoto, I don't know what you mean - I am ever grateful to those who are, but I am unfortunately not a shader fixer and don't tend to ba able to follow developments. If you can guide me on installing the latest 3Dmigoto, I shall gladly give you the results you require. [/quote] No problem :). Download the latest 3Dmigoto here (https://github.com/bo3b/3Dmigoto/releases), which is now "3Dmigoto-1.2.68.zip". Paste the files that are inside "x64" into the "The Witcher 3 - Wild Hunt\bin\x64" folder, overwriting whatever it asks you to overwrite. And then, use my "d3dx.ini" from here: https://s3.amazonaws.com/masterotaku/The+Witcher+3/d3dx.ini To update 3Dmigoto for any game, it usually consists of seeing what custom things "d3dx.ini" has and then copying them into the new file. The fix for this game was a bunch of versions ago, so I had to pay good attention to all custom resources and stuff that don't align well when using WinMerge. If you want to do the same for other games, remember that 3Dmigoto 1.2.68 doesn't work right with many games and we'll have to wait for a new version. But with the games it works, it has better CPU performance than the previous version. Edit: now about the test you have done: when did we get better performance, I wonder?
RAGEdemon said: Forgive me, but when you talk about the latest 3Dmigoto, I don't know what you mean. I am unfortunately not a shader fixer (though ever grateful to those who are) and don't tend to be able to follow developments.


If you can guide me on installing the latest 3Dmigoto, I shall gladly give you the results you require.


No problem :). Download the latest 3Dmigoto here (https://github.com/bo3b/3Dmigoto/releases); right now that is "3Dmigoto-1.2.68.zip". Paste the files inside "x64" into the "The Witcher 3 - Wild Hunt\bin\x64" folder, overwriting whatever it asks you to overwrite. Then use my "d3dx.ini" from here: https://s3.amazonaws.com/masterotaku/The+Witcher+3/d3dx.ini

Updating 3Dmigoto for a game usually consists of seeing what custom things the old "d3dx.ini" has and then copying them into the new file. The fix for this game was made a good few versions ago, so I had to pay close attention to all the custom resources and other sections that don't align well when diffing with WinMerge.
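
If it helps, the back-up-and-copy part can be scripted; here is a hypothetical helper (the paths are examples and assumptions, and the custom d3dx.ini sections still have to be merged by hand as described above):

[code]
import shutil
from pathlib import Path

# Example paths -- adjust to your own install; these are assumptions.
migoto_x64 = Path(r"C:\Downloads\3Dmigoto-1.2.68\x64")
game_bin   = Path(r"C:\Games\The Witcher 3 - Wild Hunt\bin\x64")

# Keep a copy of the fix's d3dx.ini so its custom sections can be
# merged into the new file afterwards (e.g. with WinMerge).
shutil.copy2(game_bin / "d3dx.ini", game_bin / "d3dx.ini.bak")

# Overwrite the old 3Dmigoto files with the new build.
for f in migoto_x64.iterdir():
    if f.is_file():
        shutil.copy2(f, game_bin / f.name)
[/code]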

If you want to do the same for other games, remember that 3Dmigoto 1.2.68 doesn't work right with many games, and we'll have to wait for a new version. But in the games where it does work, it has better CPU performance than the previous version.


Edit: now, about the test you have done: when did we get the better performance, I wonder?

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: Gainward Phoenix 1080 GLH
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

Posted 01/08/2018 04:48 PM   
Yeah, I don't think I have ever seen a game with 50% worse performance in 3D, but it varies game by game.
I think the real heartening thing that's come to light through all this is that they actually still have a "stereo" team.

i7-4790K CPU, 4.8GHz stable overclock
16 GB RAM Corsair
EVGA 1080TI SLI
Samsung SSD 840Pro
ASUS Z97-WS
3D Surround ASUS Rog Swift PG278Q(R), 2x PG278Q (yes it works)
Obutto R3volution.
Windows 10 pro 64x (Windows 7 Dual boot)

Posted 01/08/2018 10:53 PM   