NVIDIA Working on Something Big!
OK, back for more. Here is what I think should be a definitive test for Arkham Asylum.

I set the resolution to 1080p@120Hz, which is downscaled by the driver to my 1280x720 display. This is the way I can set the resolution larger to load the GPU. It definitely draws as 1080p, all on screen text is much smaller and it definitely impacts performance. (also smooths out jaggies, which was the original point.)

Same spot in game, same view, made sure to see 99% GPU usage in both cases so we are limited by GPU, not frame caps or CPU. Single GTX 580. All game settings at max, Physx at max, but on dedicated 580 so unlikely to impact. Driver 331.65.


3D DISABLED, after reboot (not just off):

http://bo3b.net/bataa/3D_disabled_1080p.JPG


3D ON, no reboot (after medical test):

http://bo3b.net/bataa/3D_on_1080p.JPG


In game images, with EVGA overlay. Please note that these snapshots are 1080p in size.

DISABLED:

http://bo3b.net/bataa/ShippingPC-BmGame_2013_12_22_16_55_35_498.jpg


ON:

http://bo3b.net/bataa/ShippingPC-BmGame01_85.jpg


Summary:

3D Disabled: 47 fps
3D On: 22 fps

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 12/23/2013 01:22 AM   
http://3dvision-blog.com/9196-geforce-gtx-780-ti-game-benchmarks-in-stereo-3d-and-120hz-2d-mode/
http://3dvision-blog.com/7896-playing-with-nvidia-geforce-gtx-690-in-stereoscopic-3d-mode/

Co-founder of helixmod.blog.com

If you like one of my helixmod patches and want to donate. Can send to me through paypal - eqzitara@yahoo.com

Posted 12/23/2013 03:31 AM   
[quote="Volnaiskra"][quote="bo3b"][quote="RAGEdemon"][i]Once again, if you have a powerful enough GPU that is able to attain FPS at the refresh rate set, you will see no difference with GSync.[/i] [/quote]You seem to be confusing us with other people. No one here is arguing this point.[/quote]I am. And I have no doubt that time will prove me right. I think people are right to be skeptical about just how much of an improvement it will make. But I haven't read any compelling arguments on this thread to convince me that it won't make any difference at all. If gsync truly won't make any difference in a 60fps & 60Hz environment, then all those glowing reviews from Carmack et al, from Anandtech and other review sites - from basically everyone who's ever seen it in action - are completely bogus: something akin to mass hallucination. Those people all said they saw unprecendented smoothness. And as I understand it, they were watching a 60fps scenario[/quote]Ah, OK, sorry for misinterpreting that. From my understanding, RageDemon is right here. If you can [i]sustain[/i] higher than the frame rate, then there will be no difference with GSync. Remember, GSync can [i]delay [/i]a frame, but it cannot speed one up. Taking a closer look at your diagram, shows a couple of things that don't seem right to me. Part way down, you show a regular 10Hz screen with the balls moving forwards and backwards. This cannot be, the shown ball cannot go backwards in time. That frame is already drawn, gone. Whatever was on the screen is what you saw. Also, in the case when triple-buffering is disabled, which is what we want to improve latency, you still have a single frame buffering, because all drawing is done to a backbuffer, then swapped at refresh time. After the backbuffer is drawn, it waits for the refresh, because it cannot draw overtop either the visible frontbuffer, and it would waste resources to start drawing the backbuffer again. So the far right 3 balls would never happen, because the backbuffer is done, and it would stall instead. Next, in your GSync diagram at the very bottom, you show it drawing whenever it is ready. This also seems wrong to me, because GSync can delay a frame, but cannot advance a frame. So for example, 2nd ball from the left would show up on the refresh line, not at that time shown. 3rd ball would be a GSync case, where the lines would expand to delay that frame and it would show up as shown. 4th ball would have to delay for a full .1 second though. 5th ball would likely show up properly because of the time delay earlier. It's not possible to make the monitor refresh faster than its native rate, and NVidia does not claim to do that. They specifically say that they do not change the fastest timing to avoid a lot of complications with LCD refresh times themselves.
Volnaiskra said:
bo3b said:
RAGEdemon said: Once again, if you have a powerful enough GPU that is able to attain FPS at the refresh rate set, you will see no difference with GSync.
You seem to be confusing us with other people. No one here is arguing this point.
I am. And I have no doubt that time will prove me right. I think people are right to be skeptical about just how much of an improvement it will make. But I haven't read any compelling arguments on this thread to convince me that it won't make any difference at all.

If gsync truly won't make any difference in a 60fps & 60Hz environment, then all those glowing reviews from Carmack et al, from Anandtech and other review sites - from basically everyone who's ever seen it in action - are completely bogus: something akin to mass hallucination. Those people all said they saw unprecedented smoothness. And as I understand it, they were watching a 60fps scenario.
Ah, OK, sorry for misinterpreting that.

From my understanding, RageDemon is right here. If you can sustain a frame rate higher than the refresh rate, then there will be no difference with GSync. Remember, GSync can delay a frame, but it cannot speed one up.

Taking a closer look at your diagram shows a couple of things that don't seem right to me. Partway down, you show a regular 10Hz screen with the balls moving forwards and backwards. This cannot be; the shown ball cannot go backwards in time. That frame is already drawn, gone. Whatever was on the screen is what you saw.

Also, in the case where triple-buffering is disabled, which is what we want for improved latency, you still have a single frame of buffering, because all drawing is done to a backbuffer, which is then swapped at refresh time. After the backbuffer is drawn, it waits for the refresh, because it cannot draw over the visible frontbuffer, and it would waste resources to start drawing the backbuffer again. So the far-right 3 balls would never happen; the backbuffer is done, and it would stall instead.


Next, in your GSync diagram at the very bottom, you show it drawing whenever it is ready. This also seems wrong to me, because GSync can delay a frame, but cannot advance a frame. So for example, 2nd ball from the left would show up on the refresh line, not at that time shown. 3rd ball would be a GSync case, where the lines would expand to delay that frame and it would show up as shown. 4th ball would have to delay for a full .1 second though. 5th ball would likely show up properly because of the time delay earlier.

It's not possible to make the monitor refresh faster than its native rate, and NVidia does not claim to do that. They specifically say that they do not change the fastest timing to avoid a lot of complications with LCD refresh times themselves.
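
Here's a tiny sketch of that 'delay, never advance' model, with made-up frame times (purely illustrative, not anything from NVidia's docs):

# Toy model: G-Sync can delay a refresh until the frame is ready, but it can
# never refresh faster than the panel's native rate. All times are invented.

NATIVE_HZ = 120
MIN_INTERVAL = 1.0 / NATIVE_HZ          # fastest possible refresh-to-refresh gap (s)

def gsync_display_times(frame_ready_times):
    """Time each frame is scanned out, assuming the previous refresh was at t=0."""
    display_times = []
    last_display = 0.0
    for ready in frame_ready_times:
        # Wait for the frame (delay), but never start a refresh sooner than
        # MIN_INTERVAL after the previous one (no advance).
        last_display = max(ready, last_display + MIN_INTERVAL)
        display_times.append(last_display)
    return display_times

# GPU that always finishes within the refresh interval: the cadence is the
# same fixed ~8.3ms steps you would get from plain 120Hz vsync (no benefit).
print(gsync_display_times([0.002, 0.010, 0.018, 0.026]))

# GPU that misses one interval: the late frame (ready at 20ms) is shown the
# moment it is ready, instead of being held until the 25ms tick as fixed
# vsync would do.
print(gsync_display_times([0.002, 0.020, 0.028, 0.036]))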

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 12/23/2013 09:02 AM   
Some fair points there. So the crux of the matter then is that the grey lines in my diagram are able to get further apart, but not closer together. So gsync would still help with, say, a game that is capped at 60fps on a 120hz monitor, and possibly in a 3Dvision scenario. But not so in the case of 120fps on a 120hz monitor - or 144 on a 144hz one - it doesn't matter. (though really who cares - like you've said before, no one's getting a rock-steady 144fps in their games unless they're playing games from 2005).

But something still doesn't add up though. All of these eye witnesses - tech-savvy people from the industry and tech reviewers - reported seeing unprecedented smoothness. I read report after report where they raved about it. But what you're saying is that what they were seeing was *no better* than a game running at a max framerate on today's hardware. In other words, these highly tech-savvy people (some of whom benchmark games on high-end hardware for a living) must never before have seen a game run at 120fps on a 120hz monitor....allowing them to be blown away when they saw a gsync monitor, even though what they were seeing was merely equivalent or inferior (but never superior) to 120fps on a 120hz monitor with vsync. Don't you find that hard to believe?

Perhaps there are other factors that we're not understanding. I'll propose one:

CURRENTLY

A regular monitor refreshes every xxx microseconds. Once you turn the monitor on, you start the cycle, and it then becomes a regular rhythm: kind of like a clock that keeps ticking at regular intervals. It will continue ticking (refreshing) at that interval ad nauseam.

When a GPU creates a frame, it needs to wait for the next 'tick' to come around. The GPU's rhythm has no relation to the monitor's rhythm, so chances are that this frame will be partway through the cycle, and the frame will be gathering dust and becoming out of sync with the action while it waits for the next tick.

Eventually, the tick comes around and the frame gets displayed. Now it's time for the next frame. Either this is a new frame, or it's one that was pre-prepared while the GPU was twiddling its thumbs a moment ago (i.e. a double- or triple-buffered frame). If it's a double- or triple-buffered frame, then it's already significantly out of sync, since it originated during the previous refresh cycle. If it's a new frame, then it won't be as out of sync. But it will still have to wait for the next tick.

This dance continues, and even though our GPU manages to pump out a maximum framerate, the frames are being pumped out to a different rhythm than the one the monitor is showing them at. Even if the frames are coming out like clockwork, they are still all slightly out of sync with the refresh rate (albeit all out of sync by the same amount).
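
To put the 'waiting for the next tick' idea in concrete terms, here's a rough sketch with made-up frame times; the exact numbers don't matter, only the snapping to the monitor's rhythm:

import math

REFRESH_HZ = 60
TICK = 1.0 / REFRESH_HZ            # the monitor ticks every ~16.7ms, like clockwork

def next_tick(t):
    """Time of the first refresh at or after t (the monitor's own rhythm)."""
    return math.ceil(t / TICK) * TICK

# Frames finish on the GPU's rhythm, which has no relation to the monitor's.
frame_ready = [0.003, 0.021, 0.038, 0.055]     # made-up completion times in seconds

for ready in frame_ready:
    shown = next_tick(ready)
    print(f"ready {ready * 1000:5.1f}ms -> shown {shown * 1000:5.1f}ms "
          f"(sat waiting {(shown - ready) * 1000:4.1f}ms for the tick)")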


WITH GSYNC:

The GPU cannot force the monitor to refresh faster than its native refresh rate. But it CAN tell it when to start the refresh cycle. Previously, the refresh cycle started whenever the monitor initiated it, and continued independently of the GPU. Now, the GPU is able to hold the refresh back until it's ready to start, kind of like how a starting pistol is used in a race (waaaaaaaaaait - bang - go!)

So, the GPU produces a frame. Once it decides it's ready, it tells the monitor "Go! display it now!". From here on in, the process looks much the same as it did pre-gsync. But now, the rhythm of the monitor and the rhythm of the GPU are more closely aligned. The GPU is still spitting out frames as fast as before, and the screen is still displaying them as fast as before, but the latency between the two has been minimised, rather than simply left to chance. In other words, we get the same framerate but with reduced input lag.

There's a second factor that will come into play here too. The Gsync system has the ability to autocorrect itself. Under the old vsync system, any aberrant frames that take longer than average to produce will likely throw the whole rhythm out. So, if all the frames have a latency of 3ms, and then one comes along that has a latency of 5ms, the 2ms delay may cause all subsequent frames to be shown with a latency of 5ms, because the monitor is going at its own rhythm so will not wait for the GPU to catch up.

On Gsync, however, the system can autocorrect for this one aberrant frame. So if the frames are all shown at 3ms, then one bad boy comes along at 5ms, the GPU can tell the screen to pause for 2ms, so that all subsequent frames continue to be shown at 3ms. The result will be a smoother experience. On paper, it wouldn't look like an improvement, and in fact Fraps may report this particular second as dropping to 59fps instead of 60fps. But the end result will still be smoother, as that one tiny delay allowed the other 99% of the frames to display at a lower latency.

Does that make sense?

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

Posted 12/23/2013 10:58 AM   
[quote="Volnaiskra"] Does that make sense?[/quote] Well yes and no. 1st off we have to understand that to demonstrate the G-Sync technology to everyone that praised it Nvidia had to create a setup in which stuttering happens so that simply enabling G-Sync smoothed it out. I think everyone agrees that there are two situations where this can happen (with different degrees of visual 'stuttering') the 1st being not maintaining 60 FPS on a 60hz monitor and the other not maintaining 120 FPS on a 120hz monitor. Both situations stuttering can easily be produced by having to much "eye candy" turned on a underpowered GPU. Given the former is more jarring then the latter my guess is that Nvidia demoed a less than ideal setup driving a 60hz monitor. Since a modern LCD display 'stores' the last frame given to it Nvidia's G-Sync delays the blanking signal so that the display continues to show the last frame until the next frame is ready. As noted in the article shown above the delaying of the VBLANK is not unlimited but may be enough for the GPU to catch up with another frame and create a "smoother experience". However all the above can be avoided by having "eye candy" turned down and/or enough GPU power to maintain a 60hz FPS refresh rate to match your monitor.
Volnaiskra said:
Does that make sense?


Well yes and no.

First off, we have to understand that to demonstrate the G-Sync technology to everyone who praised it, Nvidia had to create a setup in which stuttering happens, so that simply enabling G-Sync smoothed it out.

I think everyone agrees that there are two situations where this can happen (with different degrees of visual 'stuttering'): the first is not maintaining 60 FPS on a 60Hz monitor, and the other is not maintaining 120 FPS on a 120Hz monitor. In both situations, stuttering can easily be produced by having too much "eye candy" turned on with an underpowered GPU. Given that the former is more jarring than the latter, my guess is that Nvidia demoed a less-than-ideal setup driving a 60Hz monitor.

Since a modern LCD display 'stores' the last frame given to it, Nvidia's G-Sync delays the blanking signal so that the display continues to show the last frame until the next frame is ready. As noted in the article linked above, the delaying of the VBLANK is not unlimited, but it may be enough for the GPU to catch up with another frame and create a "smoother experience".

However, all of the above can be avoided by turning the "eye candy" down and/or having enough GPU power to maintain an FPS that matches your monitor's refresh rate.

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

Posted 12/23/2013 03:17 PM   
[quote="mbloof"] I think everyone agrees that there are two situations where this can happen (with different degrees of visual 'stuttering') the 1st being not maintaining 60 FPS on a 60hz monitor and the other not maintaining 120 FPS on a 120hz monitor. Both situations stuttering can easily be produced by having to much "eye candy" turned on a underpowered GPU. Given the former is more jarring then the latter my guess is that Nvidia demoed a less than ideal setup driving a 60hz monitor. [/quote]Like I said, for the audience/reviewers to be sufficiently wowed by that, it basically suggests that they had never before seen a game running at 120fps before. Or that they have, but they had forgotten how smooth it was. Then, when they gsync, they were amazed at the smoothness, not realising that it was merely the same level of smoothness they could get on a 120hz monitor any day of the week. Either way, it suggests a level of gullibility across the board that I'm not willing to accept. Contrary to popular opinion, most people aren't stupid.
mbloof said:
I think everyone agrees that there are two situations where this can happen (with different degrees of visual 'stuttering'): the first is not maintaining 60 FPS on a 60Hz monitor, and the other is not maintaining 120 FPS on a 120Hz monitor. In both situations, stuttering can easily be produced by having too much "eye candy" turned on with an underpowered GPU. Given that the former is more jarring than the latter, my guess is that Nvidia demoed a less-than-ideal setup driving a 60Hz monitor.
Like I said, for the audience/reviewers to be sufficiently wowed by that, it basically suggests that they had never seen a game running at 120fps before. Or that they have, but they had forgotten how smooth it was. Then, when they saw gsync, they were amazed at the smoothness, not realising that it was merely the same level of smoothness they could get on a 120hz monitor any day of the week.

Either way, it suggests a level of gullibility across the board that I'm not willing to accept. Contrary to popular opinion, most people aren't stupid.

VolnaPC.com - Tips, tweaks, performance comparisons (PhysX card, SLI scaling, etc)

Posted 12/24/2013 02:38 AM   
[quote="Volnaiskra"] Either way, it suggests a level of gullibility across the board that I'm not willing to accept. Contrary to popular opinion, most people aren't stupid. [/quote] No they are not. They don't have to be to see the difference 1st hand. Take some stuttering game play - enable your invention and watch smooth(er) game play. The thing is that this will help at 120hz and 3D as well. It just may not be as visible. I'm also trying to reverse engineer why Nvidia did not implement the entire thing on the GPU card itself. In the old days the "GPU" was just a Motorola programmable video controller (just a fancy word for a state machine) which simply read data between two programmable memory locations, translated the data into RGB values (actually an off GPU "Pallet chip took care of that) and fed the analog RGB values with V/H sync signals to drive a CRT monitor. Early PC programs waited for the vertical blanking period and then tried to rewrite the whole whooping 64KB video buffer with a new frame during the blanking period. This was rather problematic. Then a game developer came along and revolutionized PC gaming by simply rewriting the memory pointers to another 64KB of RAM thereby giving them the majority of the 16ms to create a new frame and then do 2-4 I/O writes to swap the frame buffer during the blanking period. GPU/Displays have operated with dual buffers since the early 90's. While one frame is being read out to the screen the other is being drawn/painted. However there's always an issue when the the page being drawn is finished before its time to display it or the draw time takes longer than the refresh rate. The game designer has to make a decision - ether throw out the frame and draw a new one (and risk missing the blanking signal and having to retransmit/repeat the current frame) or slow the game engine down and wait to switch buffers. In ether case we get stuttering. Some programs would use multiple buffers in a so called ring (or circular, this is akin to our modern day so called 'triple buffering') it however introduced up to 1-3 frame delay between operator input and what was displayed on the screen. With a variable frame rate there needed to be a way to make the display rate variable. Considering modern LCD's are effectively a 'frame buffer' the key to G-Sync is to get the display to 'sample and hold for a while' and off load a frame (when needed) to the device on the back of the monitor. (recall that the device that gets fitted to the back of the monitor has enough memory to act as a buffer in itself) My theory is the 'rubber banding' of the GPU's variable frame draw rate is hence buffered by holding the currently displayed image longer and allowing the GPU to offload a frame to the attached ASIC+Memory on the attached device. By dynamically using the device's memory as an extra buffer along with dynamically slowing down the sync rate G-Sync could smooth out the game play experience for not only those with slow or underpowered GPU's but those with high powered rigs driving 120hz monitors. Could G-Sync smooth out 3DVision? While that depends on how much GPU you have to toss at it, tests have shown that if there's plenty of GPU power in reserve the performance hit might only be ~35%. However if the rig is GPU limited running in 2D switching on 3DVision might cost more than 50% of the FPS to drive it! 
While most of us still question the value of G-Sync when using top-of-the-line GPU's and systems however all the industry experts and press (along with myself) would be fairly amazed that G-Sync could make a low end GPU card provide a smoother and playable game play experience. I'm still of the opinion that making their high end chips/cards unneeded is kinda like 'shooting themselves in the foot' it starts to make sense when you consider mobile GPU's and laptops and how useless they currently are for 3DVision let alone playing 2D games. I'm expecting Nvidia to reintroduce 3DVision with G-Sync on ultrabook/mbp type devices.
Volnaiskra said:
Either way, it suggests a level of gullibility across the board that I'm not willing to accept. Contrary to popular opinion, most people aren't stupid.


No, they are not. They don't have to be to see the difference first-hand. Take some stuttering gameplay, enable your invention, and watch smooth(er) gameplay.

The thing is that this will help at 120hz and 3D as well. It just may not be as visible. I'm also trying to reverse engineer why Nvidia did not implement the entire thing on the GPU card itself.

In the old days the "GPU" was just a Motorola programmable video controller (just a fancy word for a state machine) which simply read data between two programmable memory locations, translated the data into RGB values (actually an off-GPU 'palette' chip took care of that), and fed the analog RGB values with V/H sync signals to drive a CRT monitor. Early PC programs waited for the vertical blanking period and then tried to rewrite the whole whopping 64KB video buffer with a new frame during the blanking period. This was rather problematic. Then a game developer came along and revolutionized PC gaming by simply rewriting the memory pointers to another 64KB of RAM, thereby giving them the majority of the 16ms to create a new frame, and then doing 2-4 I/O writes to swap the frame buffer during the blanking period.

GPUs/displays have operated with dual buffers since the early 90's. While one frame is being read out to the screen, the other is being drawn/painted. However, there's always an issue when the page being drawn is finished before it's time to display it, or when the draw time takes longer than the refresh interval. The game designer has to make a decision - either throw out the frame and draw a new one (and risk missing the blanking signal and having to retransmit/repeat the current frame), or slow the game engine down and wait to switch buffers. In either case we get stuttering.
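
A bare-bones sketch of that dual-buffer arrangement (sizes and names are arbitrary, just to show that the swap is a pointer exchange and a missed swap repeats the old frame):

# Two buffers; "swapping" is only an exchange of which one the scanout reads,
# not a copy of pixel data. 64KB matches the old example above.
front_buffer = bytearray(64 * 1024)    # currently being scanned out to the display
back_buffer = bytearray(64 * 1024)     # currently being drawn into by the game
back_buffer_ready = False

def on_vblank():
    """Called at each blanking interval."""
    global front_buffer, back_buffer, back_buffer_ready
    if back_buffer_ready:
        # The freshly drawn frame becomes visible; the old frame becomes the new canvas.
        front_buffer, back_buffer = back_buffer, front_buffer
        back_buffer_ready = False
    # else: the new frame missed the blanking signal, so the old frontbuffer is
    # simply shown again for another whole refresh - the stutter case above.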

Some programs would use multiple buffers in a so-called ring (or circular) arrangement - akin to our modern-day 'triple buffering' - but it introduced up to a 1-3 frame delay between operator input and what was displayed on the screen.

With a variable frame rate there needed to be a way to make the display rate variable.

Considering that modern LCDs are effectively a 'frame buffer', the key to G-Sync is to get the display to 'sample and hold for a while' and offload a frame (when needed) to the device on the back of the monitor (recall that the device fitted to the back of the monitor has enough memory to act as a buffer in itself).

My theory is that the 'rubber banding' of the GPU's variable frame draw rate is hence buffered by holding the currently displayed image longer and allowing the GPU to offload a frame to the ASIC+memory on the attached device. By dynamically using the device's memory as an extra buffer, along with dynamically slowing down the sync rate, G-Sync could smooth out the gameplay experience not only for those with slow or underpowered GPUs but also for those with high-powered rigs driving 120Hz monitors.

Could G-Sync smooth out 3DVision? While that depends on how much GPU you have to toss at it, tests have shown that if there's plenty of GPU power in reserve, the performance hit might only be ~35%. However, if the rig is GPU limited running in 2D, switching on 3DVision might cost more than 50% of the FPS to drive it!

While most of us still question the value of G-Sync when using top-of-the-line GPUs and systems, all the industry experts and press (along with myself) would be fairly amazed that G-Sync could make a low-end GPU card provide a smoother, playable gameplay experience.

I'm still of the opinion that making their high-end chips/cards unneeded is kinda like 'shooting themselves in the foot', but it starts to make sense when you consider mobile GPUs and laptops, and how useless they currently are for 3DVision, let alone playing 2D games.

I'm expecting Nvidia to reintroduce 3DVision with G-Sync on ultrabook/mbp type devices.

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

Posted 12/24/2013 04:59 AM   
Very informative post mbloof!

Just a thought, couldn't G-sync work as well in 3D as it does in 2D if we're talking frame packed or SBS delivery on a passive or glasses-free display?

Posted 12/24/2013 05:27 AM   
@Volnaiskra: Regarding the reviewers seeing it smoother. Yes definitely! There is no way that people as sharp as Anand and Carmack are making an amateur mistake here. If they say it's the best thing they've ever seen, I believe them.

I'm never going to argue with Carmack about frame rates, and refresh, and vsync. The man essentially invented the modern gaming engines.

I think the part that you might be missing is that all the demos were done on 60Hz monitors in 2D. Specifically tuned to demonstrate a worst case scenario that actually happens. If the game engine is putting out 55 fps on average, then you get wicked, worst case stutter.

Going to a 120Hz screen helps, but only because you shrank the stutter time. Some people are still badly susceptible to stutter at this level. When it stutters, you are seeing a frame held for another blanking interval. 17ms at 60Hz, 8ms at 120Hz.

In the review I referenced above Anand himself said:

What’s interesting to me about this last situation is if 120/144Hz reduces tearing enough to the point where you’re ok with it, G-Sync may be a solution to a problem you no longer care about. If you’re hyper sensitive to tearing however, there’s still value in G-Sync even at these high refresh rates.

It helps to go faster, but is still not as good as GSync.

We are not saying these people are gullible, it's just that people are wildly different in how much stutter and/or tearing annoys them.

Also directly to the point, you can still get nasty stutter at 120Hz. If your game engine is putting out 115 fps on average, and your monitor is set to 120Hz, you will still be getting a 60Hz experience. This is why I keep emphasizing that minimum frame rate is what matters. If you ever drop below your refresh rate you get these problems. If you can turn down the settings so you never drop below refresh, then GSync is not needed.
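
Here's a rough sketch of that point with invented frame times: an engine averaging around 115 fps on a 120Hz panel still overruns refresh slots, and every overrun repeats the previous image for a second refresh.

REFRESH_HZ = 120
SLOT_MS = 1000.0 / REFRESH_HZ        # ~8.3ms per refresh on a 120Hz panel

# Made-up frame times for an engine averaging roughly 115 fps.
frame_times_ms = [8.0, 8.2, 9.6, 8.1, 9.3, 8.0, 8.4, 9.8, 8.2, 9.0]

overruns = sum(1 for t in frame_times_ms if t > SLOT_MS)
avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

print(f"average ~{avg_fps:.0f} fps, but {overruns}/{len(frame_times_ms)} frames "
      f"overrun the {SLOT_MS:.1f}ms slot; each one repeats the previous image "
      f"for a second refresh (an effective 60Hz moment)")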

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 12/24/2013 07:23 AM   
[quote="bo3b"]OK, back for more. Here is what I think should be a definitive test for Arkham Asylum. I set the resolution to 1080p@120Hz, which is downscaled by the driver to my 1280x720 display. This is the way I can set the resolution larger to load the GPU. It definitely draws as 1080p, all on screen text is much smaller and it definitely impacts performance. (also smooths out jaggies, which was the original point.) Same spot in game, same view, made sure to see 99% GPU usage in both cases so we are limited by GPU, not frame caps or CPU. Single GTX 580. All game settings at max, Physx at max, but on dedicated 580 so unlikely to impact. Driver 331.65. 3D Disabled: 47 fps 3D On: 22 fps[/quote] I am currently visiting family for the holidays so can no longer test my rig. But, from your results and the links that eqzitara has provided, it would seem that there is quite a variance from one game to the next. If we take the performance hit from game to game including your results (not counting games which have obvious fps limits in or out of s3D), we have: [(2Dfps-3Dfps)/2Dfps]*100 = % drop in performance due to S3D GTX580 Batman Arkham Asylum: 53% drop GTX780 Ti Arma 3: 24% BF4: 50% CoD Ghosts: 42% Shadow Warrior: 47% WRC 4 FIA: 52% Splinter Cell Blacklist: 50% Total War Rome 2: 39% GTX 690 SLI: Batman Arkham City: 26% Skyrim: 27% Witcher 2: 54%+ From the results, it seems that modern games vary a great deal when using the 3D vision driver. What is interesting is that going SLI seems to significantly decrease the performance hit. It is a curious find. Perhaps one should look into going SLI over getting a better single card if the price is the same, comparatively speaking. bo3b, I am curious... would you enable SLi and increase your resolution to the max you can (downsampling), and post results of 3D on vs. 3D off? If your cards aren't being maxed, maybe try SGSSAA etc. On an unrelated note, but perhaps more related to the topic of this thread: I don't know if it has been mentioned before but nVidia have been working on this for a while. I wonder if this has something to do with the announcement? http://www.engadget.com/2013/07/24/nvidia-research-near-eye-light-field-display-prototype/ https://research.nvidia.com/publication/near-eye-light-field-displays [img]http://research.nvidia.com/sites/default/files/webimg.png?1373065524[/img]
bo3b said: OK, back for more. Here is what I think should be a definitive test for Arkham Asylum.

I set the resolution to 1080p@120Hz, which is downscaled by the driver to my 1280x720 display. This is the way I can set the resolution larger to load the GPU. It definitely draws as 1080p, all on screen text is much smaller and it definitely impacts performance. (also smooths out jaggies, which was the original point.)

Same spot in game, same view, made sure to see 99% GPU usage in both cases so we are limited by GPU, not frame caps or CPU. Single GTX 580. All game settings at max, Physx at max, but on dedicated 580 so unlikely to impact. Driver 331.65.

3D Disabled: 47 fps
3D On: 22 fps


I am currently visiting family for the holidays so can no longer test my rig.

But, from your results and the links that eqzitara has provided, it would seem that there is quite a variance from one game to the next.

If we take the performance hit from game to game including your results (not counting games which have obvious fps limits in or out of s3D), we have:

[(2Dfps-3Dfps)/2Dfps]*100 = % drop in performance due to S3D

GTX580
Batman Arkham Asylum: 53% drop

GTX780 Ti
Arma 3: 24%
BF4: 50%
CoD Ghosts: 42%
Shadow Warrior: 47%
WRC 4 FIA: 52%
Splinter Cell Blacklist: 50%
Total War Rome 2: 39%

GTX 690 SLI:
Batman Arkham City: 26%
Skyrim: 27%
Witcher 2: 54%+
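
As a quick sanity check, here is the formula above in code, applied to bo3b's Arkham Asylum numbers from earlier in the thread (47 fps with 3D disabled, 22 fps with 3D on):

def s3d_drop_percent(fps_2d, fps_3d):
    """[(2Dfps - 3Dfps) / 2Dfps] * 100 = % drop in performance due to S3D."""
    return (fps_2d - fps_3d) / fps_2d * 100

# bo3b's GTX 580 Arkham Asylum test: 47 fps with 3D disabled, 22 fps with 3D on.
print(f"{s3d_drop_percent(47, 22):.0f}% drop")    # prints "53% drop"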

From the results, it seems that modern games vary a great deal when using the 3D vision driver. What is interesting is that going SLI seems to significantly decrease the performance hit.

It is a curious find. Perhaps one should look into going SLI over getting a better single card if the price is the same, comparatively speaking.

bo3b, I am curious... would you enable SLI and increase your resolution to the max you can (downsampling), and post results of 3D on vs. 3D off? If your cards aren't being maxed, maybe try SGSSAA etc.

On an unrelated note, but perhaps more related to the topic of this thread:
I don't know if it has been mentioned before but nVidia have been working on this for a while. I wonder if this has something to do with the announcement?

http://www.engadget.com/2013/07/24/nvidia-research-near-eye-light-field-display-prototype/

https://research.nvidia.com/publication/near-eye-light-field-displays

http://research.nvidia.com/sites/default/files/webimg.png?1373065524

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/24/2013 06:06 PM   
https://forums.geforce.com/default/topic/648008/3d-vision/should-i-just-buy-amd-for-3d-lol/post/4045271/#4045271
If I had to guess one thing, that's definitely what I would guess.

It would make me laugh if it is and they made it a low res HDMI 1.4 device.

Co-founder of helixmod.blog.com

If you like one of my helixmod patches and want to donate. Can send to me through paypal - eqzitara@yahoo.com

Posted 12/24/2013 09:04 PM   
@RAGEdemon - I think we can all agree that some games are more demanding in 2D than others; I also think that this may translate loosely into how much more demanding (or how big a hit on FPS) they are in 3D.

The test that Bob posted shows what I recall typically happening with underpowered hardware. If your GPU is struggling (>90% utilization) to get a non-vsync-limited FPS number in 2D, your 3D results will be 1/2 or less than that.

While I'd discount many of the results that Eqzitara posted links to as unsupported/unplayable in 3DVision, and some of the others appear to show signs of a vsync limit, the remaining results actually show a high-power GPU (top of the consumer line for Nvidia, in the case of the 780 Ti) taking way more than a 35% performance hit when 3DVision was enabled.

I think I've almost always used SLI with 3DVision; at the time, I added a 2nd GTX460 to play TombRaider: Underworld and Crysis2 because a single 460 was not performing well enough for me.

I suppose I could fire up Metro2033 with a single 780SC card enabled and see if I get a larger percentage of difference between 2D/3D benchmarks. Then again, I play at 720P; I'd expect those who play at 1080P to experience an even greater difference than I do.

i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"

Posted 12/24/2013 11:39 PM   
Well, this is all performance testing, which is something I used to do for a living, so I'm fairly strict on what I consider acceptable experiments.

Earlier results that I've seen, including RageDemon's table, suggest to me that they are actually CPU limited, not GPU limited. The assumption and common wisdom is that games are all about the GPU, and in my experience that is not the case.

As an example of this mental bias regarding GPU we have the 'driver efficiency' suggested by Neil of MTBS3D. Rating games or cards by this 'efficiency' measure is 100% bogus, because you are changing the metric from how well the game can be played into something synthetic that is actually the opposite of what you want.

Take for example a super slow CPU, with a super powerful GPU. The GPU can render anything it is delivered in milliseconds, but the CPU cannot fill the pipe fast enough. This slow CPU case will give you a 100% 'efficiency'. Because you lowered the top bar, the 2D case, to match the 3D case. Both are unplayable, but 'efficient'. This is a horrible idea.
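
To put numbers on that (the fps figures are invented, just to show why the metric misleads):

def efficiency_percent(fps_2d, fps_3d):
    """The 'driver efficiency' style metric being criticised: 3D fps as a share of 2D fps."""
    return fps_3d / fps_2d * 100

# GPU-limited rig (made-up numbers): fast in 2D, halves when rendering both eyes.
print(efficiency_percent(120, 60))    # 50.0 -> "inefficient", yet perfectly playable

# CPU-limited rig (made-up numbers): the CPU caps both modes at the same low rate.
print(efficiency_percent(25, 25))     # 100.0 -> "efficient", yet unplayable either way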


When I see a frame rate drop of 50% from 2D to 3D, then I know that the GPU is being maxed out, and that there are no other bottlenecks of significance.

When you are seeing differences of only 30% from 2D to 3D, this is not a plus, it's a minus. It means that you have another bottleneck in your system that is limiting your high end in 2D. As demonstrated by Arkham, the 3D driver itself is not adding that much overhead.

I like to concentrate deeply on a single well-run test case, not look at lots of fuzzier numbers.

If 3D vision does not render a second frame, giving us a 50% frame rate hit, then how do you explain my Arkham test?


I'm happy to extend the test for SLI at SuperSampling, but that 60 fps game cap is very likely to cause us problems. 2x of 45fps=90, at 1080p. But I'll look.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 12/25/2013 12:40 AM   
Tried the experiment of SuperSampling at 1440p, since my GTX 580s will do that. This runs and is mostly playable, but I think is a good example of how tricky performance testing is. Something else is making it run poorly, and it would not appear to be lack of GPU exactly. When running, instead of being fluid and smooth, it's fully stuttery and bad.

Without trying to narrow it down, my best guess is that this is a lack of RAM on the video card, or possibly PCI bandwidth. (sort of related in that if it has to page out textures, PCI is too slow to load them.)

Here is the test example. Yellow is with 3D ON, Red is 3D toggle-off but not disabled.

http://bo3b.net/bataa/3D_on_off_1440p.JPG


We didn't get double here, but it's also unplayable, so I'm not too concerned. The interesting part to notice is that when I toggled off the 3D, the GPU usage went UP. There is no logical sense to it going up unless there is a bottleneck elsewhere.

I'll look into seeing if I can disable the frame cap.

Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers

Posted 12/25/2013 01:19 AM   
[quote="bo3b"] I'll look into seeing if I can disable the frame cap.[/quote] IIRC, there is something strange going on with the Batman Arkham Asylum engine as stuttering was a common problem at release and is still complained about to this day with a lot of people having no workaround. I seem to remember something about having it run with certain parameters to help such stuttering. Could it be caused by DX11 vs DX9 I wonder. I see from your screenshots that you are already running the MSI (rivatuner) statistics server for OSD. Through MSI settings, you can enable display of GPU memory, if that is any help to you. Batman Arkham Asylum may not be the best benchmark for what we are trying to ascertain. It seems to be overly demanding / not too optimised engine, especially for Stereo3D it would seem. I'm sure you have done this already but please make sure that PhysX is disabled for all benchmarks for obvious reasons ;-) Also, Happy Holidays everyone! Image from 3D Vision blog: [img]http://3dvision-blog.com/wp-content/uploads/2011/12/happy-holidays-3d.jpg[/img]
bo3b said:

I'll look into seeing if I can disable the frame cap.


IIRC, there is something strange going on with the Batman Arkham Asylum engine, as stuttering was a common problem at release and is still complained about to this day, with a lot of people having no workaround. I seem to remember something about running it with certain parameters to help with such stuttering. Could it be caused by DX11 vs DX9, I wonder?

I see from your screenshots that you are already running the MSI (rivatuner) statistics server for OSD. Through MSI settings, you can enable display of GPU memory, if that is any help to you.

Batman Arkham Asylum may not be the best benchmark for what we are trying to ascertain. It seems to have an overly demanding / not very well optimised engine, especially for Stereo3D.

I'm sure you have done this already but please make sure that PhysX is disabled for all benchmarks for obvious reasons ;-)


Also, Happy Holidays everyone!

Image from 3D Vision blog:
http://3dvision-blog.com/wp-content/uploads/2011/12/happy-holidays-3d.jpg

Windows 10 64-bit, Intel 7700K @ 5.1GHz, 16GB 3600MHz CL15 DDR4 RAM, 2x GTX 1080 SLI, Asus Maximus IX Hero, Sound Blaster ZxR, PCIe Quad SSD, Oculus Rift CV1, DLP Link PGD-150 glasses, ViewSonic PJD6531w 3D DLP Projector @ 1280x800 120Hz native / 2560x1600 120Hz DSR 3D Gaming.

Posted 12/25/2013 03:41 AM   