Hey guys,
Since I have been gaming with a friend, I noticed there is some micro stutter. It's not that big a problem, but is there a way to eliminate it? Is this normal for 3D Vision?
I just want to know, since I will be building my 3D rig this year... so please comment on that.
Thanks a lot
This can happen in either 3D or 2D mode, whenever the time to render the next frame exceeds the refresh interval of the monitor (i.e., the frame rate drops below the refresh rate). Having a higher refresh rate helps make it less visible, though.
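To put a number on that: at 120 Hz each frame has an 8.33 ms budget, and any frame that takes longer misses its vsync slot and shows up as a hitch. A quick illustrative sketch (my own, not from any NVIDIA tool) of flagging those frames in a frame-time log:

```python
def stutter_frames(frame_times_ms, refresh_hz):
    """Return indices of frames that took longer than one refresh interval."""
    budget_ms = 1000.0 / refresh_hz  # 8.33 ms at 120 Hz, 16.67 ms at 60 Hz
    return [i for i, t in enumerate(frame_times_ms) if t > budget_ms]

# A mostly smooth 120 Hz run with two slow frames:
times = [8.1, 8.2, 8.0, 19.5, 8.3, 12.0, 8.1]
print(stutter_frames(times, 120))  # -> [3, 5]: those two frames blew the budget
```

The same list run at 60 Hz would report no stutter at all, which is the flip side of the point above: higher refresh rates give each frame a smaller budget to miss.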
If I don't keep up on subsequent replies to a thread, please send me a PM, as I monitor a large number of threads across all the forums.
Hi, for 3D you need the best single-GPU graphics card you can afford, and a fast CPU. I don't experience micro-stuttering playing in 3D Vision on a 580; I'm playing D3 Reaper of Souls right now, Mass Effect 3, and sometimes Borderlands 2, with zero stuttering. But if you have trouble maintaining 60 FPS, cap to 30 FPS instead.
You can also test Maximum Pre-Rendered Frames in NVIDIA Inspector, from 1 to 8, but remember: above 5 it will create too much input lag.
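To show why capping helps: a frame limiter just pads fast frames so every frame takes the same time, trading peak FPS for even pacing. A rough sketch of the idea (illustrative only, not how the driver or a tool like RTSS actually implements it):

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~33.3 ms per frame

def run_capped(render_one_frame, n_frames):
    """Render n_frames, sleeping after each fast frame so none finishes early."""
    for _ in range(n_frames):
        start = time.perf_counter()
        render_one_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)  # pad the frame out to the cap
```

The payoff is consistency: a steady 33 ms per frame looks smoother than frame times bouncing between 16 ms and 30 ms, which is exactly the micro-stutter complaint.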
It would help to know what game you're trying to run and what GPU card(s) you're trying to run it on.
Personally, I can't recall the last time I saw stuttering or micro-stuttering, or which game or rig I was using when I last saw it. (I do recall it happening to me once or twice.)
We were playing Arkham Origins on 2 GTX 660s. Yeah, there was micro stuttering. However, when I build my rig I will be targeting the most demanding games like AC IV: BF, Titanfall... yup, that's why I thought 2 870s would do well. A single GPU would be a hassle in 3D; that's why, since it supports SLI, I would do it.
So in the end, the only way to remove the micro stuttering, as I understand from you, is massive GPU power?
Arkham Origins is an intensive game, which would push that system hard with max settings in any case. But my guess is that the main suspects are SLI, PhysX, and/or tessellation.
Try this: disable SLI, and switch the 2nd card into dedicated PhysX mode using the NVIDIA Control Panel. Firstly, you might be surprised to find that you actually get higher FPS in Arkham Origins this way. That's exactly what I found with my Titans (it makes sense: Arkham Origins has one of the most intensive PhysX implementations of any game to date, as well as poor SLI scaling).
Secondly, this should clear up any PhysX bottleneck issues, and obviously solve any SLI issues. I'd say there's a good chance you won't get micro-stuttering anymore.
If you still do, though, try turning off tessellation, and then do the usual: turn down other settings, close background programs, lower the resolution, update drivers, etc.
Lol, wait a sec. If I disable SLI, does that mean the two GPUs won't be connected anymore? What other games are like that? I wonder how I will manage the FPS in AC IV when I get my build... it's not really nicely optimized...
Yep, Origins is PhysX heavy. Your dual 660 is roughly equivalent to my dual 580, and I would get occasional stutter during particularly foggy/smoky scenes in Asylum.
Definitely worth the experiment of dedicating one card to PhysX, since you have it and know good stutter spots. If that fixes it, you should think about your build maybe including one of your then-old 660s as a dedicated PhysX card. Make sure the motherboard can do it properly.
Acer H5360 (1280x720@120Hz) - ASUS VG248QE with GSync mod - 3D Vision 1&2 - Driver 372.54
GTX 970 - i5-4670K@4.2GHz - 12GB RAM - Win7x64+evilKB2670838 - 4 Disk X25 RAID
SAGER NP9870-S - GTX 980 - i7-6700K - Win10 Pro 1607
Latest 3Dmigoto Release
Bo3b's School for ShaderHackers
[quote="bo3b"]Yep, Origins is PhysX heavy. Your dual 660 is roughly equivalent to my dual 580, and I would get occasional stutter during particularly foggy/smokey scenes in Asylum.
Definitely worth the experiment of dedicating one card to PhysX, since you have it, and know good stutter spots. If that fixes it, you should think about your build maybe including one of your then old 660s as a dedicated PhysX card. Make sure the motherboard can do it properly.[/quote]
Pair a Maxwell 870 with a Kepler 660? No thanks. I won't buy any Kepler cards for my new build - pure Maxwell!.. I will probably be on just my iPad 2 for 6 months (TSMC delay) until the Maxwell GPUs are out... currently I have a GTS 250 and an old Core 2 Quad Q9300 CPU. Sorry, I have no GTX 660s; read my previous posts.
I will not be playing ONLY Arkham Origins... so I will need my SLI for 3D Vision on AC IV: BF.
WhiteSky, man it's hard to get through to you.
The point is that you would run SLI 870 PLUS the 660 as dedicated PhysX duty. Read Volnaiskra's blog for how much difference this can make.
http://volnapc.com/how-much-difference-does-a-dedicated-physx-card-make
And lastly, you should trust us. We actually do know what we are talking about.
[quote="bo3b"]WhiteSky, man it's hard to get through to you.
The point is that you would run SLI 870 PLUS the 660 as dedicated PhysX duty. Read Volnaiskra's blog for how much difference this can make.
http://volnapc.com/how-much-difference-does-a-dedicated-physx-card-make
And lastly, you should trust us. We actually do know what we are talking about.
[/quote]
Now you make more sense, you know... I trust you, but... it's just that I didn't get what you wanted to say... sorry for that.
[quote="Volnaiskra"]Arkham Origins is an intensive game, which in any case would push that system hard with max settings. But my guess is that the main suspects are SLI, PhysX, and/or Tessellation.
Try this: Disable SLI, and switch the 2nd card into dedicated PhysX mode using the nvidia control panel. Firstly, you might be surprised to find that you actually get higher FPS in Arkham Origins this way. That's exactly what I found with my Titans (It makes sense: Arkham Origins has one of the most intensive PhysX implementations of any game to date, as well as poor SLI scaling).
Secondly, this should clear up any PhysX bottleneck issues, and obviously solve any SLI issues. I'd say there's a good chance you won't get microstuttering anymore.
If you still do though, try turning off tesselation, and then do the usual: turn down other settings, close background programs, lower resolution, update drivers, etc.
[/quote]
Surely throwing as much GPU power as you can at it helps, but we must keep in mind that all PC games require some configuring/adjusting to the system(s) we are running them on.
As Volnaiskra points out above, with some experimentation you can find which game settings are causing your stuttering and either turn them down, leave them off, or reconfigure/optimize/adjust your system to better respond to the load the game is creating.
Some games may have scenes, entire chapters, or levels that tend to stutter more than others. (Crysis 3's now-infamous "rope physics" in one of the earlier chapters comes to mind here.)
In some or many of these cases, no amount of GPU power can solve these issues. Sometimes turning settings down helps, and other times it's just something we have to ignore.
[quote="WhiteSkyMage"]Now you make more sense you know... I trust you but...it's just that I didn't get what you wanted to say...sorry for that.[/quote]Cool man.
SLI 870 with dedicated PhysX 660 on a PLX motherboard with overclocked Haswell CPU. SWEET rig. Breaks the bank, but as good as it gets.
(Sorry for that 660 reference, I didn't quite catch that was your buddy's system. Steal one of his 660s when he's not looking, he'll never notice. :-)
More to the point: For AC IV, and BF, you really need a good CPU. Both are CPU bound, not GPU bound. Even in 3D at 1440, the CPU matters more.
So much more that if you needed to trim for budget, you can probably start with a single 870/880 and only add SLI if you need it.
[quote="bo3b"]
SLI 870 with dedicated PhysX 660 on a PLX motherboard with overclocked Haswell CPU. SWEET rig. Breaks the bank, but as good as it gets.
More to the point: For AC IV, and BF, you really need a good CPU. Both are CPU bound, not GPU bound. Even in 3D at 1440, the CPU matters more.
[/quote]
Maybe I didn't mention it - the entry-level 6-core i7 5820K (LGA 2011-v3) Haswell-E CPU? That's the one I am thinking of... do you think I need the 8-core one? (I will be doing video capture/editing using ShadowPlay and some editor - I will find out later which one - but I think a 6-core CPU is actually a "sweet spot" between a gaming and a workstation CPU.)
For the PhysX, however, I might get the GTX 750 Ti... then I will have a PURE Maxwell system :) Or do you think a 660 would be a better choice?
However, what about the other NVIDIA effects? HBAO, TXAA and such? Do they need "dedication" as well? If yes, then I might put that dedicated GPU to do PhysX and everything else except the 3D rendering.
My question here is - [b]can I split the effects?[/b] - so that the ones requiring more performance are taken care of by the SLI cards and the others by the dedicated card... I just want to make sure that this micro-stuttering is taken care of...
For the Haswell-E CPU, the only reason to not go with the 8 core version would be cost. But it's a pretty big delta. Intel always bends you over for the absolute top chip.
I'd say it depends upon how much video editing you do. You will absolutely see a significant difference from 6 to 8 cores in video editing, compression, conversion. If you are getting paid for video work, there is no question it is worth the money. If you do it for fun, it's time, but may not matter enough to justify the cost.
Game-wise there is no drawback to 8 cores; they mostly just won't get used, so it's a waste of money. Four cores with an i5 (what I run, no Hyper-Threading) is probably not quite right for the future, though: we already have a couple of games that can use more threads - BF4, Rage, probably Watch Dogs.
Seems to me the 6 core (12 thread with HyperThreading) is plenty for gaming for at least a couple of years.
The only slight caveat is that with more cores and Hyperthreading it's hard to get much overclock. A lot of games are essentially single threaded, and you mentioned AC4, which suffers from this. This is why I went with a 4 core chip, so that I could overclock the primary core better. Not sure it was the right choice though, because overclocking Haswell is pretty limited no matter what.
If I were buying new today, and video editing and re-encoding is on the agenda, I would go with your plan of 6 core Haswell-E.
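That single-threaded caveat is just Amdahl's law: if most of a game's frame work lives on one main thread, extra cores barely help, which is why per-core clock speed matters for games like AC4. A quick back-of-envelope (the parallel fractions here are made-up figures for illustration, not measurements of any real engine):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Best-case speedup on n_cores when only part of the work parallelizes."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# A heavily single-threaded game (assume only 30% of the work is parallel):
print(round(amdahl_speedup(0.3, 4), 2))  # -> 1.29
print(round(amdahl_speedup(0.3, 8), 2))  # -> 1.36, doubling the cores gains ~5%
# A well-threaded engine (assume 90% parallel):
print(round(amdahl_speedup(0.9, 8), 2))  # -> 4.71
```

With a 30% parallel fraction, going from 4 to 8 cores buys almost nothing, while raising the clock of the one busy core helps every frame - the trade-off described above.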
For the 660 vs. the 750 Ti as a dedicated PhysX card - I honestly have no idea. Probably the 750 Ti is overkill, but the only way to know would be testing, and I haven't seen any testing of that level of card for PhysX. The 660 is a little more expensive than the 750 Ti, and the 750 Ti is more power efficient, which might matter with 3 cards on your PSU.
The 750 Ti might also be of value for video encoding. If the software you use would also pull in a third card, that would make it a clear winner.
The 750 Ti is probably the right choice, but I can't say without any testing results.
Lastly, on the effects versus other rendering techniques - there is no way to split those out to another card, so whatever your primary GPU is, that's what gets used. If you SLI, you get a bump here, and for 3D Vision it's nearly sure to be worth it.
I would only go single card for 3D Vision if cost were an issue. And even there, I would probably go with less than top of the line as a better value proposition.
With SLI Maxwell, and Haswell-E, you will be able to run most everything with no stutter. Buuuut... The reality is that you will still hit stutter in some games, like AC4, because of bad optimizations that force everything to a single core.
Hope that helps.
i7-2600K-4.5Ghz/Corsair H100i/8GB/GTX780SC-SLI/Win7-64/1200W-PSU/Samsung 840-500GB SSD/Coolermaster-Tower/Benq 1080ST @ 100"