3Dmigoto now open-source...
Thanks DSS for the 1.3.10 update!! The include/exclude will be very handy for a "cleaner" d3dx.ini.
It will also be easier to maintain the universal fixes when 3Dmigoto needs to be upgraded.

For the overlay I made, I have a question. I made an overlay.ini to hold everything for the overlay (presets and F1 help), but this section needs to stay in the d3dx.ini, otherwise it doesn't work (it doesn't work in the overlay.ini):

[CommandListUpdateActivationTime]
y3 = time
post y3 = time


Any idea? Or is that just the way this works?

I have a preset in the d3dx.ini that calls that command list... do I also need to put it in the overlay.ini?

MY WEB

Helix Mod - Making 3D Better

My 3D Screenshot Gallery

Like my fixes? you can donate to Paypal: dhr.donation@gmail.com

Posted 04/15/2018 02:44 PM   
DJ-RK said:-Built in key for cycling through different marking_mode options
I should be able to add that fairly easily :)

-Ability to set key presets for different frame analysis analyse_options/dump settings. ie. One for just dumping out a HUD shader, another for dumping out all shaders after a certain one (ie. AO), and another for just doing a full dump
Hmmm... I need to think about this one, since the settings can be spread out over a number of sections... Maybe I could add suffixes to analyse_frame, analyse_options and dump for the different modes...

Edit: Actually, is that first thing already doable just by setting up my own preset keys?
No, keys/presets can currently only set IniParams, separation, convergence and run command lists, and command lists can't set that either.
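For reference, a minimal sketch of the sort of thing a key binding can currently do, per the answer above (the section name, key choice and values here are hypothetical):

```ini
; Hypothetical key: sets an IniParam and convergence, and runs a
; command list - currently the full extent of what keys can do.
[KeyHUDToggle]
key = F4
type = toggle
x = 1.0
convergence = 1.5
run = CommandListExample
```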

masterotaku said:Performance in Grim Dawn is the same in 1.3.10 as in 1.3.7.
Thanks for checking that - that gives me some confidence that I can use the same approach to track when shaders are freed to make parts of the code more maintainable :)


I tried the scissor clipping adjustment in two games:

- In FFXII, two of the four HUD pixel shaders complained about "return;", saying that o0 and o1 weren't completely initialized. Why don't the other two complain about this, I wonder?

Annoying. You can just remove the "return" statement - the "discard" instruction is enough for the scissor clipping.

That "* 1.3;" you do in the stereo adjustment wasn't good in this case. Just *1 is what this HUD needs.

I should probably have clarified - I was blindly copying the same HUD adjustment that had been used in the vertex shader in the NieR Automata fix, because the adjustment in the pixel shader needed to match the vertex shader, and that *1.3 was used in that fix. For other fixes you would match whatever the vertex shader adjustment was, or if you were applying this to a shader that the driver stereo corrected instead of you (e.g. decals), you would need to match the driver's stereo correction.
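As a reminder of what "the driver's stereo correction" looks like, here is a hedged HLSL sketch of the standard formula as it usually appears in fixes (StereoParams is the stereo texture 3DMigoto injects; pos stands in for a clip-space position variable):

```hlsl
// Standard 3D Vision stereo correction as typically applied in fixes:
// shift clip-space X by separation * (depth - convergence).
float4 stereo = StereoParams.Load(0);
float separation = stereo.x;
float convergence = stereo.y;
pos.x += separation * (pos.w - convergence);
```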

Too bad that text elements get extra depth for some reason (same shader as other normal depth things) so they can't use the clipping area correctly.

That often means these are drawn to an off-screen render target, then that is drawn to the screen and both of those steps have been stereoised resulting in the text being double-adjusted. In most cases, adding render target size filtering can resolve this to only stereoise the HUD when it is drawn to the screen (unless the off-screen render target was the same size as the resolution):

[ShaderOverrideCrosshair]
hash = f6870f9fe26b75c4
x3 = rt_width
y3 = rt_height
z3 = res_width
w3 = res_height


... vertex shader body ...

float4 rt_filter = IniParams.Load(int2(3, 0));
if (any(rt_filter.xy != rt_filter.zw))
    return;

... HUD adjustment ...


- In Grim Dawn, it's a mess. The code I use in FFXII disables most of the HUD PS shader most of the time, but clipping varies as I move in the game. It's strange, as if "v0.xy" changed depending on world position and not the screen position of the HUD... But the VS is normal when I add depth and stuff. I'll have to try more things.

Make sure you are using whatever input register corresponds to SV_Position, which may or may not be v0. If none are using SV_Position already you can just add a new one.
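A hedged sketch of what adding an SV_Position input to a decompiled pixel shader might look like, when no existing register uses it (the register numbers, IniParams slot and clip rectangle layout are all placeholders):

```hlsl
// Hypothetical decompiled pixel shader with an SV_Position input added.
void main(
    float4 v0 : TEXCOORD0,
    float4 v1 : SV_Position0,  // added input: screen-space pixel position
    out float4 o0 : SV_Target0)
{
    // hypothetical slot holding x1,y1,x2,y2 of the allowed screen area
    float4 clip = IniParams.Load(int2(4, 0));
    if (v1.x < clip.x || v1.y < clip.y || v1.x > clip.z || v1.y > clip.w)
        discard;
    o0 = float4(0, 0, 0, 0);  // placeholder for the original shader body
}
```

The clipping test reads v1.xy (the new SV_Position input) rather than v0.xy, which is the point of the advice above.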

2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

Posted 04/15/2018 03:25 PM   
DHR said:For the overlay I made, I have a question. I made an overlay.ini to hold everything for the overlay (presets and F1 help), but this section needs to stay in the d3dx.ini, otherwise it doesn't work (it doesn't work in the overlay.ini):

[CommandListUpdateActivationTime]
y3 = time
post y3 = time


Any idea? Or is that just the way this works?

I have a preset in the d3dx.ini that calls that command list... do I also need to put it in the overlay.ini?
This is the namespacing I was talking about - when you move [CommandListUpdateActivationTime] into ShaderFixes\overlay.ini, 3DMigoto will rename it to [CommandList\ShaderFixes\overlay.ini\UpdateActivationTime], which you can see in the d3d11_log.txt. If you refer to that from within overlay.ini you don't need to do anything special, but if you refer to it from a different configuration file you need to use the full namespace, like:

[PresetFoo]
run = CommandList\ShaderFixes\overlay.ini\UpdateActivationTime
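Conversely, a sketch of referring to the same command list from inside ShaderFixes\overlay.ini itself, where no prefix is needed (PresetBar is a hypothetical section name):

```ini
; In ShaderFixes\overlay.ini - the short name still works here,
; because both sections share the overlay.ini namespace:
[PresetBar]
run = CommandListUpdateActivationTime
```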


Posted 04/15/2018 03:36 PM   
Thanks DSS!! It's working now... I suppose I did a quick read of the release notes and didn't pay proper attention. I'll read them again.


Posted 04/15/2018 04:35 PM   
I have solved (finally!) my FFXII problem. The VS stereoizing effect for HUD depth was affecting two pixel shaders, and only one of them was the one I needed. So I sent the other through a different iniparam route and all is well :) (I also disabled clipping for it). I discovered it thanks to your suggestion about the render target resolution, which in this case is the same as the screen resolution, but it made me test so many things that in the end I noticed the real issue. Thanks. I'll update the fix on the blog soon-ish.

About Grim Dawn, I'm sure about it. It's the PS linked to the correct VS. o0 is SV_POSITION in the VS, so I used v0 in the PS because it's defined as "float4 v0 : SV_Position0,". I'm not sure what's going on in this game, but I'll try to solve it over time.

CPU: Intel Core i7 7700K @ 4.9GHz
Motherboard: Gigabyte Aorus GA-Z270X-Gaming 5
RAM: GSKILL Ripjaws Z 16GB 3866MHz CL18
GPU: MSI GeForce RTX 2080Ti Gaming X Trio
Monitor: Asus PG278QR
Speakers: Logitech Z506
Donations account: masterotakusuko@gmail.com

Posted 04/15/2018 05:30 PM   
The assembler is used in the universal fixes right?

Is the speed of the assembler ever a problem?
In the TEKKEN7 fix you get some slowdown first time you run it.
Think the issue was solved by precompiling all shaders, which is not possible for a universal fix.

Other than that I just want to know if my small contribution to 3Dmigoto is still keeping everybody happy.
At least I'm really proud of it.

Feels like the top 10 people in the community contribute 99% of the fixes.

Thanks to everybody using my assembler it warms my heart.
To have a critical piece of code that everyone can enjoy!
What more can you ask for?

donations: ulfjalmbrant@hotmail.com

Posted 04/15/2018 05:56 PM   
Request: more iniparams. The current number is 8 sets of xyzw. Was it on purpose, like name length limitations in some games?

In some games that need a lot of tinkering I'm on the verge of running out of them, and I'm not sure to what extent shader overrides can share them without affecting each other. Like using rt_width in multiple shaders for the new clipping feature, passing ps-t0 to another variable, etc.


Posted 04/16/2018 08:34 AM   
My tools are not currently setup to build 3Dmigoto.

I don't believe there is a real limitation to how many INI parameters can be used but I would probably try 16 sets of xyzw.

The simplest thing to do would be just change the number of rows and adjust elsewhere accordingly.
Unless I'm mistaken, 3Dmigoto still uses VS2013 and relies on a very old Windows SDK.

I find having multiple versions of Visual Studio to be a bit messy.

Bottom line: it should be possible, but I think DSS will make such a build quicker than me since, as mentioned, I currently can't compile 3Dmigoto.

edit:
As a co-developer it's not ideal to not be able to compile, I blame my recent computer crash among other things.


Posted 04/16/2018 09:16 AM   
masterotaku said:Request: more iniparams. The current number is 8 sets of xyzw. Was it on purpose, like name length limitations in some games?
We can add more, but there is a cost paid every time these are updated during a frame, which will be higher the more of them there are. If we increase this I'm not going to pick an arbitrary limit (because how long is a piece of string?), but will allow it to be set in the d3dx.ini and the default will remain 8 x 4 component values.

I will set a not so arbitrary limit of 1 x 4K page though (256 x 4 component values, i.e. 256 × 4 floats × 4 bytes = 4096 bytes) and make sure it is aligned, because when you're talking hardware generally part of the cost is paid per-page, and surely 1024 values is more than enough?

In some games that need a lot of tinkering I'm in the verge of running out of them, and I'm not sure to what extent shader overrides can share them without affecting each other. Like using rt_width in multiple shaders for the new clipping feature, passing ps-t0 to another variable, etc.

Any that you set in a ShaderOverride just before using it can be reused, but any that you use to store a persistent setting cannot.

e.g. out of the IniParams that are used by the shaders shipped with 3DMigoto:

x6, y6, z6, w6, y7, z7, w7 are safe to reuse provided you don't need a value stored in these to be persistent, since the mouse shader will clobber them whenever it is run.

x7 is *NOT* safe to reuse because it is a setting used by the SBS shader that needs to remain persistent.

The examples you gave should be safe to reuse.
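For example, a sketch of the safe reuse pattern described above (the hashes are placeholders): each ShaderOverride sets x3/y3 immediately before its own shader reads them, so the slots never need to hold a persistent value and can be shared:

```ini
[ShaderOverrideHudA]
hash = 0123456789abcdef
x3 = rt_width
y3 = rt_height

[ShaderOverrideHudB]
hash = fedcba9876543210
x3 = rt_width
y3 = rt_height
```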

...But...

I actually don't think that blindly expanding IniParams is a particularly good solution. They are already pretty unwieldy to keep track of with just 32 of them, they don't self-describe what they are used for, it's difficult to work out which are safe to use and which might break something else, and they aren't namespaced. I don't mind expanding it because it's fairly straightforward to do, but I feel that it is a band-aid approach and we would be better served by something else (we already expanded it from 4 to 32 back in the day, but it is used for a lot more now).

I'm also not entirely sure what the historical reason for using a Texture1D for IniParams was (I suspect we just didn't know better at the time) - it's always been a bit of an odd choice, and it seems to me that we would have been better served by a constant buffer or a structured buffer where we could assign names to each of the parameters. Avoiding a constant buffer might be understandable since there are only a fairly limited number of slots (and the nvidia driver uses one of those), but there's no reason we couldn't have used a structured buffer. We can't really change the type of IniParams now, but if we're going to expand *something* I'd rather it be something that addresses the issues with IniParams.

What I want to do:
- Introduce separate named variables for any settings that need to be persistent (e.g. the SBS shader output mode), set to an ini param only when actually needed in a shader, so that you don't need to worry about which params are used elsewhere provided you use this consistently throughout the d3dx.ini (we could even add an option to zero out any ini params that weren't set, to catch any forgotten persistent uses, but that can't be the default for backwards compatibility reasons). This is also a basic requirement of the upcoming conditional logic.

- Expand the custom resources to have the same ease of setting components inside them as ini params do, and perhaps even easier by defining a structure for them with named fields. These can already be used as constant buffers or structured buffers.

- Have the SBS and mouse shaders use a custom resource instead of IniParams so that IniParams is entirely yours to do with as you see fit.

- Maybe: Add a way to bind a named variable to a particular IniParams or custom Resource slot, so that the GPU memory is automatically updated whenever the variable is updated and the Ini Parser can yell at you if you ever try to use that slot for something else.


Posted 04/16/2018 08:45 PM   
Flugan said:The assembler is used in the universal fixes right?
Yes, anything that uses ShaderRegex uses the assembler - I intentionally did not support HLSL for reliability and performance reasons.

Is the speed of the assembler ever a problem?
It hasn't been a significant factor for me, and it is way faster than the HLSL compiler.

Think the issue was solved by precompiling all shaders, which is not possible for a universal fix.
We can add caching of these, but they will also need to cache associated command lists and the cache will need to be invalidated by the d3dx.ini timestamp (or whichever ini file contained the ShaderRegex definition). I don't think they should be cached in the same place as the regular shaders.

Also, you can use the vs2017 branch - we still use vs2013 for official releases, but the vs2017 branch is working just fine. Port status is here: https://github.com/bo3b/3Dmigoto/issues/92


Posted 04/16/2018 09:14 PM   
Having trouble using the preset exclude function, attempting to do so exactly as in your example just gives an error "WARNING: Unrecognised entry: exclude=presetgameconvergence". Need to get this working as I'm having trouble juggling presets without it.

Also, I found that I can only have a shader/texture trigger for one preset at once. Any possibility of getting this appended that it can actually attempt to trigger 2 different presets? Was hoping to have one shader be used to apply for 2 presets, one of them only requiring that one shader and the other preset requiring that shader plus another (and in that secondary shader I would use an exclude command for the first preset to ensure it doesn't get triggered).

Also, any way to use priority matching outside of fuzzy matching? Any chance of adding a priority flag to the presets sections?

Lastly, and I'm guessing this is a longshot, but any possibility of the absence of a shader/texture being a trigger for a preset? Currently just using a workaround of the driver set convergence as the "preset" (so when no other presets are loaded it reverts back to this), but that's not ideal for all scenarios.

3D Gaming Rig: CPU: i7 7700K @ 4.9Ghz | Mobo: Asus Maximus Hero VIII | RAM: Corsair Dominator 16GB | GPU: 2 x GTX 1080 Ti SLI | 3xSSDs for OS and Apps, 2 x HDD's for 11GB storage | PSU: Seasonic X-1250 M2| Case: Corsair C70 | Cooling: Corsair H115i Hydro cooler | Displays: Asus PG278QR, BenQ XL2420TX & BenQ HT1075 | OS: Windows 10 Pro + Windows 7 dual boot

Like my fixes? Dontations can be made to: www.paypal.me/DShanz or rshannonca@gmail.com
Like electronic music? Check out: www.soundcloud.com/dj-ryan-king

Posted 04/17/2018 12:38 AM   
DJ-RK said:Having trouble using the preset exclude function, attempting to do so exactly as in your example just gives an error "WARNING: Unrecognised entry: exclude=presetgameconvergence". Need to get this working as I'm having trouble juggling presets without it.

My bad - the command is "exclude_preset", not "exclude". I'd removed the exclude from what I was working on, so when I copied it for the example I added it back in, and got it wrong.

Also, I found that I can only have a shader/texture trigger for one preset at once. Any possibility of getting this appended that it can actually attempt to trigger 2 different presets? Was hoping to have one shader be used to apply for 2 presets, one of them only requiring that one shader and the other preset requiring that shader plus another (and in that secondary shader I would use an exclude command for the first preset to ensure it doesn't get triggered).
This is working for me:

[ShaderOverrideFrostedGlassDistort]
hash = 3334daf5ba8e9d88
preset = Test
preset = Test2

[PresetTest]
run = CommandListTest

[PresetTest2]
run = CommandListTest2


Also, any way to use priority matching outside of fuzzy matching? Any chance of adding a priority flag to the presets sections?
Can you be a little clearer about what you want here? In most cases we use the section names to determine the order where this could affect the result, but it looks like Presets missed that memo and are using an unordered_map. I'll change that to a sorted map for the next release. All that will do, however, is make sure they always run in a consistent order - it won't provide any sort of mutual exclusion between them. (Technically that is all match_priority does as well, but a later preset could trump a value set by an earlier one, or an earlier preset could maybe run a command list that excludes a later one.)
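
As a rough sketch of what consistent ordering does (and doesn't) give you - section names and the IniParam here are hypothetical:

[PresetAAA]
x = 1

[PresetZZZ]
x = 2

With sorted ordering, PresetZZZ always runs after PresetAAA, so when both are active x ends up as 2. Both presets still run, though - the later one just wins on any value they both set, which is not the same thing as mutual exclusion.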

Lastly, and I'm guessing this is a longshot, but any possibility of the absence of a shader/texture being a trigger for a preset? Currently just using a workaround of the driver set convergence as the "preset" (so when no other presets are loaded it reverts back to this), but that's not ideal for all scenarios.
Haven't tested, but this should work:

[Present]
preset = foo

[ShaderOverrideFoo]
hash = ...
exclude_preset = foo

2x Geforce GTX 980 in SLI provided by NVIDIA, i7 6700K 4GHz CPU, Asus 27" VG278HE 144Hz 3D Monitor, BenQ W1070 3D Projector, 120" Elite Screens YardMaster 2, 32GB Corsair DDR4 3200MHz RAM, Samsung 850 EVO 500G SSD, 4x750GB HDD in RAID5, Gigabyte Z170X-Gaming 7 Motherboard, Corsair Obsidian 750D Airflow Edition Case, Corsair RM850i PSU, HTC Vive, Win 10 64bit

Alienware M17x R4 w/ built in 3D, Intel i7 3740QM, GTX 680m 2GB, 16GB DDR3 1600MHz RAM, Win7 64bit, 1TB SSD, 1TB HDD, 750GB HDD

Pre-release 3D fixes, shadertool.py and other goodies: http://github.com/DarkStarSword/3d-fixes
Support me on Patreon: https://www.patreon.com/DarkStarSword or PayPal: https://www.paypal.me/DarkStarSword

Posted 04/17/2018 02:38 AM   
Thanks for getting back to me so quickly.

DarkStarSword said:
Lastly, and I'm guessing this is a longshot, but any possibility of the absence of a shader/texture being a trigger for a preset? Currently just using a workaround of the driver set convergence as the "preset" (so when no other presets are loaded it reverts back to this), but that's not ideal for all scenarios.
Haven't tested, but this should work:

[Preset]
preset = foo

[ShaderOverrideFoo]
hash = ...
exclude_preset = foo



Wait, so maybe I'm misunderstanding the function behind the exclude_preset. I thought it prevented foo from being triggered when the shader is active, but are you saying it triggers foo when the shader is inactive?

Edit: Or... wait, are you triggering a preset within a preset? Like, a preset inception? :D


Posted 04/17/2018 03:16 AM   
Gah, what's wrong with me lately? That was supposed to read [Present], not [Preset] - as in, trigger the preset on every present call (every frame) unless it is excluded (and I gave it a quick test and it works).

In my defence, I foresaw this mixup happening, but wasn't around when presets were finally implemented to raise this again:
https://github.com/bo3b/3Dmigoto/issues/33#issuecomment-157266552
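
Annotated, the inverted trigger works like this (the hash stays a placeholder):

[Present]
; Runs every frame, so preset foo is applied by default
preset = foo

[ShaderOverrideFoo]
hash = ...
; Whenever this shader is seen during the frame, foo is suppressed
exclude_preset = foo

The net effect is that foo is only active in frames where the shader is absent - the inverse of a normal trigger.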


Posted 04/17/2018 04:48 AM   
Ahhh, yeah, that totally makes sense. Also totally works! Thanks. :)


Posted 04/17/2018 05:50 AM   