SimHaven

Hi, welcome to SimHaven, the friendly worldwide forum for flight simulation.

This international haven is for enthusiasts of flight simulation, especially those interested in creating their own scenery or aircraft repaints, who are looking for a friendly, informative clubhouse atmosphere.


    nvidia inspector manual


    Admin
    Admin

    Posts : 227
    Join date : 2015-09-14
    Location : close to BGBW

    nvidia inspector manual

    Post by Admin on Mon Jan 23, 2017 12:12 pm

    This may help someone make sense of Inspector, but then again it may not.

    Thanks to guru3D.com. If the manual below hasn't totally turned your brain into mush, then have a look at this site afterward,
    by the guy who made the program: http://wiki.step-project.com/Guide:NVIDIA_Inspector

    And if at the end you feel that you understand it all, drop me a line and explain it to me please :)


    List of basic options and explanations:

    1 - Compatibility

       Ambient Occlusion Compatibility: This is where you would enter or paste an 8-digit hexadecimal code (always with the prefix "0x") to get HBAO+ to work with any given game.
       There is an official list of flags built into the driver, configured for specific games (though not necessarily well). These flags are not a combination of functions put together to form a flag as with anti-aliasing, but rather are pointers to a set of programmed parameters for each flag.
       The first 2 bytes of the flag are reserved for flags pertaining to DX10 and above, while the second set of bytes is reserved for flags pertaining to DX9 only.
       Code:

       0x00000000 < DX10+, 0x00000000 < DX9

       Each of the 8 slots can take one of 16 values (0-F):
       Code:

       0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F

       This gives a total of 16^4 = 65536 potential flags for each set of APIs (see the sketch below).
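       To make the arithmetic concrete, here is a minimal Python sketch (the flag value below is an arbitrary example, not a real game flag) showing how an 8-digit AO flag splits into its DX10+ and DX9 halves:
       Code:

       # Minimal sketch: split an 8-digit AO compatibility flag into its two halves.
       # The flag value is an arbitrary example, not a real game flag.
       flag = 0x002F0001                    # always written with the "0x" prefix

       dx10_half = (flag >> 16) & 0xFFFF    # first 4 hex digits: DX10 and above
       dx9_half = flag & 0xFFFF             # last 4 hex digits: DX9 only

       print(f"DX10+ half: 0x{dx10_half:04X}")   # -> 0x002F
       print(f"DX9 half:   0x{dx9_half:04X}")    # -> 0x0001

       # Each half is 4 hex digits, each digit one of 16 values (0-F),
       # so each half has 16**4 = 65536 possible flag values.
       print(16 ** 4)                       # -> 65536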
       Antialiasing Compatibility: This is where you would enter or paste an 8-digit hexadecimal code (always with the prefix "0x") to get various forms of anti-aliasing to work with any given DX9 game.
       This hex code, unlike AO Compatibility, is actually a combination of possible functions that tell the driver what kind of buffer formats to look for, how to process them, and other things (a rough illustration follows below).
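       As a rough illustration of the "combination of functions" idea, here is a hypothetical sketch; the bit names and values are invented purely for illustration and are not the driver's real bit layout:
       Code:

       # Hypothetical sketch only: these names and bit positions are made up to
       # illustrate how one compatibility flag can combine several functions.
       BUFFER_FORMAT_HINT = 0x00000004      # made-up bit: which buffer formats to look for
       PROCESSING_MODE    = 0x00400000      # made-up bit: how to process them

       flag = BUFFER_FORMAT_HINT | PROCESSING_MODE   # combine functions with a bitwise OR
       print(f"0x{flag:08X}")                        # -> 0x00400004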
       Antialiasing compatibility (DX1x): This is where you would enter or paste an 8 digit Hexadecimal code to force AA in DirectX10+...IF YOU HAD ONE!!
       Nvidia's effort sadly fell off the wagon here. There are few functions available for this and none of them work in any meaningful way. Heck, I don't even think the ones it has that are set up for early DX10 games work all that well. They only work for MSAA for the most part from what I remember.
       Antialiasing Fix: This is somewhat of a mystery. Initially this was apparently made just for an issue relating to Team Fortress 2 (as such, it was originally known as the TF2Fix). But as it turned out, this affected a very large number of games.

       Currently the only description of the function available is
       Code:

       "FERMI_SETREDUCECOLORTHRESHOLDSENABLE" (Fermi> Set Reduce Color Thresholds Enable)

       This would suggest it's a Fermi issue, but it really applies to at least Fermi and everything after it.
       It's also interesting that turning the AA fix On actually disables this function: the default setting of "Off" is a value of 0x00000001, while the On value is 0x00000000 (Team Fortress 2). (It should say just ON in Inspector, but there is currently a bug that just gives you the hex code here.) (*As of 05/22/16)

       DO NOT enable this globally, as what it does varies on a per-game basis. The Anti-Aliasing flags thread notes whether a game needs it or whether it causes issues.
       SLI Compatibility bits: This is where you would enter or paste an 8-digit hexadecimal code (always with the prefix "0x") to get SLI working in DX9 applications, if the application doesn't already have a flag in the driver, or if the official flag doesn't work well and is of poor quality.

       Like AA compatibility bits, these flags are a combination of functions within the driver.
       SLI compatibility bits (DX10+DX11): This is where you would enter or paste an 8-digit hexadecimal code (always with the prefix "0x") to get SLI working in DX10+ applications, if the application doesn't already have a flag in the driver, or if the official flag doesn't work well and is of poor quality.

       Like AA compatibility bits, these flags are a combination of functions within the driver.
       SLI compatibility bits (DX12): This is a new one. I assume it is the same as the other two. Currently there are only 2 flags in the driver, but as more DX12 games come out I'm sure there will be more, and it should be interesting to see how it plays out.

    2 - Sync and Refresh

       Frame Rate Limiter: This setting will enable the driver's built-in frame rate limiter at a series of pre-defined, non-whole-number values. It's worth noting that the quality of it has historically been a bit spotty. The behavior was changed at some point so whole numbers aren't possible. I think it's simply due to how it works (some sort of prediction or more complicated system that has never been exposed to the user); before the change, the limiter with whole values would never really stick to those numbers.
       I think it's worthwhile to do more investigating on the limiter as it is now. The 60 FPS setting is 59.7 or 60.7; with Vsync enabled it might work differently too.
       Personally though, through all of my experience, Unwinder's RTSS generally is more useful, as it lets you set the value yourself and is more consistently stable.
       GSYNC Application Mode: When using GSYNC it is important to keep in-game Vsync disabled to avoid conflicts.
       GSYNC Requested State:
       GSYNC Global Feature:
       GSYNC Global Mode:
       GSYNC Indicator Overlay:
       Maximum pre-rendered frames: Taken from
       Quote:
       The 'maximum pre-rendered frames' function operates within the DirectX API and serves to explicitly define the maximum number of frames the CPU can process ahead of the frame currently being drawn by the graphics subsystem. This is a dynamic, time-dependent queue that reacts to performance: at low frame rates, a higher value can produce more consistent gameplay, whereas a low value is less likely to introduce input latencies incurred by the extended storage time (of higher values) within the cache/memory. Its influence on performance is usually barely measurable, but its primary intent is to control peripheral latency.
       Values range from 1-8; the driver default is 3 and I would not recommend higher than that. A value of 1 or 2 will reduce input latency further at the cost of slightly higher CPU load.
       When using 1/2 Refresh Rate Vsync, a value of 1 (sometimes 2 will suffice, but 1 generally reduces latency more) is essentially required, as 1/2 sync will introduce significantly more input latency.
       In addition, setting "30 fps ( Frame Rate Limiter v2 )" may also help reduce input latency further when using this. You may want to try the V2 30 FPS limit with 60 Hz sync as well; it might have better latency.
       Triple buffering: Enables Triple Buffering for Vsync, but ONLY for the OpenGL API. For a rundown of TB, here is an article. If you wish to enable TB for D3D APIs you can download and use D3DOverrider.

       It's worth noting that GSYNC makes the concept of Double and Triple Buffering entirely irrelevant. This is only for standard sync monitors.
       Vertical Sync Smooth AFR behavior:
       Quote:
       Smooth Vsync is a new technology that can reduce stutter when Vsync is enabled and SLI is active.

       When SLI is active and natural frame rates of games are below the refresh rate of your monitor, traditional vsync forces frame rates to quickly oscillate between the refresh rate and half the refresh rate (for example, between 60Hz and 30Hz). This variation is often perceived as stutter. Smooth Vsync improves this by locking into the sustainable frame rate of your game and only increasing the frame rate if the game performance moves sustainably above the refresh rate of your monitor. This does lower the average framerate of your game, but the experience in many cases is far better.
       Vertical Sync Tear Control: This controls, when a frame drop is detected, whether Vsync should be disabled to maintain performance or whether sync should drop to the next syncable rate. At 60 Hz without Adaptive, the frame rate will drop to 30 FPS because it's the next syncable rate (1/2).
       You can use TB as mentioned above instead of Adaptive, or, as long as you ensure you have enough power to sustain the performance you are aiming for, it shouldn't be an issue.

       Adaptive in my experience can be hit and miss, but so can Triple Buffering. In some cases TB can increase Input Latency, stay the same or decrease it. (Despite what anyone may say).
       It's up to you what you prefer to use. I prefer to not use adaptive. And again GSYNC makes this irrelevant.
       Vertical Sync: Controls whether Vsync can be enabled for any given application. Typically it's set to "Application Controlled", which means it's up to the individual application itself to enable/disable or offer the option for Vsync.
       If you have a GSYNC monitor and install a driver, this is forced to ON automatically. Disabling it will disable Gsync globally.
       One recent example is Fallout 4. The game has no Vsync option, but it is forced on no matter what.
       You can disable it by setting this to "Force Off" on the Fallout 4 profile.
       Quote:
       Use 3D Application Setting. (Explained above)
       Force Off - Forces Vsync off.
       Force On - Forces Vsync on. Usually forces it to your set refresh rate, though this won't help games that are locked to a given framerate to run at a higher rate.
       1/2 Refresh Rate - Forces 1/2 Vsync. On a 144 Hz monitor this is 72 Hz; 120 Hz/60, 60 Hz/30, 50 Hz/25. This is very useful for playing at 30 FPS, or for 30 FPS games that usually have some microstutter problems when syncing at 60 Hz. Remember to use a lower Pre-Rendered Frames setting when using it. (In addition, setting "30 fps ( Frame Rate Limiter v2 )" may also help reduce input latency further!)
       1/3 Refresh Rate - Forces 1/3 sync. 144 Hz/48, 120 Hz/40, 60 Hz/20, 50 Hz/not possible with whole numbers.
       1/4 Refresh Rate - Forces 1/4 sync. 144 Hz/36, 120 Hz/30, 60 Hz/15, 50 Hz/not possible with whole numbers. (A quick sketch of these divisions follows below.)
       Remember, GSYNC makes this irrelevant (AFAIK). It is also important to keep in-game Vsync disabled to avoid conflicts with GSYNC enabled. Though in one specific case, with The Division, it might not work right.
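       The fractional sync rates above are simple division; here is a quick sketch reproducing the table (only 50 Hz fails to give whole numbers at 1/3 and 1/4):
       Code:

       # Reproduce the fractional Vsync rates listed above for common refresh rates.
       for hz in (144, 120, 60, 50):
           for divisor in (2, 3, 4):
               rate = hz / divisor
               note = "" if rate.is_integer() else "  (not possible with whole numbers)"
               print(f"1/{divisor} of {hz} Hz = {rate:g} FPS{note}")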

    3 - Antialiasing

       Antialiasing - Behavior Flags: These mostly exist as a method of governing usage of AA from Nvidia Control Panel (they are mostly useless today, except for the ones that disable all usage).
       BUT, they also affect Inspector as well. So you will want to make sure you are clearing out ANY flags in this field for a game profile when forcing AA,
       as they WILL interfere and cause it not to work if you aren't careful.
       Antialiasing - Gamma Correction: Gamma correction for MSAA. This defaults to On in the driver for Fermi-generation GPUs and later. It was new about 10 years ago with Half-Life 2 and there is no reason to disable it on modern hardware.
       http://www.anandtech.com/show/2116/12
       Antialiasing - Line Gamma: From what I know this is only for OpenGL, and I'm not sure what it actually does. If you know, please post!
       Antialiasing - Mode: This has 3 settings
       Code:

           Application Controlled

           Override Application Setting

           Enhance Application Setting

       When overriding AA in a game you will want to set "Override" and not either of the other two. Enhancing AA will be entirely dependent on the implementation of MSAA in the game you are modifying a profile for.
       More often than not, especially in modern DX10+ games, this is a total crapshoot: either it doesn't work, breaks something, or looks very bad.

       However, there are a few exceptions here. In certain games/game engines, like Monolith Productions' mid-2000s LithTech-based games that use some sort of MSAA-based FSAA, the above 3 settings will generally not matter.

       A specific example: if you have the correct AA flag on the profile and leave it at Application Controlled, but have 8xQ MSAA and 8xSGSSAA enabled below it and enable F.E.A.R 2's FSAA, it will enhance the FSAA.
       In this specific case, this actually looks far better than 8xSGSSAA or FSAA by themselves!

       Another example is Red Faction Guerrilla; you can't force AA in this game. However, you can enhance the in-game MSAA with various methods of AA to some decent results. But it shines when you combine the in-game AA + enhanced AA and then downsample (using DSR) while also enabling FXAA in the game profile.
       (FXAA works when enhancing in-game AA. It used to when overriding as well, but has been broken in every version after 331.82. It is applied last in the chain so it doesn't cause conflicts with other AA, though it's not recommended to use it at native resolution over enhanced AA, if that makes sense. Oversampling from downsampling negates any smoothing issues.)

       This is a rather unique exception as most games don't yield this good of results.

       Here are a few comparisons showing it off.
       http://screenshotcomparison.com/comp....php?id=103126
       This first one shows no AA by default | Vs | The game running at 2x2 Native resolution with 2xMSAA enabled in game with "Enhance Application Setting" enabled and set to 4xS (1x2 OGSSAA + 2xMSAA) together with 2xSGSSAA. Finally with FXAA enabled on the profile.

       http://screenshotcomparison.com/comp....php?id=103127
       This second one is cropped from the native 3200x1800 buffer with 2xMSAA+4xS+2xSGSSAA |Vs| That with FXAA also enabled showing that there are still some rough edges that FXAA cleans up before it is downsampled back to native 1600x900


       The 3rd comparison shows 2x2 native resolution + 2xMSAA | Vs | 2x2 Native + 2xMSAA+4xS+2xSGSSAA+FXAA cropped and upsampled with point filtering by 2x2 to show how much more aliasing is tackled and resolved.
       http://screenshotcomparison.com/comparison/161297
       Antialiasing - Setting: This is where you would set any primary form of forced anti-aliasing, which can be MSAA, CSAA (G80 to Kepler GPUs ONLY), OGSSAA or HSAA (12xS for example).
       If you are going to use SGSSAA, you can use MSAA modes ONLY. The number of color samples has to match.
       This image from GuruKnight's thread explains this well.
       http://u.cubeupload.com/MrBonk/revisedaaoverviewtgr.png
       Antialiasing - Transparency Multisampling: http://http.download.nvidia.com/deve...ansparency.pdf
       The number of games this works with is unknown, but the results can be nice when it does work.
       http://screenshotcomparison.com/comparison/149642

       Nvidia has a demo of this you can download that also includes a demo of 4xTrSSAA
       https://www.nvidia.com/object/transparency_aa.html
       Antialiasing - Transparency Supersampling:
       The only options here are Transparency Super Sampling and Sparse Grid Super Sampling.
       In reality they are both SGSSAA, but they differ in their approach. TrSSAA is formally SGSSAA, while SGSSAA is actually FSSGSSAA (that's a mouthful), which stands for Full Scene Sparse Grid Super Sampling Anti-Aliasing.

       This works by replaying the pixel shading N times, where N is the number of color samples. With TrSSAA (SGSSAA), however, it is decoupled from the main MSAA pass in that it only applies SGSSAA to alpha-test surfaces, like flat textures, which come in all forms and varieties.

       SGSSAA (FSSGSSAA), on the other hand, is coupled with MSAA and needs the sample counts to match for it to work properly, as it uses the MSAA sub-samples for the entire scene.

       GuruKnight's image again explains some of the usage of this.
       http://u.cubeupload.com/MrBonk/revisedaaoverviewtgr.png

       Do note that again, usually these require AA compatibility Flags to work!
       Enable Maxwell sample interleaving (MFAA): This enables Nvidia's new Multi-Frame Anti-Aliasing mode. This only works in DXGI (DX10+) and requires either MSAA to be enabled in the game or MSAA to be forced (good luck with that; games where forcing works are few and far between).

       What it does is change the sub-sample grid pattern every frame; the result is then reconstructed in motion with a "Temporal Synthesis Filter", as Nvidia calls it.
       There are some caveats to using this though.
            It is not compatible with SGSSAA, as far as I have been able to test in a limited fashion with DX10+.
            With TrSSAA, in one case I tested, it could cause some blur on TrSSAA components.
            It causes visible flickering on geometric edges and other temporal artifacts depending on the game and its MSAA implementation. Part of this is nullified with downsampling though, so it's GREAT to use with downsampling to improve AA/performance.
            With screenshots and videos captured locally, there will be visible sawtooth patterns.
            It has a framerate requirement of about 40 FPS minimum. Otherwise the Temporal Synthesis Filter seems to fall apart in a strange way, depending on the game.
            It is not compatible with SLI (yet?).

            For example, with Grandia II Anniversary or Far Cry 3 Blood Dragon, when the game is in motion you'll notice severe blurring and smearing, making it unplayable. Strangely enough, though, if you record video while the framerate is under this threshold, it will not be visible in the recording at all. Bizarre.

           In Lost Planet DX10, at 40FPS and under, the flickering and temporal artifacts simply increase, making it look irritating.
       Nvidia Predefined FXAA usage: Simply tells the driver whether FXAA is allowed to be turned on in Nvidia Control Panel (primarily) or Nvidia Inspector.
       Toggle FXAA indicator on or off: If enabled, this will display a small green icon in the upper left corner showing whether FXAA is enabled or not.
       Toggle FXAA on or off: Turns FXAA on or off. You can also enable this when you are enhancing in-game AA, as shown above. You used to be able to do so when overriding as well, but it has been broken in every version after 331.82.
       You wouldn't want to use that ALL the time, but only in very specific use cases involving Oversampling. (Battlefield Bad Company 2 is one I can think of that would benefit if you still could do it)

    4 - Texture Filtering

       Anisotropic Filtering mode: Simply tells the driver whether the driver controls AF or the application does its own thing.
       I highly recommend you leave this set to "User Defined/Off", because lots of games do not have texture filtering options, and lots of games also have mediocre or intentionally mediocre texture filtering.

       Most of the time anyway, Driver Level AF is higher quality than in game AF.
       Modern recent examples: Just Cause 3, Assassin's Creed Syndicate.
       Only rarely will this incur a performance hit of any significance when overriding globally.
       AC: Syndicate is one example of this, but it's worth it IMO, and it is a rare duck. Most games do not incur the same kind of performance requirement.
       Anisotropic Filtering setting: If you have it set as above, this determines what level of texture filtering is forced on an application. 16x is the best, but it also has options for "Off[point]", which is point filtering (you wouldn't want this 9/10 times), and "Off[Linear]", which I'm pretty sure is bilinear filtering.
       Prevent Anisotropic Filtering: Similar to AA Behavior Flags, if this is set to On it will ignore driver overrides from NI or NVCP. You don't want this. Some games have this set to On by Nvidia and I'm not sure why; I've never seen an issue arise in these games.
       Texture Filtering - Anisotropic filter optimization: This and the setting below don't have much information available. The most I could glean was from patents by Nvidia, no less. Essentially they reduce the number of texture samples when using AF to improve performance (like Trilinear Optimization below). Leave these disabled; this might have been necessary 10-12 years ago, but not now.

       Update 07/16 - Taken from NvGames.dll
       Quote:
       "Anisotropic filter optimization improves performance by limiting trilinear filtering to the primary texture stage where the general appearance and color of objects is determined. Bilinear filtering is used for all other stages, such as those used for lighting, shadows, and other effects. This setting only affects DirectX programs."
       "• Select On for higher performance with a minimal loss in image quality • Select Off if you see shimmering on objects"
       "Anisotropic sample optimization limits the number of anisotropic samples used based on texel size. This setting only affects DirectX programs."
       "• Select On for higher performance with a minimal loss in image quality • Select Off if you see shimmering on objects"
       Texture Filtering - Anisotropic sample optimization - See above
       Texture Filtering - Driver Controlled LOD Bias: When using SGSSAA, enabling this will allow the driver to compute its own negative LOD bias for textures, to help improve sharpness for those who prefer it. It's generally less than the fixed amounts that are recommended.

       When this is enabled, setting a manual bias will not do anything, and AutoLOD will always be applied.
       Texture Filtering - LOD Bias (DX) - The Level of Detail Bias setting for textures in DirectX Backends. This normally only works under 2 circumstances.
       For both "Driver Controlled LoD Bias" must be set to "Off"
           When Overriding or Enhancing AA.
            The last choice is an interesting one. If you leave the "Antialiasing Mode" setting at "Application Controlled" but set the AA and Transparency settings to SGSSAA (e.g. 4xMSAA and 4xSGSSAA; TrSSAA in OpenGL), then you can freely set the LOD bias and the changes will work without forcing AA. This has the side effect that in some games, if the game has MSAA, it will act as if you were "Enhancing" the game setting.
           Comparison example http://screenshotcomparison.com/comparison/159382

       An explanation of the LoD Bias from http://naturalviolence.webs.com/lodbias.htm
       Quote:
       The so called Level of Detail BIAS (LOD BIAS) controls at which distance from the viewer the switch to lower resolution mip maps takes place. The standard value of the LOD BIAS is 0.0. If you lower the LOD BIAS below zero, the mip map levels are moved farther away, resulting in seemingly sharper textures. But if the scene is moving, the textures start to shimmer.

       Because of this, it's not a good idea to use a lower LOD BIAS to improve the sharpness of the image. It's better to use an Anisotropic Filter instead.
       If you wish to use a -LOD bias when forcing SGSSAA, these are the recommended amounts (a quick sketch reproducing them follows after the quotes below):
       Quote:
       2xSGSSAA (2 samples): -0.5
       4xSGSSAA (4 samples): -1.0
       8xSGSSAA (8 samples): -1.5
       Do not use a -LOD bias when using OGSSAA (YxY modes) and HSAA modes; these already have their own automatic LOD bias (which can cause issues in some games).
       http://naturalviolence.webs.com/sgssaa.htm
       Quote:
       The formula for determining the correct lod bias is "y = -0.5 * log, base 2, of (n)" where n is the number of samples and y is the correct lod bias. Of course since some of you may have forgotten how to use logarithms I went ahead and typed out the correct values for each SSAA mode above. This function simply means that every time you double the number of samples you subtract another 0.5 from the lod bias.

       Please note that despite common belief reducing the LOD bias will not eliminate any blurring issues that you may be having. Blurring issues with SGSSAA are caused by conflicting post processing shaders used by an application (game) and have nothing to do with the texture mapping process. Proper SSAA is supposed to IMPROVE texture quality, even without a negative LOD bias, the ability to use a negative LOD bias without any texture shimmering issues is just a nice bonus. Normally without any form of SSAA 0.0 becomes the most ideal LOD bias in most cases, a lower value will increase shimmering and a higher value will reduce the sharpness of distant textures. However SSAA helps fight the texture shimmering that a low LOD bias normally causes (you can use a lower LOD bias without experiencing texture shimmering, the values provided above should maintain the same level of shimmering that a LOD bias of 0.0 without SSAA would have). This allows you to potentially further improve the sharpness of textures, particularly distant textures. It is quite possible with SSAA + lower LOD bias to achieve much better texture quality in some games than would normally be possible without SSAA. Please also keep in mind that you should only do this with fullscene SSAA implementations like SGSSAA or OGSSAA. TRSSAA does not reduce texture shimmering and therefore gives you no reason to lower the LOD bias. Also keep in mind that this trick will only work with d3d9, d3d8, and openGL applications.

       Both ati's RGSSAA and nvidia's HSAA have auto lod bias adjustment built in so when using them do not tweak the lod bias. However you still need to make sure you allow negative lod. If you're not using any SSAA set negative lod to clamp.
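       Here is a quick sketch of the formula quoted above, reproducing the recommended -LOD bias values for 2x/4x/8x SGSSAA:
       Code:

       import math

       # y = -0.5 * log2(n), where n is the number of SGSSAA samples (formula quoted above).
       def recommended_lod_bias(samples):
           return -0.5 * math.log2(samples)

       for n in (2, 4, 8):
           print(f"{n}xSGSSAA: {recommended_lod_bias(n):.1f}")   # -> -0.5, -1.0, -1.5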
       Texture Filtering - LOD Bias (OGL) - The same as the above except for OpenGL. I don't remember if the trick mentioned above works for OGL as well to use a LOD bias without forcing AA. If you want to try, set the Transparency setting to "4xSupersampling" instead of SGSSAA.
       Texture Filtering - Negative LOD bias - This used to control whether negative LOD biases were Clamped (not allowed) or Allowed. With Fermi GPUs and on, this no longer really functions; by default it clamps. Driver Controlled LoD Bias works either way.
       Texture Filtering - Quality - Leave this at High Quality. This is an old optimization for older hardware to improve AF performance at the cost of some quality. If you have older hardware, like G80 (8000 series) and prior, feel free to play around to see if it helps at all.
       Texture Filtering - Trilinear Optimization - Same as above, an optimization of the number of texture samples taken for texture filtering. The HQ setting above disables this anyway. It might only apply when using trilinear filtering.
       Patent - http://www.freepatentsonline.com/7193627.html

    5 - Common

       Ambient Occlusion setting - This needs to be set to enable HBAO+, there are 3 settings.
           Performance
           Quality
           High Quality
       Q and HQ are pretty similar (though before HBAO+ was introduced there was a bigger difference). Performance lowers the resolution and precision of the effect noticeably in many games, with less accurate and stronger shading. However, in some games it actually fixes some buggy issues that occur with the Q and HQ settings, without other drawbacks (e.g. Oddworld New N Tasty, Urban Trials Freestyle). The HBAO+ thread and list usually mention when it's needed.
       Ambient Occlusion usage - When using HBAO+ just set this to On.
       Extension limit - I'm not sure what this is exactly (excuse my stupidity), but my Google-fu has turned up posts of people needing this feature for OpenGL games to work correctly, not crash, run at the right speed, etc. (Examples: Soldier of Fortune and one of the Riddick games.) So it might be worth leaving it at "On", or setting it for specific OGL games; there are a few other values listed as well. So if you have an OGL game, feel free to play around.

       Update 07/16- I found this information
       Quote:
       "Extension limit indicates whether the driver extension string has been trimmed for compatibility with particular applications. Some older applications cannot process long extension strings and will crash if extensions are unlimited."
       "• If you are using an older OpenGL application, turning this option on may prevent crashing • If you are using a newer OpenGL application, you should turn this option off"
       Multi-display/mixed-GPU acceleration -
       Quote:
       Those options control GPU-based acceleration in OpenGL applications and will not have any effect on performance on DirectX platforms. Mixed GPU acceleration permits the use of heterogeneous graphics boards driving multiple monitors independently.
       https://forums.geforce.com/default/t...n-how-to-use-/
       Power management mode: -
       Quote:
       Setting Power management mode from "Adaptive" to "Maximum Performance" can improve performance in certain applications when the GPU is throttling the clock speeds incorrectly. To change this setting, with your mouse, right-click over the Windows desktop and select "NVIDIA Control Panel" -> from the NVIDIA Control Panel, select the "Manage 3D settings" from the left column -> click on the Power management mode drop down box and select "Prefer Maximum Performance". Click over the "Apply" button at the bottom of the panel to apply the changes.
       https://nvidia.custhelp.com/app/answ...um-performance
       Shader cache: - This was added in driver 337.88
       Quote:
       http://www.geforce.com/whats-new/art...ch-dog-drivers
       Shaders are used in almost every game, adding numerous visual effects that can greatly improve image quality (you can see the dramatic impact of shader technology in Watch Dogs here). Generally, shaders are compiled during loading screens, or during gameplay in seamless world titles, such as Assassin's Creed IV, Batman: Arkham Origins, and the aforementioned Watch Dogs. During loads their compilation increases the time it takes to start playing, and during gameplay increases CPU usage, lowering frame rates. When the shader is no longer required, or the game is closed, it is discarded, forcing its recompilation the next time you play.

       In today's 337.88 WHQL drivers we've introduced a new NVIDIA Control Panel feature called "Shader Cache", which saves compiled shaders to a cache on your hard drive. Following the compilation and saving of the shader, the shader can simply be recalled from the hard disk the next time it is required, potentially reducing load times and CPU usage to optimize and improve your experience.

       By default the Shader Cache is enabled for all games, and saves up to 256MB of compiled shaders in %USERPROFILE%\AppData\Local\Temp\NVIDIA Corporation\NV_Cache. This location can be changed by moving your entire Temp folder using Windows Control Panel > System > System Properties > Advanced > Environmental Variables > Temp, or by using a Junction Point to relocate the NV_Cache folder. To change the use state of Shader Cache on a per-game basis simply locate the option in the NVIDIA Control Panel, as shown below.
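       As a small illustration of where the cache lives, here is a sketch that resolves the default NV_Cache path from the quote above and reports its current size. It assumes the Temp folder has not been moved, and it is purely for poking around; nothing in the driver requires it:
       Code:

       import os
       from pathlib import Path

       # Default shader cache location quoted above; assumes the Temp folder has not
       # been relocated via environment variables or a junction point.
       cache = Path(os.path.expandvars(r"%USERPROFILE%")) / "AppData" / "Local" / "Temp" / "NVIDIA Corporation" / "NV_Cache"

       if cache.exists():
           size_mb = sum(f.stat().st_size for f in cache.rglob("*") if f.is_file()) / (1024 * 1024)
           print(f"{cache}: {size_mb:.1f} MB of cached shaders (driver keeps up to 256 MB)")
       else:
           print(f"No shader cache found at {cache}")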
       Threaded optimization - We do not know what this actually does, but it works in DX and OGL and apparently can help or make things worse depending on the game. It defaults to Auto, so that might be the best way to leave it, aside from known problematic games.
       Known Games with Problems when enabled.
           Neverwinter Nights
           Battlefield Bad Company 2 in MP
           The Chronicles of Riddick: Assault on Dark Athena
           DayZ/Arma 2 (Might not be the case anymore. Verification would be nice)
       Known Games that it is helpful with when enabled.
           Source Engine games (Verification?)
           Sleeping Dogs
       If you know of any other problem games, do let me know!

    6 - SLI

       Antialiasing - SLI AA: From GuruKnight
       Quote:
       SLI AA essentially disables normal AFR rendering, and in 2-way mode will use the primary GPU for rendering+forced AA, while the secondary GPU is only used to do AA work.
       In SLI8x mode for example, each GPU would then do 4xMSAA after which the final result becomes 4xMSAA+4xMSAA=8xMSAA.
       This can be useful in games without proper SLI support, so at least the second GPU is not just idling.

       However it unfortunately only works correctly in OpenGL, and there will be no difference in temporal behavior between for example normal forced 4xMSAA+4xSGSSAA and SLI8x+8xSGSSAA in DX9.
       http://www.nvidia.com/object/slizone_sliAA_howto1.html
       Disable bridgeless SLI
       Number of GPUs to use on SLI rendering mode
       NVIDIA predefined number of GPUs to use on SLI rendering mode on DX10
       NVIDIA predefined number of GPUs to use on SLI rendering mode
       NVIDIA predefined SLI mode on DirectX10
       NVIDIA predefined SLI mode
       SLI indicator
       SLI Rendering mode
       Memory Allocation Policy - ElectronSpider did some testing with this, with some interesting results that are worth taking a look at for games that might be more VRAM-intensive. http://forums.guru3d.com/showpost.ph...5&postcount=39
    G-GMDH

    Posts : 786
    Join date : 2015-09-19

    Re: nvidia inspector manual

    Post by G-GMDH on Mon Jan 23, 2017 1:11 pm

    I lost the will to live after the first sentence!
    jaydor

    Posts : 554
    Join date : 2015-09-14
    Location : South Wales Valley's UK

    Re: nvidia inspector manual

    Post by jaydor on Mon Jan 23, 2017 4:19 pm

    G-GMDH wrote:I lost the will to live after the first sentence!


    LOL
    ddavid

    Posts : 378
    Join date : 2015-09-14

    Re: nvidia inspector manual

    Post by ddavid on Mon Jan 23, 2017 5:46 pm

    James, at the base hardware level, having a 64-bit wide mask is pretty straightforward, although it does contain a lot of on's and off's! Things were a lot easier when I worked with 4-bit and 8-bit systems but that was a long time ago!

    Cheers - Dai. 8-)

