In a previous post I talked about how diagnostics are turned ON by default for Azure roles, and that you should turn them OFF if you don’t want to incur a ton of Azure storage transaction charges. I finally spent some time diving into the various configuration settings and now understand how to leave diagnostics ON and adjust their configuration throughout the lifetime of the running Azure instances. This way, you don’t persist any log messages during “normal” operation, but can ratchet up the settings to debug an issue and then dial them back down when you’re done.
An alternative to using the diagnostics APIs in your role’s OnStart method (and thereby hardcoding the settings) is to use a configuration file that the Azure diagnostics runtime polls at a given interval while your instance is running. If you change the config, the new settings take effect the next time the runtime checks the file. There is plenty of information available about how to author the diagnostics.wadcfg file, where to put it so that it gets deployed correctly, and the order in which config settings are discovered. What I could not find, however, was information about where the config file gets deployed and how to change it at runtime.
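For contrast, the hardcoded OnStart approach looks roughly like this. This is a sketch using the classic Azure SDK 1.x Diagnostics API (DiagnosticMonitor and friends); the transfer period and filter values are illustrative, and the connection string name is the conventional one from the Diagnostics plugin:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration, then turn on
        // scheduled transfer of the basic (trace) logs.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        // These values are now baked into the deployment; changing them
        // requires a code change and redeploy -- the problem this post avoids.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```

The drawback is exactly what the config-file approach fixes: the settings are frozen into the code at deploy time.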
First off, contrary to what I told you in the previous post, you need to start with diagnostics turned ON. This deploys a default diagnostics config file to a container in your Azure storage account called wad-control-container/(deploymentId)/instanceRoleFile (the example shown below is for a local compute instance – when deployed to Azure, there will be GUID-based folders). By default, each of the diagnostic sections in the config file has a transfer period of 0, which (I think) means “don’t persist these diagnostics (logs, crash dumps, event logs, etc.) in my storage account”. If you follow the links above to create a diagnostics.wadcfg file, any sections in that file override the defaults when your role is deployed. Additionally, if you have any code in your OnStart method that changes diagnostics settings, those changes will also be reflected in the runtime config file. Basically, the resulting config file deployed to storage is the merged result of the file-based and code-based settings in your role.
With a source diagnostics.wadcfg file as shown here (contains only a single Logs element)
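The original screenshot isn’t reproduced here, but a minimal diagnostics.wadcfg along those lines might look like the following sketch. The element and attribute names follow the WAD 1.x configuration schema as I understand it; the quota values and PT1M transfer period (an ISO 8601 duration meaning one minute) are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    overallQuotaInMB="4096">
  <!-- Only the Logs section is specified; everything else keeps its defaults. -->
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Verbose" />
</DiagnosticMonitorConfiguration>
```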
the resulting configuration file deployed to my storage account looks like this (shown here using Azure Storage Explorer). Notice my Logs section got merged with the other default sections (with transfer periods of 0):
In my sample app, I have a link that causes my controller to write a couple of Trace.TraceXXX messages (one warning and one informational). With the setup above, I click the link, causing a few trace messages, and after a minute or so I check the WADLogsTable in table storage and see my trace messages (the filter is set to Verbose, so I see both Warning and Informational messages).
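The controller action behind that link is not shown in the post; a hypothetical version might look like this (the action and view names are made up for illustration):

```csharp
using System.Diagnostics;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Hypothetical action wired to the link mentioned above.
    public ActionResult WriteTraces()
    {
        Trace.TraceWarning("Sample warning message");           // survives a Warning filter
        Trace.TraceInformation("Sample informational message"); // dropped when filter = Warning
        return RedirectToAction("Index");
    }
}
```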
Now I can upload a new version of the configuration file (keeping the name the same), this time it has the Logs section filter set to Warning.
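The updated Logs section in the uploaded file would look something like this sketch (same schema as before; only the log level filter changes from Verbose to Warning):

```xml
<Logs bufferQuotaInMB="1024"
      scheduledTransferPeriod="PT1M"
      scheduledTransferLogLevelFilter="Warning" />
```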
Cause a few more of the same trace messages to get created, wait a minute, and check the WADLogsTable. This time only the warnings show up (the informational messages have been filtered out based on the uploaded config file settings).
To summarize, this is probably the best way to configure diagnostics, since it keeps the settings outside of your application code. You can upload a new configuration file at any time and the runtime will adjust to the new settings. (The runtime polls for configuration changes at an interval set in the configuration file itself.)
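If I read the schema correctly, that poll interval is controlled by the configurationChangePollInterval attribute on the root element of the config file, again as an ISO 8601 duration; a sketch:

```xml
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <!-- diagnostic sections elided -->
</DiagnosticMonitorConfiguration>
```

With PT1M, an uploaded config change should be picked up within about a minute, which matches the “wait a minute, then check WADLogsTable” behavior described above.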