Managing Sitecore logs across multiple environments has always been a tedious job. The default appender drops log files into the Data folder on each instance, and it is left up to the operations team to collect and analyze the information. Tools such as the Sitecore Log Analyzer are great for identifying trends and patterns, filtering, and viewing logs; however, as far as I am aware, there is no easy way to look at trends across multiple machines in an environment, or across all of your infrastructure for a holistic logging view.

Microsoft Azure Operations Management Suite: Log Analytics

Azure offers the Log Analytics product, which allows users to easily collect, aggregate, and analyze logs from their Azure IaaS and PaaS resources, as well as logs from agents installed on in-house infrastructure. Azure has released the Custom Log data source functionality (in preview as of November 2016). With a Sitecore instance running on an existing Azure VM, it takes only minutes to collect and aggregate your Sitecore logs without writing any code or even logging into a machine. In addition to the Custom Log data source, there are also data sources for IIS logs, Syslog, performance counters, and Windows events.

Connecting an Azure VM to Log Analytics

An Azure VM can be connected to Log Analytics by logging into the Azure portal and creating an OMS workspace. Once your workspace has been created, navigate to it in the portal, select your VM from the workspace's virtual machine data sources, and click Connect.


It will take several minutes for Azure to install the agent on your VM. If you are not using Azure VMs, a remote agent installer is also available.
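
If you take the remote agent route, the Microsoft Monitoring Agent setup can also be run silently from a command prompt and pointed at your workspace by ID and key. This is a rough sketch from memory; verify the exact flags against the current agent documentation before relying on them:

    MMASetup-AMD64.exe /c /t:C:\MMA
    C:\MMA\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 ^
        OPINSIGHTS_WORKSPACE_ID=<your workspace ID> ^
        OPINSIGHTS_WORKSPACE_KEY=<your workspace key> ^
        AcceptEndUserLicenseAgreement=1

The workspace ID and key are available on the workspace Settings page.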

Adding a Custom Log

Once your VM has been added, navigate to the Settings page in the Azure OMS portal.

At the time this was written, the Custom Logs functionality was still in preview; you may need to enable it in the Settings tab.

Next, we need to add our custom log. For this step you will need example files handy for each of the Sitecore logs you want to collect. In this example I am using the crawling log; however, you can use any log you like, as the process is the same.

Once you upload your log file, Azure will attempt to parse it. There are currently two record delimiter options: Timestamp or New line. For default Sitecore logs we will use New line. A preview of how the records will be interpreted is shown based on the file you uploaded. Each new line will represent a single record in Log Analytics; in a later step we will define custom fields for each piece of data in a log entry, and those fields will then be searchable.
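
For context, a default Sitecore (log4net) entry sits on a single line, which is why the New line delimiter maps cleanly onto one record per entry. The lines below are illustrative samples rather than output from a real instance:

    ManagedPoolThread #5 09:15:02 INFO  Job started: Index_Update_IndexName=sitecore_web_index
    ManagedPoolThread #5 09:15:04 WARN  Crawler paused
    ManagedPoolThread #5 09:15:07 ERROR Exception while crawling item

Each of these would become its own record, with the whole line landing in the RawData field until custom fields are defined.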

Azure needs to know where to look for the logs on your VM, so in this step you will define a path to your log files. The paths can contain wildcard expressions, as shown in the image below. For the sake of simplicity, the log paths on all of your machines should be the same. Any logs matching the defined path will be picked up on every machine the OMS agent is running on.
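
As an example, with a default Sitecore install the dated crawling log files could be matched with a wildcard path along these lines (the site path here is an assumption; adjust it to your own install):

    C:\inetpub\wwwroot\<sitename>\Data\logs\Crawling.log.*.txt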

The final step in adding a log is giving it a name and description. I would recommend sticking with the same name that the log file uses by default.

According to the Azure documentation, it will take approximately an hour before your log data appears in the log search utility. The agent will pull in new logs in addition to any existing logs in the directory that match the pattern defined in the previous step.

Searching For Your Log Data

To locate your log data in search, use a simple wildcard query. You should see a Type facet with your custom log appearing as an available selection. Once you click on it, all of the log entries that Azure has imported should appear.
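
In the legacy OMS search syntax, a bare wildcard returns everything, and a Type filter narrows the results to your custom log. Note that Log Analytics appends a _CL suffix to custom log names, so a log named Crawling surfaces as the type Crawling_CL (the Crawling name is simply this walkthrough's choice):

    *
    Type=Crawling_CL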

Default Fields and Custom Fields

By default, your log entries should have the following fields:

  • TimeGenerated: The timestamp of the log entry.
  • RawData: The log entry generated by Sitecore. Without custom field mappings, all of your data is lumped into this field.
  • Computer: The VM that the log entry originated from.
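
A quick way to see just these fields is to project them in a search; a sketch in the legacy search syntax, again assuming the custom log name from this walkthrough:

    Type=Crawling_CL | Select TimeGenerated, Computer, RawData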

As it stands, the unformatted data is already quite useful: you can now aggregate the logs from all of your CD/CM/Processing servers into a single searchable place, sortable by date. However, it would be useful to be able to filter ERROR entries from INFO entries, or to search on an error message alone. This can be accomplished by defining custom fields.
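
For example, even before custom fields are defined, the default fields support sorting entries by date or counting entries per server across all of your machines; a sketch in the legacy search syntax:

    Type=Crawling_CL | Sort TimeGenerated desc
    Type=Crawling_CL | Measure count() by Computer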

Custom fields allow you to parse the data in the RawData field into unique fields, each with its own datatype. This is done with the field extraction wizard.

Once the extraction wizard is open, simply highlight the text in the RawData field that you want to break out into a separate field. In the screenshot below, I am using the Level value. Next, name the field, and finally define the field type.

Once you hit the Extract button, Log Analytics will search the logs that are currently loaded and show you what it believes is the correct value to parse from all of the entries in logs of that type. You can see that it is smart enough to pick up INFOs and WARNs without any additional configuration or regex magic.

Now repeat the process for the remaining values you would like parsed into their own fields.
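
Custom fields are given a _CF suffix. Assuming you extracted the log level into a field named Level, a search for only the errors in the crawling log might look like this:

    Type=Crawling_CL Level_CF=ERROR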

Run a search again for your log data, and you should now see each field you defined broken out. These fields are now queryable. Additionally, you can combine multiple logs in a single view to get a more holistic picture of what is occurring in your implementation. For instance, you can look at your crawling log and default log in a combined view, or look at your default log and the IIS W3C logs side by side to identify issues.
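
A combined view can be expressed by OR-ing types together. For example, assuming Crawling and Log were the names given to the two custom logs (W3CIISLog is the built-in IIS log type):

    (Type=Crawling_CL OR Type=Log_CL) | Sort TimeGenerated desc
    (Type=Log_CL OR Type=W3CIISLog) | Sort TimeGenerated desc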

List View