OMS Log Analytics can collect large volumes of machine data from multiple sources. This data is a fantastic source of information, but at the same time, it is worth nothing if you cannot get the correct data out of it. At the core of Log Analytics is log search. In this chapter, you will learn how to query data with log search to find the information you need. You will see examples of how to filter, aggregate, transform and visualize the data. The query language looks a lot like PowerShell, so IT Pros will quickly become comfortable with the Log Analytics query syntax. The chapter starts with basic queries and then moves on to more advanced ones.
Note: For the examples in the next sections we will mostly use the Demo Log Analytics Workspace provided by Microsoft. You can sign up for it at https://experience.mms.microsoft.com/ .
Note: Almost all the examples in this chapter that are performed in the OMS portal can be performed in the Azure portal as well.
When data is collected from different sources it is stored in Log Analytics as records or documents. For example, every single Windows event from the Application log, when ingested into Log Analytics, is represented as a record. Every record belongs to a single Type. A Type is like a container for the same set of data. For example, Windows Event Log events are stored in the Event Type, syslog events are stored in the Syslog Type, and so on. The data for each record is segregated into Fields. A single record is represented as a set of field names and the values they hold. The fields themselves can also be of different types depending on the values they are holding. For example, a field containing a numeric value will have a type of Double. A field can be of type String, Boolean, Double, Date/Time or GUID. Follow the steps below to discover Types and Fields in your Log Analytics Workspace.
FIGURE 1. TYPES
FIGURE 2. TYPES AND FIELDS
You will notice that some of the fields are unique to a specific type, but there are others, like Computer and TimeGenerated, that are common to every type.
In the previous section, we executed our first query. Basically, we queried Log Analytics to return every type of data it holds. In this section, we will continue with some general query syntax to get you started. Follow the steps below to search across different types:
FIGURE 3. BACKUP SEARCH RESULTS
From the UI, you will notice the following things:
Every search query that you will make needs to follow this basic syntax:
filterExpression | command1 | command2 …
The filter expression, as its name implies, performs filtering on the data. The commands are used for presenting, aggregating or transforming data. As you can apply more than one command to the data, multiple commands must be separated by the pipe character (|). In our two previous examples, we only used the filterExpression. In the first example, we used a wildcard (*) to get all the data from all types, and in the second one we searched for a specific word across all fields in all types.
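Based on the examples described above, the two simplest filter expressions look like this (assuming, as in the demo workspace, that backup-related records exist):

*
Backup

The first matches every record of every type; the second matches any record that contains the word Backup in any field.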
Try a few more searches like these:
FIGURE 4. WINDOWS SERVER SEARCH RESULTS
FIGURE 5. BACKUP ERROR SEARCH RESULTS
Note: When you perform broad queries like the ones shown so far on a large amount of data, it will take some time until all the results are returned. OMS Log Analytics streams the returned records to show results as fast as possible.
Searching based on simple strings is useful for discovery or troubleshooting purposes, but to narrow the results you will need to perform some basic filtering in almost any query you use in Log Analytics. Execute the steps below to perform basic filtering:
FIGURE 6. TYPE:EVENT SEARCH RESULTS
Now you have searched for records of a specific type. By having Type:Event in our filter expression, we are saying that we want the events from that type. You will notice that the left pane also changes dynamically based on what you are searching for. In the left pane, we see fields, and values for them, that are associated with the results of the query. You can use either a colon (:) or an equals sign (=) after the type or field name and before the value. Type:Event and Type=Event are equivalent in meaning; you can choose according to your preference.
Note: All field names, and the values for string and text fields, are case sensitive. If you type type:Event or Type:event in the search field and execute it, you will get an error.
Follow the steps below to perform basic filtering to get results for specific type and field:
FIGURE 7. TYPE:EVENT EVENTLOG:APPLICATION SEARCH RESULTS
Comparison operators are an essential part of filtering. In fact, in previous examples we used one such operator, the equals (=) sign. The other comparison operators that you can use in search queries are: greater than (>), greater than or equal (>=), less than (<), less than or equal (<=) and not equal (!=).
Note: The first four operators in the list above can be used only with fields that contain numeric or date/time values.
Execute the steps below to perform a search query with comparison operators:
FIGURE 8. TYPE=EVENT EVENTLEVEL>=2 SEARCH RESULTS
From the returned results and the summary in the left pane, you can see that we are getting Windows events that have an EventLevel greater than or equal to 2.
Every record in Log Analytics is bound to a specific time. When a record is ingested into Log Analytics it must meet two requirements:
The value of the TimeGenerated field is usually the time when the event happened or when it was ingested into Log Analytics. The registered time is in UTC, but when you view it from the OMS portal it is displayed in your local time for a better user experience.
Because every record is bound to a specific time, queries themselves are also bound to it. When you are searching data, always keep in mind that you are getting data for a specific time frame. Quite often people think of Log Analytics as one big SQL instance, which is wrong, because Log Analytics has another dimension: time.
Follow the steps below to perform filtering with Date/Time Math operators:
FIGURE 9. TYPE:ALERT SEARCH RESULTS
FIGURE 10. TYPE:ALERT TIMEGENERATED>NOW-1HOUR SEARCH RESULTS
With the second query, we are getting all the records of Type Alert that have been registered in the last hour. Between Figure 9 and Figure 10 you will notice a few differences:
To build this query, we used the function NOW, which gives us the current time, and we subtracted one hour from it. The result is filtered on the TimeGenerated field. Instead of NOW we could have used a literal date/time value like 2017-02-23T12:00, but there are not many cases where you would use such an approach. Table 4 lists the operators that allow us to match on date/time fields:
Operator | Description |
/ | Rounds Date/Time to the specified unit. Example: NOW/DAY rounds the current Date/Time to the midnight of the current day. |
+ | Offsets Date/Time by the specified number of units. Example: NOW+1HOUR offsets the current Date/Time by one hour ahead. |
- | Offsets Date/Time by the specified number of units. Example: NOW-10DAYS offsets the current date/time back by 10 days. |
TABLE 4. DATE/TIME MATH OPERATORS
When using a precise date/time value, you must enter it in one of the following formats:
For higher precision when calculating date/time values, you can chain operators: NOW+1MONTHS-10DAY/MINUTE
Words like MONTH and DAY are units for date/time, and Table 5 lists all of the available ones:
Date/Time unit | Description |
YEAR, YEARS | Rounds to current year, or offsets by the specified number of years. |
MONTH, MONTHS | Rounds to current month, or offsets by the specified number of months. |
DAY, DAYS, DATE | Rounds to current day of the month, or offsets by the specified number of days. |
HOUR, HOURS | Rounds to current hour, or offsets by the specified number of hours. |
MINUTE, MINUTES | Rounds to current minute, or offsets by the specified number of minutes. |
SECOND, SECONDS | Rounds to current second, or offsets by the specified number of seconds. |
MILLISECOND, MILLISECONDS, MILLI, MILLIS | Rounds to current millisecond, or offsets by the specified number of milliseconds. |
TABLE 5. DATE/TIME UNITS
Note: Date/Time units are not case sensitive. You can also use singular or plural forms.
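Combining the operators from Table 4 with the units from Table 5, a typical time filter looks like this (a sketch against the demo workspace):

Type:Event TimeGenerated>NOW-7DAYS

This returns all Windows events registered in the last seven days.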
Logical operators are also supported by the query syntax. Table 6 lists them along with their C-style aliases:
Logical Operator | C-style alias |
AND | && |
OR | || |
NOT | ! |
TABLE 6. LOGICAL OPERATORS
Parentheses can be used to group expressions. When you use multiple filter expressions without explicitly providing logical operators, the AND logical operator is assumed between them.
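Because AND is implied, the following two queries are equivalent:

Type:Event EventLog:Application
Type:Event AND EventLog:Application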
Perform the steps below to use logical operators:
FIGURE 11. LOGICAL OPERATORS SEARCH QUERY
With the above query example, we want to get all Windows event records with a warning or error status that are not in the Operations Manager or DFS Replication log.
Note: The OR logical operator is very useful when you want to correlate results between different types. Example: (Type:Event OR Type:Alert) (EventLevelName:Error OR AlertSeverity=critical)
In this section, we will look at some of the more advanced filtering capabilities like ranges, wildcards, and regex.
Ranges are a filtering capability used on fields with numeric or date/time values. They are quite often used with the TimeGenerated field and with fields that provide IDs. Ranges have the following syntax:
field:[from..to]
Execute the steps below to execute search queries with ranges:
FIGURE 12. TYPE:EVENT EVENTID:[1000..2000] SEARCH RESULTS
FIGURE 13. TYPE:ALERT TIMEGENERATED:[NOW-4HOURS..NOW/HOUR-2HOUR+20MINUTES] SEARCH RESULTS
With the first query example, we are searching for all Windows events with IDs from 1000 to 2000. Ranges help abbreviate search queries by eliminating the need to define every single ID with a separate filter expression joined by the OR logical operator, while still achieving the same result. If you have multiple smaller ranges that have to be expressed in a query, you can group them with the OR logical operator. With the second query example, we want to capture all the alerts that occurred within a specific time frame in the past. Fortunately, the date/time math functions can be chained together in a way that allows us to be very granular in specifying time. Notice that we also round the end date to the hour. Instead of using the NOW function you can also use exact dates in the range.
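As described above, several smaller ranges can be grouped with the OR operator. A sketch (the specific IDs are illustrative):

Type:Event (EventID:[1000..1100] OR EventID:[2000..2100])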
Note: When using ranges, we cannot use the equals sign (=) instead of the colon (:).
There are a lot of patterns in machine data. With wildcards, we can use these patterns to achieve advanced filtering. Follow the steps below to use wildcards:
FIGURE 14. WILDCARDS SEARCH QUERY
In the example above we use the (*) character to represent one or more characters in a field value. In the query, we search for computer names that start with the word 'Acme'. Additionally, as we are not specifying a type, we correlate different logs by using the OR operator and searching for computers in the 172.16.10.X network. When we are using wildcards, the value for the field cannot be in quotes, because the (*) character would then be taken literally. Due to this rule, we cannot use spaces, slashes (/), backslashes (\), dots (.), etc. with wildcards. To replace spaces and dots we can use the question mark (?) character.
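A sketch of such a query, with the field names Computer and IPAddress assumed to exist in your workspace; note that the dots in the IP address are replaced with question marks, per the rule above:

Computer:Acme* OR IPAddress:172?16?10?*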
With the Contains keyword you can filter on a field that contains a specified string; the result returns the records containing that string in that field. Contains is case sensitive, works only with string fields, and the string may not include any escape characters. The Contains keyword has the following syntax:
field:contains("string")
Follow the steps below to perform Contains filtering:
FIGURE 15. CONTAINS SEARCH QUERY
Contains has a partial similarity to wildcards and regex, but it is somewhat limited because it can be used only on string fields.
Note: Contains, like the other filtering commands, does not work on non-indexed fields, such as RenderedDescription in Type:Event for example.
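A minimal sketch of a Contains filter, assuming the substring "SQL" appears in some computer names in your workspace:

Type:Event Computer:contains("SQL")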
Regex aims at the same goals as wildcards, but it is far more powerful for specifying patterns. The RegEx keyword allows you to specify a regular expression for this filter.
Follow the steps below to perform Regex filtering:
FIGURE 16. REGEX SEARCH QUERY
The regex query we used is analogous to the wildcard one in the previous section. If you can achieve the query you want with wildcards, use them instead of RegEx. Use regex filtering only when you need more complex filters that you cannot achieve with the other filtering capabilities, or when the equivalent query would be too long.
Note: You can find the full RegEx syntax at https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-log-searches-regex . Regex is a popular method for finding and defining patterns, so you can find many examples on the Internet. As field values are currently case sensitive in Log Analytics, case-insensitive regex operations will not work.
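A regex equivalent of the earlier wildcard filter might look like this; the exact form of the RegEx keyword is an assumption here, so check the linked reference for the syntax supported by your workspace:

Computer=RegEx("Acme.*")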
The IN keyword allows you to select from a list of values. That list can be static or dynamic, based on another query. This keyword is often used with the Computer Groups feature as well.
IN keyword with static values syntax:
field IN {value1,value2,value3,...}
IN keyword with dynamic values syntax:
(Outer Query) field IN {Inner query | measure count() by field}
IN keyword with Computer groups syntax:
Computer IN $ComputerGroups[(Computer group category name):(Computer group name)]
With both the dynamic values and the computer groups syntax for the IN keyword, we are feeding data through aggregation from the inner query to the outer query.
Note: The inner query is executed at the same time interval as the outer one which is available in the left pane.
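Putting the dynamic-values syntax together, a sketch that returns all events from computers that have raised at least one alert (the field names are assumptions):

Type:Event Computer IN {Type:Alert | measure count() by Computer}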
Execute the following steps in order to perform filtering with IN keyword:
FIGURE 17. STATIC VALUES IN KEYWORD SEARCH QUERY
FIGURE 18. DYNAMIC VALUES IN KEYWORD SEARCH QUERY
FIGURE 19. COMPUTER GROUPS IN KEYWORD SEARCH QUERY
Using dynamic lists with IN keyword either with the output from an inner query or with computer groups is very useful for correlating data between different types.
Note: In the second example, we are using the Distinct command in the inner query which is covered later in this chapter. For the third example, we are using Computer Groups which will be covered in a later chapter.
In this section, we will cover a few commands that perform basic formatting on the results.
The select command allows you to limit the fields that are returned from a search query. By default, if select is not used, all fields are returned, but in many cases you only need a few of them for better data analysis. Select has the following syntax:
select field1, field2, ...
Perform the steps below to execute a search query with the Select command:
FIGURE 20. SELECT COMMAND SEARCH QUERY
Besides the ability to return only specific fields in the results, you can also control the order of the fields with the select command.
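For example, a sketch that returns only three fields from Windows events, in the given order:

Type:Event | select Computer, EventID, EventLevelName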
By default, non-aggregated results are sorted in descending order by the TimeGenerated field. If the results are aggregated, they are sorted in descending order by the first aggregated value. The default sorting can be changed with the sort command, which has the following syntax:
sort field1 asc|desc, field2 asc|desc, …
You can sort on multiple fields. The asc/desc suffix is optional; if you do not specify it, the asc sort order is assumed.
Execute the steps below to perform a search query with the sort command:
FIGURE 21. SORT COMMAND SEARCH QUERY
From the above example, we are able to get the processes that have generated the most traffic. We have chained the sort and select commands to get a better view of the results.
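A sketch of such a chained query; the Type:WireData type and its TotalBytes and ProcessName fields are assumptions about your workspace:

Type:WireData | sort TotalBytes desc | select ProcessName, TotalBytes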
The Top and Limit commands both do the same thing: they limit the returned records to a specific number. They have the following syntax:
top number
limit number
Execute the steps below to perform a search query with the Top command:
FIGURE 22. TOP/LIMIT COMMAND SEARCH QUERY
In this example, we used the query from the previous section, but we limited the results to the first 3.
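Extending the previous sketch (same assumed WireData fields), only the three records with the most traffic are returned:

Type:WireData | sort TotalBytes desc | select ProcessName, TotalBytes | top 3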
Skip is another command that formats the output results. It allows you to skip a specific number of results. The syntax is the following:
skip number
Execute the steps below to perform a search query with the Skip command:
FIGURE 23. SKIP COMMAND SEARCH QUERY
We have extended the command from the previous section to include the skip command as well. The difference from the previous query is that we skip the first 1000 records before getting the top 3 processes with the most total bytes.
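The same sketch with skip added (assumed fields as before):

Type:WireData | sort TotalBytes desc | select ProcessName, TotalBytes | skip 1000 | top 3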
The distinct command provides a distinct list of values for a field. The same behavior, with different output, can be achieved with the measure count() command, which we will cover in a later section of this chapter. This command is mostly used with Computer Groups or in inner queries with the IN keyword. The syntax for the Distinct command is the following:
distinct field
Execute the steps below to perform a search query with Distinct command:
FIGURE 24. DISTINCT COMMAND SEARCH QUERY
The query from the example returns the names of all computers that are missing security updates.
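A sketch of such a query; the Type:Update fields used here (UpdateState, Classification) are assumptions based on the Update Management solution:

Type:Update UpdateState=Needed Classification="Security Updates" | Distinct Computer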
Aggregating data in Log Analytics is a way to crunch the data and get insights from it. When you aggregate data, you group it by a field and perform one or more statistical functions on the grouped data.
The measure command is used for data aggregation and is followed by one or more statistical functions.
One of the following two syntaxes can be used:
measure aggregateFunction1([aggregatedField]) [as fieldAlias1] [, aggregateFunction2([aggregatedField2]) [as fieldAlias2] [, ...]] by groupField1 [, groupField2 [, groupField3]] [interval]

measure aggregateFunction1([aggregatedField]) [as fieldAlias1] [, aggregateFunction2([aggregatedField2]) [as fieldAlias2] [, ...]] [interval]
Table 7 lists and describes the different parameters of the measure command syntax:
Parameter | Description |
aggregateFunction | The name of the aggregate function. The following aggregate functions are supported: COUNT, COUNTDISTINCT, SUM, MAX, MIN, AVG, STDDEV, PERCENTILE## and PCT##. The aggregation functions are case insensitive. |
aggregatedField | The field that is being aggregated. This field is optional for the COUNT aggregate function, but must be an existing numeric field for SUM, MAX, MIN, AVG, STDDEV, PERCENTILE## or PCT##. The aggregatedField can also be any field created with the Extend command, which is covered later in this chapter. Additionally, you can apply different functions to the aggregatedField before the aggregateFunction is applied. The supported functions are listed at https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-search-reference in the Supported Functions table. |
fieldAlias | The alias for the calculated aggregated value. This is an optional parameter. If it is not specified, the field name will be AggregatedValue. |
groupField | The name of the field that the result set is grouped by. If the BY clause is omitted but an interval is specified (the second syntax), the TimeGenerated field is assumed by default. |
Interval | The time interval in the format nnnNAME, where nnn is a positive integer and NAME is the interval name, for example 30MINUTES. The interval names are case sensitive and have to be written in all upper-case or all lower-case letters. |
TABLE 7. MEASURE COMMAND PARAMETERS
Execute the steps below to perform a search query with measure count command:
FIGURE 25. MEASURE COUNT COMMAND SEARCH QUERY
With the above query, we group the alerts by alert name and get the number of alerts in each group for the last day. Notice that the aggregated field alias is displayed in the UI as well.
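Based on the description above, the query is along these lines (the alias name is illustrative):

Type:Alert | measure count() as AlertCount by AlertName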
Execute the steps below to perform a search query with measure avg command:
FIGURE 26. MEASURE AVG COMMAND SEARCH QUERY
This query returns the average % Committed Bytes In Use for each computer for the last day. Notice that, because we did not specify an alias for the aggregated data, the default one (AggregatedValue) is used.
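Matching the description, the query is likely similar to the following sketch:

Type:Perf CounterName="% Committed Bytes In Use" | measure avg(CounterValue) by Computer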
Execute the steps below to perform a search query with measure countdistinct command:
FIGURE 27. MEASURE COUNTDISTINCT COMMAND SEARCH QUERY
With the measure countdistinct command, we are getting the number of different processes that have run on each computer in the last day.
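A sketch of such a query; the use of InstanceName as the process name field for Type:Perf process counters is an assumption:

Type:Perf ObjectName=Process | measure countdistinct(InstanceName) by Computer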
Execute the steps below to perform a search query with multiple measure commands:
FIGURE 28. MULTIPLE MEASURE COMMANDS SEARCH QUERY
This query example uses multiple aggregate functions in a single measure command. For each computer, we calculate the minimum, maximum, average, 95th percentile and 50th percentile values for the last day.
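A sketch using the processor time counter (the counter and instance names are assumptions) with several aggregate functions in one measure command:

Type:Perf ObjectName=Processor CounterName="% Processor Time" InstanceName=_Total | measure min(CounterValue), max(CounterValue), avg(CounterValue), pct95(CounterValue), pct50(CounterValue) by Computer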
Execute the steps below to perform a search query with aggregating data on multiple fields:
FIGURE 29. AGGREGATING DATA ON MULTIPLE FIELDS SEARCH QUERY
The above query is a good example of aggregating data on multiple fields. In this case, we are getting the 95th percentile Time Taken (request time) for each web site and computer. Notice that we are also using one of the additional supported functions, Div (divide), to convert the TimeTaken value from milliseconds to seconds.
Note: You can aggregate data up to 3 fields maximum.
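A sketch of the multi-field query described above; the W3CIISLog field names (TimeTaken, sSiteName) are assumptions:

Type:W3CIISLog | measure pct95(div(TimeTaken,1000)) by sSiteName, Computer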
Execute the steps below to perform a search query, aggregating data on an interval:
FIGURE 30. AGGREGATING DATA ON INTERVAL SEARCH QUERY
When the Interval parameter is used, Log Analytics automatically switches the data visualization from a simple chart to a Line chart. The interval parameter basically aggregates the data multiple times, once per specified interval. So instead of getting one record with the 95th percentile for each site and computer, we get multiple records spread over different times. If we choose table view in the UI, we can see those multiple records with different TimeGenerated values, as shown in Figure 31.
FIGURE 31. AGGREGATING DATA ON INTERVAL SEARCH QUERY IN TABLE FORMAT
Note: If you change the interval in the example query from 10MINUTES to 10SECONDS, you will notice that an error like "Intervals for aggregate functions must result in less than 2000 time slices Unexpected 'measure' at position 17." is generated. When the data is sliced into too many intervals (above 2000), Log Analytics will not be able to visualize it. In such cases, there are a few things you can do to resolve the error. You can find more information about this issue at https://cloudadministrator.wordpress.com/2016/12/07/less-than-2000-time-slices-error-in-oms-log-analytics-resolution/ .
In this section, we will go through some commands that provide additional data formatting, like adding new fields to the records or displaying the data in different visualizations.
With the Where command, you can filter aggregated results; because of that, it can be used only after aggregation with the measure command.
Execute the steps below to perform search queries with where command:
In the search field, type Type:Perf ObjectName=Processor CounterName="% Processor Time" InstanceName=_Total | measure PCT95(CounterValue) as PCT95 by Computer | where PCT95>80 and hit Enter. Results are visible in Figure 32.
FIGURE 32. WHERE COMMAND SEARCH QUERY RESULTS
In the search field, type Type:Perf ObjectName=Processor CounterName="% Processor Time" InstanceName=_Total | measure PCT95(CounterValue) by Computer | where (AggregatedValue<80 AND AggregatedValue>60) and hit Enter. Results are visible in Figure 33.
FIGURE 33. WHERE COMMAND WITH DEFAULT ALIAS SEARCH QUERY RESULTS
In the search field, type Type:Perf ObjectName=Processor CounterName="% Processor Time" InstanceName=_Total | measure PCT95(CounterValue) as PCT95 by Computer interval 30MINUTES | where PCT95>80 and hit Enter. Results are visible in Figure 34.
FIGURE 34. WHERE COMMAND WITH INTERVAL SEARCH QUERY RESULTS
In these 3 examples, we have used the where command in different ways. The first example shows how to use the where command when you have given an alias to the aggregated data. The second example shows how to apply multiple filters to the aggregated data by using the AND operator; in that example, we also rely on the default alias (AggregatedValue) for the aggregated data. The third example demonstrates using the where command when the data is aggregated in intervals. In Figure 34 you will notice that the lines in the chart are not continuous. The reason is that only the data points that pass the where filter are displayed.
Note: When you are working with metrics or performance data, always aggregate first and filter with where after the aggregation. In most cases, if you filter before the aggregation you will not achieve the desired effect. For example:
Correct query:
Type:Perf ObjectName=Processor CounterName="% Processor Time" InstanceName=_Total | measure PCT95(CounterValue) as PCT95 by Computer | where PCT95>80
Wrong query:
Type:Perf ObjectName=Processor CounterName="% Processor Time" InstanceName=_Total CounterValue>80 | measure PCT95(CounterValue) as PCT95 by Computer
The Dedup command returns the first record for every unique value of the given field. You can use Dedup on filtered data, but not on aggregated data.
Execute the steps below to perform a search query with the Dedup command:
FIGURE 35. DEDUP COMMAND SEARCH QUERY RESULTS
The query in the example gives you an instant view of the current % free disk space on each drive and computer. You can use the Sort command to control which records are returned first. For example, a query like Type:Perf ObjectName=LogicalDisk CounterName="% Free Space" | Dedup CounterPath | sort TimeGenerated ASC will return completely different results because the sorting is reversed.
Extend is a very powerful command that gives you the ability to create new fields at runtime. Additionally, you can use these runtime fields in measure commands as well.
Execute the steps below to perform search queries with the Extend command:
FIGURE 36. EXTEND COMMAND SEARCH QUERY RESULTS
FIGURE 37. EXTEND COMMAND WITH MEASURE COMMAND SEARCH QUERY RESULTS
FIGURE 38. EXTEND COMMAND WITH DEDUP COMMAND SEARCH QUERY RESULTS
With the first example query, we simply extend the existing records by adding to each one another field that holds the Time Taken value in seconds. The second example shows how we can aggregate the data in the extended field. You may notice that this example returns the same results we saw in one of the previous measure command examples. The third example demonstrates how we can combine the Dedup and Extend commands. That query gets the latest % free space value on the C drive for each computer and returns a new field, FreeSpaceState, that has a value of Normal if the % free space is above 25 and Low if it is below 25.
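Based on the first example's description, the query is along these lines (the W3CIISLog field name TimeTaken and the alias are assumptions):

Type:W3CIISLog | EXTEND div(TimeTaken,1000) as TimeTakenSec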
Note: With Extend you can use the functions listed at https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-search-reference under Supported Functions table to manipulate the data. More examples can be found at https://blogs.technet.microsoft.com/msoms/2016/09/13/arithmetic-operations-meet-aggregate-queries-in-oms/ .
With the Display command, you can visualize the data in different ways. The command has the following syntax:
display visualizationName
Currently, there are 3 types of visualizations that are supported: Table, LineChart and StackedBarChart.
We have already seen the first two visualizations in the examples from the previous sections. The Table visualization can be selected from the UI when you search data. The Line chart visualization appears by default when aggregating data using the interval parameter. Nevertheless, using the display command explicitly has its benefits.
Execute the steps below to perform search queries with display command:
Type:Perf ObjectName=LogicalDisk CounterName="% Free Space" InstanceName="C:" | Dedup CounterPath | EXTEND if(map(CounterValue,0,25,0,1),"Normal","Low") as FreeSpaceState | select Computer,InstanceName,CounterName,CounterValue,FreeSpaceState | Display Table
FIGURE 39. DISPLAY TABLE COMMAND SEARCH QUERY RESULTS
FIGURE 40. DISPLAY LINECHART COMMAND SEARCH QUERY RESULTS
FIGURE 41. DISPLAY STACKEDBARCHART COMMAND SEARCH QUERY RESULTS
The examples clearly show how you can use the display command. Which visualization you should use depends purely on the data and the purpose it serves.
Note: When using LineChart or StackedBarChart, we do not need to explicitly specify the interval parameter. In fact, the authors of the book recommend using the Display command instead of the Interval parameter, as Display automatically slices the data into intervals that fit the time window for which the query is run. Explicitly define the interval parameter only when you need a specific precision.
Instead of typing queries to get the data you need, you can use the Log Search UI in the OMS portal to construct them. Although the UI cannot construct everything the search syntax allows, it can be useful when you are exploring data or troubleshooting. There are also some unique features that can only be accessed via the UI, like Minify. In this section, we will walk through the possibilities of the Log Search UI in the OMS portal.
The UI has several functions that allow you to do referencing, filtering and grouping with a few simple clicks.
Execute the steps below to perform referencing action via the UI:
FIGURE 42. FIELD MENU – REFERENCES
FIGURE 43. REFERENCES QUERY RESULTS
Notice that because we are doing referencing rather than filtering, we also get results with the Information value for Event Level Name.
Execute the steps below to perform filtering action from field menu via the UI:
FIGURE 44. FIELD MENU – FILTER
FIGURE 45. FILTER RESULTS
Execute the steps below to perform filtering action from left pane via the UI:
FIGURE 46. FIELD MENU – ADD TO FILTERS
FIGURE 47. EVENTLOG FILTERS
FIGURE 48. FILTERING WITH LEFT PANE
FIGURE 49. FILTERING WITH LEFT PANE RESULTS
FIGURE 50. TIME FILTER
FIGURE 51. TIME FILTER – 6 HOURS
FIGURE 52. TIME FILTER – CUSTOM
FIGURE 53. TIME FILTER – CUSTOM TIME
FIGURE 54. QUERY FILTERED ON CUSTOM TIME FRAME
In this exercise, we performed several actions with the UI. First, we added a field to the filters in the left pane. This is useful when you want to explore what kind of values that field has, and how many of each, without actually executing the measure count() command. Second, we used the left pane to filter results for a field. Because we selected two field values, they were translated in the query with the AND operator. For our third action, we filtered on a pre-defined time frame. You can notice from the figures how the bars of the time frame graphic and the number of results changed when we filtered for the last 6 hours. For our last action, we filtered on a custom time frame. When using a custom time window, you can choose the start time and the end time. For example, you can filter to a particular hour and day in the past in order to see what kind of events were generated at that time. Even if you construct your queries by typing, you will still use the Time Window widget to narrow down results. Notice that the time window widget has a legend that states what one bar of the graphic is equal to. If you hover over a single bar, you will see the start time, the end time and the number of results (records) for that period. You can also use the time slider in the widget to narrow down the time window.
Execute the steps below to perform grouping action via the UI:
FIGURE 55. FIELD MENU – GROUP BY
FIGURE 56. MEASURE COUNT() COMMAND FROM UI
In this example, we used the UI to build a measure command. Unfortunately, the UI is limited to the count() statistical function only.
Note: All the queries executed in Log Search are saved in History. You can access your previously executed queries by clicking on History in the top menu. A Search History slider will appear on the right. When you click on a previous query, it will be executed in the time window in which it was originally executed. If you want to switch to the last day or another time window, you have to use the Time Window widget.
Log Analytics search has some default views depending on the search query executed, but the UI allows you to change and configure the visualization depending on your needs. In this section, we will walk through the different visualization options available.
Execute the steps below to explore the available visualizations:
FIGURE 57. LIST VISUALIZATION
FIGURE 58. TABLE VISUALIZATION
FIGURE 59. METRICS VISUALIZATION
FIGURE 60. UPDATES VISUALIZATION
FIGURE 61. APPLICATION INSIGHTS VISUALIZATION
FIGURE 62. APPLICATION INSIGHTS DRILL DOWN VISUALIZATION
FIGURE 63. APPLICATION INSIGHTS DRILL DOWN VISUALIZATION
Note: All the visualizations that have graphics are interactive, and you can click and hover over different objects. For example, in the Line chart you can drill down by selecting part of the visualization. When you perform that action, you will drill down to the selected time frame.
FIGURE 64. CHARTS SETTINGS
FIGURE 65. STACKED BAR CHART
In the search field, type (CounterName="Available MBytes") | measure avg(CounterValue) by Computer interval 30MINUTES and hit Enter
FIGURE 66. Y AXIS CHART SETTINGS
FIGURE 67. CONFIGURE Y AXIS CHART SETTINGS
FIGURE 68. CONFIGURED LINE CHART
FIGURE 69. CONFIGURED LINE CHART WITH MIN, MAX, AND LABEL
FIGURE 70. LINE CHART WITH MIN, MAX, AND LABEL
The same Y Axis settings can be applied to Stacked Bar chart as well.
Note: You can switch between Linear and Logarithmic scales for both charts. To learn more about when to use these settings navigate to https://cloudadministrator.wordpress.com/2016/07/13/linear-or-logarithmic-view-for-performance-data-in-oms-log-analytics/ .
The Minify feature groups Log Analytics search results to reduce event noise and give a summarized view of your search results. This feature is only available for a handful of types: Event, ServiceFabricOperationalEvent, W3CIISLog, Syslog, Alert, and AlertHistory. When you include one of these types in your log search query, the Minify option appears under the search field.
Execute the steps below to explore the Minify feature:
FIGURE 71. MINIFY FEATURE
FIGURE 72. MINIFY EXPANDED RESULTS
FIGURE 73. MINIFY RESULTS WITH SENSITIVITY 9
Data can be exported from Log Analytics to a CSV file from the Log Search UI. Note that the export is limited to 5000 records. Once you have exported the data, you can use other tools to filter and visualize it.
Execute the steps below to export data from Log Analytics:
FIGURE 74. EXPORT FEATURE
FIGURE 75. EXPORTED RESULTS
The same queries that you can use in the OMS portal can be used to get data out of Log Analytics via PowerShell. Queries can be executed with the Get-AzureRmOperationalInsightsSearchResults cmdlet, which is part of the AzureRM.OperationalInsights PowerShell module. The advantage of getting data from Log Analytics programmatically is that you can automate a lot of tedious and repetitive tasks.
Execute the code below to query Log Analytics via PowerShell:
# Provide credentials
$Creds = Get-Credential `
-Message 'Provide Azure Subscription Credentials...'
# Login to Azure
Login-AzureRmAccount `
-Credential $Creds `
-ErrorAction Stop | Out-Null
# Pick Subscription/TenantID
$Azure =
(Get-AzureRmSubscription `
-ErrorAction Stop |
Out-GridView `
-Title 'Select a Subscription/Tenant ID...' `
-PassThru)
# Select Subscription
Select-AzureRmSubscription `
-SubscriptionId $Azure.Id `
-TenantId $Azure.TenantId `
-ErrorAction Stop | Out-Null
$OMSWorkspaceResourceGroup = "OMS"
$OMSWorkspaceName = "Contoso"
$Query = "Type:Update"
$results = Get-AzureRmOperationalInsightsSearchResults `
-ResourceGroupName $OMSWorkspaceResourceGroup `
-WorkspaceName $OMSWorkspaceName `
-Top 200 `
-Query $Query `
-Start (Get-Date).AddHours(-12) `
-End (Get-Date)
Write-Output "Search ID is $($results.id)"
Write-Output 'First Result:'
$results.Value[0]
Write-Output 'All Results:'
$results.Value
The results from the script are visible in Figure 76.
FIGURE 76. SEARCH QUERY RESULTS VIA POWERSHELL
The same queries can be used with PowerShell without modification. In the Log Search UI the time window is automatically set to the last day; with the cmdlet, we have to provide it via the -Start and -End parameters. In PowerShell, we also have the option to specify the maximum number of returned records by using the -Top parameter. Of course, you can use the Top command inside the query instead of providing it as a parameter. As with the export feature, you can only get up to 5000 results from a single query by using the cmdlet.
You can overcome that limitation by using the Skip and Top commands inside your query and running the cmdlet multiple times with a different Skip value each time. For example, say you want to get 15000 events of Type:Event. Your first query would be 'Type:Event | Skip 0 | Top 5000', your second 'Type:Event | Skip 5000 | Top 5000', and your third 'Type:Event | Skip 10000 | Top 5000'. Keep in mind that when you run those queries, the -Start and -End parameters must have the same values for all of them in order to get consistent results. You could also make this dynamic, so that results are fetched in batches of 5000 until no results are returned.
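The batching approach described above could be sketched as follows. This is an untested sketch that reuses the workspace variables and Azure login from the earlier script; adjust the Type filter and time window to your needs:

```powershell
# Page through results in batches of 5000 until a batch comes back empty.
# Fix -Start and -End once so that every batch queries the same time window.
$Start = (Get-Date).AddHours(-12)
$End = Get-Date
$Skip = 0
$AllResults = @()
do {
    $Query = "Type:Event | Skip $Skip | Top 5000"
    $Batch = Get-AzureRmOperationalInsightsSearchResults `
        -ResourceGroupName $OMSWorkspaceResourceGroup `
        -WorkspaceName $OMSWorkspaceName `
        -Query $Query `
        -Start $Start `
        -End $End
    $AllResults += $Batch.Value
    $Skip += 5000
} while ($Batch.Value.Count -gt 0)
Write-Output "Retrieved $($AllResults.Count) records in total."
```

Because each batch runs as a separate search, a very busy workspace may still ingest new records between runs; pinning the time window in the past avoids most of that drift.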
Note: When you use Get-AzureRmOperationalInsightsSearchResults to get a large number of results, the query will be executed but the results will be streamed. In that case, partial results are returned and the status of the search query is Pending. You can wait for the search to complete by checking the status of the returned results. You can find a good example of how to achieve this on the cmdlet's documentation page at https://docs.microsoft.com/en-us/powershell/resourcemanager/azurerm.operationalinsights/v2.5.0/get-azurermoperationalinsightssearchresults .
Download the Code
You can download the full script from GitHub at https://github.com/insidemscloud/OMSBookV2, in the \Chapter 2\scripts directory. The file name is QueryLogAnalytics.ps1.
Understanding and mastering Log Analytics search syntax is essential for working with the service and the various solutions in it. With the search syntax, you can turn raw data into insights. Those insights will help you analyze, monitor and resolve complex issues happening in your environment.
In this chapter, we have shown you, from top to bottom, how to search and present data in Log Analytics. We have learned through real-world examples how to construct queries to achieve the desired results. We hope these examples will help you get started with the search syntax and begin exploring the data in your environment.
In the next two chapters, we will have a look at the two principal capabilities of Azure Automation service – Process Automation and Configuration Management.