
#SUMO LOGIC SUM TIMESLICE SERIES#

Sumo Logic lets you access your logs through a powerful query language. In addition to searching for individual log messages, you may extract, transform, filter, and aggregate data from them using a sequence of operators. There are currently about two dozen operators available, and we are constantly adding new ones. In this post I want to introduce you to a recent addition to the toolbox, the transpose operator.

Let's say you work for an online brokerage firm, and your trading server logs lines that look like the following, among other things:

There is a wealth of information in this log line, but to keep it simple, let's focus on the last number, in this case 449, which is the server response time in milliseconds. We are interested in finding out the distribution of this number so as to know how quickly individual trades are processed. One way to do that is to build a histogram of the response time using the following query:

stocktrade | extract "(?<response_time>\d+$)" | toInt(ceil(response_time/100) * 100) as response_time | count by response_time

Here we start with a search for "stocktrade" to get only the lines we are interested in, extract the response time using a regular expression, round it up to the next 100 milliseconds, and count the occurrences of each number.

Now, it would also be interesting to see how the distribution changes over time. That is easy with the timeslice operator:

stocktrade | timeslice 1m | extract "(?<response_time>\d+$)" | toInt(ceil(response_time/100) * 100) as response_time | count by _timeslice, response_time

This gets the data we want, but it is not presented in a format that is easy to digest. For example, in the resulting table, the first five rows give us the distribution of response time at 8:00, the next five rows at 8:01, and so on. Wouldn't it be nice if we could rearrange the data into a table with one row per time slice and one column per response-time bucket? That is exactly what the transpose operator does:

stocktrade | timeslice 1m | extract "(?<response_time>\d+$)" | toInt(ceil(response_time/100) * 100) as response_time | count by _timeslice, response_time | transpose row _timeslice column response_time

Here we tell the query engine to rearrange the table, using the time slice values as row labels and the response times as column labels.
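As a purely hypothetical illustration (the counts below are made up), the output of the count by _timeslice, response_time query has one row per combination of time slice and response-time bucket:

  _timeslice   response_time   _count
  08:00        100             12
  08:00        200             35
  08:00        300             7
  ...

After transpose, each time slice becomes a single row and the response-time buckets become the columns:

  _timeslice   100   200   300   ...
  08:00        12    35    7     ...
  08:01        15    31    9     ...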
This is especially useful when the data is visualized. The "stacking" option allows you to draw bar charts with values from different columns stacked onto each other: the length of each bar represents the number of trading requests per minute, and the colored segments represent the distribution of response time.

Follow the instructions below to create a Sumo Logic Threshold metric using the Metrics type:

In step 1 of the SLO wizard, select the Service the SLO will be associated with.

In step 2, select Sumo Logic as the Data Source for your SLO, then specify the Metric. Sample query for a Sumo Logic Threshold metric (Metrics type): metric=CPU_usage. Select a value and units for Quantization; in Sumo Logic, quantization is the process of aggregating metric data points for time series over an interval of time. For Rollup, select one of the following values: avg, sum, min, max, count, none. Rollup is an aggregation function Sumo Logic uses when quantizing metrics. For more details, refer to the Sumo Logic documentation; an example query and an SLO sketch are shown after these steps.

In step 3, define a Time Window for the SLO.

In step 4, specify the Error Budget Calculation Method and your Objective(s).

In step 5, add a Name, Description, and other details about your SLO.
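As a point of reference for the Quantization and Rollup settings in step 2, this is roughly what an equivalent stand-alone Sumo Logic metrics query could look like; in the SLO wizard they are separate fields rather than part of the query text, and the interval and rollup below are only example values:

metric=CPU_usage | quantize to 1m using avg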

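The same configuration can also be sketched as a Nobl9 SLO definition in YAML. The snippet below is only a rough orientation, not a ready-to-apply definition: the metadata, service, objective values, and the exact field names and accepted formats (for example sumoLogic, quantization, rollup) are assumptions here and should be checked against the current Nobl9 documentation.

apiVersion: n9/v1alpha
kind: SLO
metadata:
  name: cpu-usage-threshold          # assumed name (step 5)
  project: myproject                 # assumed project
spec:
  service: my-service                # the Service selected in step 1
  indicator:
    metricSource:
      name: sumo-logic               # assumed name of the Sumo Logic data source (step 2)
  budgetingMethod: Occurrences       # the Error Budget Calculation Method (step 4)
  timeWindows:                       # the Time Window (step 3)
    - unit: Day
      count: 28
      isRolling: true
  objectives:                        # the Objective(s) (step 4)
    - displayName: cpu-below-80
      op: lte
      value: 80
      target: 0.95
      rawMetric:
        query:
          sumoLogic:
            type: metrics
            query: metric=CPU_usage  # the sample query from step 2
            quantization: 1m         # assumed format
            rollup: avg              # one of: avg, sum, min, max, count, none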
The example Kubernetes deployment description for the Nobl9 agent (named nobl9-agent-myorg-myproject-sumologicagent) carries the following notes:

# DISCLAIMER: This deployment description contains only the fields necessary for the purpose of this demo.
# It is not a ready-to-apply k8s deployment description, and the client_id and client_secret are only exemplary values.
# The N9_METRICS_PORT is a variable specifying the port to which the /metrics and /health endpoints are exposed.
# The 9090 is the default value and can be changed.
# If you don't want the metrics to be exposed, comment out or delete the N9_METRICS_PORT variable.
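To show where these notes typically sit, here is a minimal, hypothetical sketch of such a Deployment. The image reference, the labels, and the N9_CLIENT_ID / N9_CLIENT_SECRET variable names are assumptions made for illustration; only the deployment name, the exemplary credentials, and N9_METRICS_PORT come from the notes above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nobl9-agent-myorg-myproject-sumologicagent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nobl9-agent                # assumed label
  template:
    metadata:
      labels:
        app: nobl9-agent              # assumed label
    spec:
      containers:
        - name: agent-container
          image: nobl9/agent:latest   # assumed image reference
          env:
            - name: N9_CLIENT_ID      # assumed variable name; the value is only exemplary
              value: "client-id"
            - name: N9_CLIENT_SECRET  # assumed variable name; the value is only exemplary
              value: "client-secret"
            # The port to which the /metrics and /health endpoints are exposed; 9090 is the default.
            # Comment out or delete this variable if you don't want the metrics to be exposed.
            - name: N9_METRICS_PORT
              value: "9090"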
