The timeslice operator segregates data by time period, so you can create bucketed results based on a fixed interval (for example, five-minute buckets). Timeslice can also create a fixed number of buckets, for example, 150 buckets over the last 60 minutes.
There are two primary use cases for this operator:
- Group data into time-sliced buckets for aggregation analysis
- Group data into time-sliced buckets for time-series visual analysis
Let's say you log each time a user successfully logs into your service, and you want to track the number of logins per hour over the course of a day. You can use the timeslice operator to group the data into one-hour buckets and view it over a 24-hour period. For example, to count successful logins per hour:
| parse "login_status=*" as login_status
| where login_status="success"
| timeslice 1h
| count by _timeslice
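Conceptually, a fixed-interval timeslice floors each event's timestamp to the start of its bucket, and the aggregation then counts events per bucket. A minimal Python sketch of that bucketing idea (an illustration only, not Sumo Logic's implementation):

```python
from datetime import datetime, timezone

def timeslice(ts: datetime, bucket_seconds: int) -> datetime:
    """Floor a timestamp to the start of its epoch-aligned, fixed-size bucket."""
    epoch = ts.timestamp()
    return datetime.fromtimestamp(epoch - (epoch % bucket_seconds), tz=timezone.utc)

# A login at 10:47:13 falls into the 1-hour bucket starting at 10:00:00.
login = datetime(2024, 5, 1, 10, 47, 13, tzinfo=timezone.utc)
print(timeslice(login, 3600))  # 2024-05-01 10:00:00+00:00
```

Every event whose timestamp floors to the same bucket start contributes to that bucket's count, which is what `count by _timeslice` tallies.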
... | timeslice by <time_period> | aggregating_operator _timeslice
... | timeslice # buckets as <alias> | aggregating_operator <alias>
Supported values for <time_period> are hours (h), minutes (m), and seconds (s).
- An alias for the timeslice field is optional. If an alias is not provided, a default _timeslice field is created.
- The timeslice operator is commonly used in conjunction with the transpose operator. After you’ve timesliced the data into buckets, the transpose operator allows you to plot aggregated data in a time series.
- The timeslice operator must be used with an aggregating operator such as count by or group by.
- The number of buckets in your query is a target or maximum, not necessarily the exact number of buckets that will be returned. For example, if your query specifies 150 buckets, Sumo Logic will find a reasonable clock-aligned resolution to return approximately 150 buckets in the query results.
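One way such clock alignment could work (a hypothetical sketch; the source does not specify Sumo Logic's actual algorithm): divide the search range by the requested bucket count, then round up to the nearest "nice" clock interval, so you get at most the requested number of buckets.

```python
# Hypothetical candidate intervals, in seconds (1s up to 1d).
NICE_SECONDS = [1, 5, 10, 30, 60, 300, 600, 900, 1800, 3600, 7200, 14400, 86400]

def pick_interval(range_seconds: int, target_buckets: int) -> int:
    """Choose the smallest nice interval that yields at most target_buckets buckets."""
    ideal = range_seconds / target_buckets
    for s in NICE_SECONDS:
        if s >= ideal:
            return s
    return NICE_SECONDS[-1]

# 150 buckets over 60 minutes: ideal width is 24 s, rounded up to a 30 s interval,
# which produces 120 clock-aligned buckets.
print(pick_interval(60 * 60, 150))  # 30
```

The point is only that the requested count is a target: the returned interval is a round clock value, so the actual bucket count is approximate.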
There is a known issue with the timeslice operator and Daylight Saving Time (DST). Any timeslice operation that crosses a DST boundary is affected, so results may show more than one entry for the affected day.
For example, in Australia, DST goes into effect on October 2nd in spring. For that day, with a 1d timeslice, you would see two entries for the same day: one for 12 a.m. and another for 11 p.m.
In another example, with a 4h timeslice you would usually see results at 12 a.m., 4 a.m., 8 a.m., 12 p.m., and so on. But when the DST transition occurs, the result after 12 a.m. could appear at either 3 a.m. or 5 a.m., depending on whether the clocks fall back or spring forward.
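The underlying cause is that a bucket is a fixed span of elapsed time, while the local wall-clock day shrinks or stretches across a DST transition. A quick Python illustration using the 2022 Sydney spring-forward date (the 24-hour arithmetic here is a simplification of bucket boundaries, not Sumo Logic's internals):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # needs the system tz database (or the tzdata package)

sydney = ZoneInfo("Australia/Sydney")
# Local midnight on 2 October 2022, the night clocks jump from 2 a.m. to 3 a.m.
day_start = datetime(2022, 10, 2, 0, 0, tzinfo=sydney)

# A "1d" bucket is 24 hours of elapsed time, so do the arithmetic in UTC.
next_bucket = (day_start.astimezone(timezone.utc) + timedelta(hours=24)).astimezone(sydney)
print(next_bucket)  # 2022-10-03 01:00:00+11:00 -- not local midnight
```

Because that day is only 23 hours long on the wall clock, elapsed-time bucket boundaries drift relative to local midnight, which is why extra or oddly timed entries appear around the transition.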
Timeslice Operator Examples
- timeslice by 5m — Fixed-size buckets of 5 minutes each. The output field defaults to _timeslice.
- timeslice by 2h as 2hrs — Fixed-size buckets that are 2 hours long. The output field name is aliased to 2hrs.
- timeslice 2h as 2hrs — Shorthand for the previous example; the by keyword is optional.
- timeslice 150 buckets — Buckets the search results into approximately 150 buckets.
- timeslice by 1m as my_time_bucket_field_name — Fixed-size buckets of 1 minute each. The output field name is aliased to my_time_bucket_field_name.
Examples in queries
* | timeslice by 5m | count by _timeslice
This outputs a table in the Aggregates tab with columns _count and _timeslice, with the timeslices spaced at 5-minute intervals.
* | timeslice by 5m as my_field_name_alias | count by _sourceCategory, my_field_name_alias
This outputs three columns: _count, _sourceCategory, and my_field_name_alias.
* | timeslice 10 buckets | count by _sourceCategory, _timeslice
This outputs a table in the Aggregates tab with columns _count, _sourceCategory, and _timeslice, with up to 10 rows for each _sourceCategory if you have messages covering the entire search period.
Example 1: Checking the server distribution over time to make sure the load balancer is working properly.
| timeslice 1h
| parse "server_ip=* "
| count by _timeslice, server_ip
| transpose row _timeslice column server_ip
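The effect of transpose can be pictured as a pivot: rows keyed by (_timeslice, server_ip) become one row per timeslice with one column per server, which is the shape a time-series chart needs. A small Python sketch with made-up counts:

```python
from collections import defaultdict

# Hypothetical rows as produced by `count by _timeslice, server_ip`.
rows = [
    ("10:00", "10.0.0.1", 42),
    ("10:00", "10.0.0.2", 38),
    ("11:00", "10.0.0.1", 40),
    ("11:00", "10.0.0.2", 45),
]

# Pivot: one row per timeslice, one column per server_ip,
# mirroring `transpose row _timeslice column server_ip`.
pivot = defaultdict(dict)
for ts, ip, count in rows:
    pivot[ts][ip] = count

for ts in sorted(pivot):
    print(ts, pivot[ts])
# 10:00 {'10.0.0.1': 42, '10.0.0.2': 38}
# 11:00 {'10.0.0.1': 40, '11.0.0.2' is not present; '10.0.0.2': 45}
```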
This query produces these results in the Aggregates tab, which you can display as a column chart.
Example 2: All computer access to Sumo Logic over time.
| parse using public/iis
| timeslice 1m
| count by _timeslice, s_ip
| transpose row _timeslice column s_ip
This query produces these results in the Aggregates tab, which you can display as a stacked column chart:
Example 3: Monitoring non-normal status codes (400s and 500s) on IIS servers.
| parse "status_code=* " as status_code
| where status_code >= 400
| timeslice 5m
| count as count by _timeslice, _sourceHost
| transpose row _timeslice column _sourceHost
This query produces these results in the Aggregates tab, which you can display as an area chart: