# Metrics Operators

The following sections list the supported metrics operators, giving a description, the syntax, and example queries for each.

`accum`

Converts each time-series in the row to a series of running totals. The running total in each series starts from the value of the first data point in the series, then iteratively adds up successive values.

For example, if RequestCount is { 2, 0, 4, 3, 0, 0 }, `RequestCount | accum` will be { 2, 2, 6, 9, 9, 9 }.

`accum`

`RequestCount | accum`
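The running-total semantics can be sketched in a few lines of Python (an illustration only, not the platform's implementation):

```python
from itertools import accumulate

def accum(series):
    """Running total: each output point is the sum of all input points so far."""
    return list(accumulate(series))

# Matches the RequestCount example above:
# accum([2, 0, 4, 3, 0, 0]) -> [2, 2, 6, 9, 9, 9]
```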

`avg`

Calculates the average of all the resulting time series. If grouping is specified, it calculates the average for each group.

`avg [by FIELD [, FIELD, ...]]`

```
dep=prod metric=cpu_system | avg
cluster=search metric=cpu_idle | avg by node
```
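As a sketch of how pointwise averaging with grouping behaves, assuming the input series are time-aligned (the function and its inputs are illustrative, not the platform's internals):

```python
from collections import defaultdict
from statistics import mean

def avg_by(series, group_of):
    """Pointwise average of time-aligned series, one output series per group.

    series:   dict mapping series name -> list of values (aligned timestamps)
    group_of: function mapping a series name to its group key (the `by` field)
    """
    groups = defaultdict(list)
    for name, values in series.items():
        groups[group_of(name)].append(values)
    return {group: [mean(column) for column in zip(*members)]
            for group, members in groups.items()}
```

With no `by` clause, every series falls into a single group and one averaged series comes back.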

`bottomk`

Selects the bottom-N time series, sorted by the value of a mathematical expression evaluated over the query time range.

`bottomk (number, aggregator)`

Supported aggregate functions:

`min`, `max`, `avg`, `count`, `sum`, `pct(n)`, `latest`

Take the bottom 5 time series with the lowest maximum value:

`dep=prod metric=cpu_system | bottomk (5, max)`
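A minimal Python sketch of the selection logic (illustrative only):

```python
def bottomk(series, k, aggregator):
    """Keep the k series whose aggregated value over the range is smallest.

    series:     dict mapping series name -> list of values
    aggregator: function reducing a list of values to one number (e.g. max)
    """
    return dict(sorted(series.items(), key=lambda item: aggregator(item[1]))[:k])
```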

`count`

Counts the total number of time series that match the query. If grouping is specified, it counts the total number for each group.

`count [by FIELD [, FIELD, ...]]`

```
dep=prod | count
cluster=search | count by node
```

`delta`

Computes the backward difference at each data point in the time series to determine how much the metric has changed from its last value in the series.

This operator also assigns the value of the `metric` tag to be `delta($metric)`.

`delta`

`metric=Net_InBytes Interface=eth0 | delta`
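The backward difference can be illustrated in Python (a sketch, not the engine's code):

```python
def delta(series):
    """Backward difference: how much each value changed from the previous one."""
    return [curr - prev for prev, curr in zip(series, series[1:])]

# delta([5, 7, 6, 10]) -> [2, -1, 4]
```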

`eval`

Evaluates a time series based on a user-specified math expression.

`metrics query | eval <math expression>`

where math expression is a valid math expression with `_value` as the placeholder for each data point in the time series.

Supported Basic operations:

+, -, *, /

Supported Math functions:

sin, cos, abs, log, round, ceil, floor, tan, exp, sqrt, min, max

`_sourceCategory=ApacheHttpServer metrics=request_per_sec | rate | eval max(_value, 0)`

`_sourceCategory=ApacheHttpServer metrics=cpu_idle | eval _value * 100`
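As an illustration, a Python stand-in for applying a math expression to every data point; the function name is invented, and the lambda plays the role `_value` does in the query:

```python
def eval_points(series, expression):
    """Apply a per-point function to a series, the way `eval` applies a
    math expression with `_value` bound to each data point."""
    return [expression(value) for value in series]

# eval max(_value, 0) clamps negatives; eval _value * 100 rescales.
```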

`filter`

Reduces the number of time series returned by a query by applying a boolean test to an aggregate of each series.

`filter <aggregator> <boolean operator> <numerical value>`

Supported aggregate functions:

`min`, `max`, `avg`, `count`, `sum`, `pct(n)`, `latest`

Note that when you use `max` as the aggregation function for `filter`, you must set an upper limit, as shown in the second example below.

Show only cpu metrics whose average over the queried time range is greater than 80%:

`cpu | filter avg > 80`

Show only cpu metrics where the min is greater than 20% and the max is less than 50%:

`cpu | filter min > 20 and max < 50`
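A hypothetical Python sketch of the filtering behavior, with the predicate standing in for the aggregate boolean test:

```python
from statistics import mean

def filter_series(series, test):
    """Keep only the series whose values pass the aggregate boolean test."""
    return {name: values for name, values in series.items() if test(values)}

# filter avg > 80  corresponds roughly to:
# filter_series(series, lambda values: mean(values) > 80)
```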

`max`

Calculates the maximum value of the time series that match the query. If grouping is specified, it calculates the maximum for each group.

`max [by FIELD [, FIELD, ...]]`

```
dep=prod metric=cpu_system | max
cluster=search metric=cpu_idle | max by node
```

`min`

Calculates the minimum value of the time series that match the query. If grouping is specified, it calculates the minimum for each group.

`min [by FIELD [, FIELD, ...]]`

```
dep=prod metric=cpu_system | min
cluster=search metric=cpu_idle | min by node
```

`parse`

Parses the given field to create new fields to use in the metrics query. If no field is specified while parsing Graphite metrics, the metric name is used.

Each wildcard in the pattern corresponds to a specified field. The parse operator supports both lazy (shortest match) and greedy (longest match) wildcard matches. Use '*' for a lazy match, or '**' for a greedy match.

`parse [field=FIELD] PATTERN as FIELD [, FIELD, ...]`

```
dep=prod | parse *-search-* as deployment, instance
cluster=frontend | parse field=user **-* as user_id, user_type
```
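The lazy-versus-greedy distinction can be approximated with regular expressions in Python (an approximation only; the actual matcher may differ):

```python
import re

def parse_fields(value, pattern, fields):
    """Turn a parse pattern into a regex: '**' becomes a greedy capture,
    '*' a lazy one, then return the captured fields by name."""
    regex = re.escape(pattern).replace(r"\*\*", "(.+)").replace(r"\*", "(.+?)")
    match = re.fullmatch(regex, value)
    return dict(zip(fields, match.groups())) if match else None
```

On a value like `a-b-c`, the greedy `**-*` pattern captures `a-b` for the first field, while lazy `*` stops at the first separator it can.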

`pct`

Calculates the specified percentile of the metrics that match the query. If grouping is specified, it calculates the specified percentile for each group.

`pct(DOUBLE) [by FIELD [, FIELD, ...]]`

```
dep=prod metric=cpu_system | pct(95)
cluster=search metric=cpu_idle | pct(99.9) by node
```
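Percentile aggregation can be sketched with the nearest-rank method (one common definition; the platform's exact interpolation may differ):

```python
import math

def pct(values, p):
    """Nearest-rank percentile: the smallest value such that at least p%
    of the data is less than or equal to it."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]
```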

`quantize`

Segregates time series data by time period. This allows you to create aggregated results in buckets of fixed intervals (for example, 5-minute intervals).

`quantize to INTERVAL [using ROLLUP]`

where ROLLUP is one of `avg`, `min`, `max`, `sum`, or `count`.

```
_sourceCategory=hostmetrics | quantize to 5m
logins | quantize to 5m using sum
```
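A Python sketch of bucketing with a rollup (illustrative only, over `(timestamp, value)` pairs with timestamps in seconds):

```python
def quantize(points, interval, rollup):
    """Bucket (timestamp, value) points into fixed intervals, then roll each
    bucket up with the given function (avg, min, max, sum, or count)."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts - ts % interval, []).append(value)
    return {start: rollup(values) for start, values in sorted(buckets.items())}

# quantize to 5m using sum ~ quantize(points, 300, sum)
```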

`rate`

Computes a rate based on the forward difference at each time in the time series. The difference between the current and the next recorded value in a time series is scaled to a value per second.

This operator also assigns the value of the `metric` tag to be `rate($metric)` and the value of the `unit` metadata field to be `$unit/second`.

`rate`

`metric=Net_InBytes Interface=eth0 | rate`
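The per-second scaling can be illustrated in Python over `(timestamp, value)` pairs (a sketch, not the engine's code):

```python
def rate(points):
    """Forward difference between consecutive (timestamp, value) points,
    scaled to a per-second value."""
    return [(v2 - v1) / (t2 - t1)
            for (t1, v1), (t2, v2) in zip(points, points[1:])]

# 50 bytes in 10 s, then 100 bytes in 10 s -> [5.0, 10.0] bytes/second
```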

`sum`

Calculates the sum of the metrics values that match the query. If grouping is specified, it calculates the sum for each group.

`sum [by FIELD [, FIELD, ...]]`

```
dep=prod metric=cpu_system | sum
cluster=search metric=cpu_idle | sum by node
```

`topk`

Selects the top-N time series, sorted by the value of a mathematical expression evaluated over the query time range.

`topk (number, aggregator)`

Supported aggregate functions:

`min`, `max`, `avg`, `count`, `sum`, `pct(n)`, `latest`

Take the top 10 time series with the highest maximum value:

`metric=cpu_system | topk (10, max)`

Reduce each time series to the value of (max / avg * 2). Sort by this reduced value and take the top 10 series:

`metric=cpu_system | topk (10, max / avg * 2)`
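The same selection logic as `bottomk`, in reverse order (an illustrative Python sketch):

```python
def topk(series, k, reducer):
    """Keep the k series whose reduced value over the range is largest.

    series:  dict mapping series name -> list of values
    reducer: function reducing a list of values to one number
    """
    return dict(sorted(series.items(),
                       key=lambda item: reducer(item[1]), reverse=True)[:k])

# topk (10, max / avg * 2) corresponds roughly to
# topk(series, 10, lambda v: max(v) / (sum(v) / len(v)) * 2)
```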

`timeshift`

Shifts the time series from your metrics query by the specified amount of time. This can help when comparing a time series across multiple time periods.

`timeshift TIME_INTERVAL`

```
cluster=search metric=cpu_idle | timeshift 5h
dep=prod metric=cpu_system | timeshift -1m
```
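The shift itself is simple arithmetic on timestamps, sketched here in Python over `(timestamp, value)` pairs with the offset in seconds:

```python
def timeshift(points, offset_seconds):
    """Shift every timestamp in a series by the offset; a negative offset
    shifts the series back in time."""
    return [(ts + offset_seconds, value) for ts, value in points]

# timeshift 5h ~ timeshift(points, 5 * 3600)
```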