Log to Metric Transform

The Vector log_to_metric transform accepts log events and outputs metric events, allowing you to convert logs into one or more metrics.

Configuration

vector.toml
[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "histogram"
field = "time"
name = "time_ms" # optional
tags.status = "{{status}}" # optional
tags.host = "{{host}}" # optional
tags.env = "${ENV}" # optional

Options

metrics
table, common, required

A table of key/value pairs defining the metric(s) to derive from each log event.

  field
  string, common, required

  The log field to use as the metric. See Null Fields for more info.

  No default.

  increment_by_value
  bool, common, required

  If true, the metric is incremented by the field's value. If false, the metric is incremented by 1 regardless of the field's value.

  No default. Only relevant when: type = "counter"

  name
  string, common, required

  The name of the metric. Defaults to <field>_total for counters and <field> for gauges.

  tags
  table, common, optional

  Key/value pairs representing metric tags.

    [tag-name]
    string, common, required

    The tag's value. Environment variables and field interpolation are allowed.

    No default.

  type
  string (enum), common, required

  The metric type.

  No default. Enum, must be one of: "counter", "gauge", "histogram", "set"
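To illustrate increment_by_value, a counter metric might sum a byte-count field rather than count events. This is a sketch; the bytes_out field and metric name are illustrative, not part of the example log used below:

```toml
[transforms.log_to_metric]
type = "log_to_metric"

  [[transforms.log_to_metric.metrics]]
  type = "counter"
  field = "bytes_out"        # illustrative field name
  increment_by_value = true  # add the field's value instead of incrementing by 1
  name = "bytes_out_total"   # optional; defaults to <field>_total for counters
```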

Output

  • Timings
  • Counting
  • Summing
  • Gauges
  • Sets

This example demonstrates capturing timings in your logs.

{
  "host": "10.22.11.222",
  "message": "Sent 200 in 54.2ms",
  "status": 200,
  "time": 54.2
}

You can convert the time field into a distribution metric:

[transforms.log_to_metric]
type = "log_to_metric"
[[transforms.log_to_metric.metrics]]
type = "histogram"
field = "time"
name = "time_ms" # optional
tags.status = "{{status}}" # optional
tags.host = "{{host}}" # optional

A metric event will be output with the following structure:

{
  "name": "time_ms",
  "kind": "incremental",
  "tags": {
    "status": "200",
    "host": "10.22.11.222"
  },
  "value": {
    "type": "distribution",
    "values": [54.2],
    "sample_rates": [1.0]
  }
}

This metric will then proceed down the pipeline and, depending on the sink, will be aggregated in Vector (as is the case for the prometheus sink) or in the downstream store itself.
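The other output types listed above (counting, summing, gauges, sets) follow the same pattern with a different type. As a sketch, with illustrative field names that are not part of the example log:

```toml
# A gauge tracking the latest value of a "memory_used" field.
[[transforms.log_to_metric.metrics]]
type = "gauge"
field = "memory_used"
name = "memory_used" # optional; defaults to <field> for gauges

# A set counting unique values of a "remote_addr" field.
[[transforms.log_to_metric.metrics]]
type = "set"
field = "remote_addr"
```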

How It Works

Environment Variables

Environment variables are supported throughout Vector's configuration. Simply add ${MY_ENV_VAR} in your Vector configuration file and the variable will be replaced before the configuration is evaluated.
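For example, an environment variable can supply a tag value, as in the configuration at the top of this page:

```toml
[[transforms.log_to_metric.metrics]]
type = "counter"
field = "status"
tags.env = "${ENV}" # resolved from the environment before evaluation
```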

You can learn more in the Environment Variables section.

Multiple Metrics

For clarification, when you convert a single log event into multiple metric events, the metric events are not emitted as a single array. They are emitted individually, and the downstream components treat them as individual events. Downstream components are not aware they were derived from a single log event.
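For example, a configuration with two [[metrics]] entries produces two independent metric events from each incoming log:

```toml
[transforms.log_to_metric]
type = "log_to_metric"

  # Each log event matching these fields emits both a counter
  # and a histogram metric, as two separate events.
  [[transforms.log_to_metric.metrics]]
  type = "counter"
  field = "status"

  [[transforms.log_to_metric.metrics]]
  type = "histogram"
  field = "time"
```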

Null Fields

If the target log field contains a null value it will be ignored, and no metric will be emitted.

Reducing

It's important to understand that this transform does not reduce multiple logs into a single metric. Instead, it converts logs into granular individual metrics that can then be reduced at the edge. Where the reduction happens depends on your metrics storage. For example, the prometheus sink will aggregate the metrics in the sink itself for the next scrape, while other metrics sinks will forward the individual metrics for reduction in the metrics storage itself.