Log to metric
Convert log events to metric events
Configuration
Example configurations
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": {
        "field": null,
        "type": "counter"
      }
    }
  }
}
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[transforms.my_transform_id.metrics]
type = "counter"
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      field: null
      type: counter
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": {
        "field": null,
        "increment_by_value": null,
        "kind": "incremental",
        "type": "counter"
      }
    }
  }
}
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[transforms.my_transform_id.metrics]
kind = "incremental"
type = "counter"
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      field: null
      increment_by_value: null
      kind: incremental
      type: counter
all_metrics
optional bool
Setting this flag changes the behavior of this transformation. Notably, the `metrics` field will be ignored. All incoming events will be processed and, if possible, they will be converted to metric events. Otherwise, only items specified in the `metrics` field will be processed.
use serde_json::json;

fn main() {
    // An example metric-shaped log event that `all_metrics` can convert
    // directly into a counter metric event.
    let json_event = json!({
        "counter": {
            "value": 10.0
        },
        "kind": "incremental",
        "name": "test.transform.counter",
        "tags": {
            "env": "test_env",
            "host": "localhost"
        }
    });
    println!("{json_event}");
}
This is an example JSON representation of a counter with the following properties:

counter
: An object with a single property, value, representing the counter value (in this case, 10.0).

kind
: A string indicating the kind of counter, in this case, "incremental".

name
: A string representing the name of the counter, here set to "test.transform.counter".

tags
: An object containing additional tags, such as "env" and "host".

Objects that can be processed include counter, histogram, gauge, set, and summary.
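For instance, to convert every suitably shaped incoming event instead of enumerating individual metrics, the flag can be enabled on the transform. A minimal sketch (component IDs are placeholders):

transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    all_metrics: true

With this set, a log event shaped like the counter above is converted directly into a metric event.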
graph
optional object
Extra graph configuration. Configure the output for this component when generated with the graph command.

graph.node_attributes
optional object
Node attributes to add to this component's node in the resulting graph. They are added to the node as provided.

graph.node_attributes.*
required string literal
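For example, a minimal sketch; the attribute names below (color, width) are illustrative rather than a fixed schema, since arbitrary string attributes are accepted:

transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    graph:
      node_attributes:
        color: red
        width: "5.0"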
inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.
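As a sketch, assuming hypothetical upstream components named app-logs and app-traces, a single wildcard entry matches both:

transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - app-*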
Outputs

<component_id>
Default output stream of the component. Use this component's ID as an input to downstream transforms and sinks.

Output Data

Metrics

counter
counter

distribution
distribution

Telemetry
Metrics

component_discarded_events_total
counter
The number of events dropped by this component. The intentional tag is true if the events were discarded intentionally, like a filter transform, or false if due to an error.

component_errors_total
counter
The total number of errors encountered by this component.

component_received_event_bytes_total
counter
The number of event bytes accepted by this component.

component_received_events_count
histogram
A histogram of the number of events passed in each internal batch in Vector's internal topology. Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector due to small internal batches.

component_received_events_total
counter
The number of events accepted by this component.

component_sent_event_bytes_total
counter
The total number of event bytes emitted by this component.

component_sent_events_total
counter
The total number of events emitted by this component.

utilization
gauge
A ratio from 0 to 1 of the load on a component.

Examples
Counter
Given this event...

{
  "log": {
    "host": "10.22.11.222",
    "message": "Sent 200 in 54.2ms",
    "status": 200
  }
}
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      - type: counter
        field: status
        name: response_total
        namespace: service
        tags:
          status: "{{status}}"
          host: "{{host}}"
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[[transforms.my_transform_id.metrics]]
type = "counter"
field = "status"
name = "response_total"
namespace = "service"
[transforms.my_transform_id.metrics.tags]
status = "{{status}}"
host = "{{host}}"
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": [
        {
          "type": "counter",
          "field": "status",
          "name": "response_total",
          "namespace": "service",
          "tags": {
            "status": "{{status}}",
            "host": "{{host}}"
          }
        }
      ]
    }
  }
}
[{"metric":{"counter":{"value":1},"kind":"incremental","name":"response_total","namespace":"service","tags":{"host":"10.22.11.222","status":"200"}}}]
Sum
Given this event...

{
  "log": {
    "host": "10.22.11.222",
    "message": "Order placed for $122.20",
    "total": 122.2
  }
}
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      - type: counter
        field: total
        name: order_total
        increment_by_value: true
        tags:
          host: "{{host}}"
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[[transforms.my_transform_id.metrics]]
type = "counter"
field = "total"
name = "order_total"
increment_by_value = true
[transforms.my_transform_id.metrics.tags]
host = "{{host}}"
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": [
        {
          "type": "counter",
          "field": "total",
          "name": "order_total",
          "increment_by_value": true,
          "tags": {
            "host": "{{host}}"
          }
        }
      ]
    }
  }
}
[{"metric":{"counter":{"value":122.2},"kind":"incremental","name":"order_total","tags":{"host":"10.22.11.222"}}}]
Gauges
Given this event...

{
  "log": {
    "15m_load_avg": 48.7,
    "1m_load_avg": 78.2,
    "5m_load_avg": 56.2,
    "host": "10.22.11.222",
    "message": "CPU activity sample"
  }
}
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      - type: gauge
        field: 1m_load_avg
        tags:
          host: "{{host}}"
      - type: gauge
        field: 5m_load_avg
        tags:
          host: "{{host}}"
      - type: gauge
        field: 15m_load_avg
        tags:
          host: "{{host}}"
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[[transforms.my_transform_id.metrics]]
type = "gauge"
field = "1m_load_avg"
[transforms.my_transform_id.metrics.tags]
host = "{{host}}"
[[transforms.my_transform_id.metrics]]
type = "gauge"
field = "5m_load_avg"
[transforms.my_transform_id.metrics.tags]
host = "{{host}}"
[[transforms.my_transform_id.metrics]]
type = "gauge"
field = "15m_load_avg"
[transforms.my_transform_id.metrics.tags]
host = "{{host}}"
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": [
        {
          "type": "gauge",
          "field": "1m_load_avg",
          "tags": {
            "host": "{{host}}"
          }
        },
        {
          "type": "gauge",
          "field": "5m_load_avg",
          "tags": {
            "host": "{{host}}"
          }
        },
        {
          "type": "gauge",
          "field": "15m_load_avg",
          "tags": {
            "host": "{{host}}"
          }
        }
      ]
    }
  }
}
[{"metric":{"gauge":{"value":78.2},"kind":"absolute","name":"1m_load_avg","tags":{"host":"10.22.11.222"}}},{"metric":{"gauge":{"value":56.2},"kind":"absolute","name":"5m_load_avg","tags":{"host":"10.22.11.222"}}},{"metric":{"gauge":{"value":48.7},"kind":"absolute","name":"15m_load_avg","tags":{"host":"10.22.11.222"}}}]
Histogram distribution
Given this event...

{
  "log": {
    "host": "10.22.11.222",
    "message": "Sent 200 in 54.2ms",
    "status": 200,
    "time": 54.2
  }
}
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      - type: histogram
        field: time
        name: time_ms
        tags:
          status: "{{status}}"
          host: "{{host}}"
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[[transforms.my_transform_id.metrics]]
type = "histogram"
field = "time"
name = "time_ms"
[transforms.my_transform_id.metrics.tags]
status = "{{status}}"
host = "{{host}}"
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": [
        {
          "type": "histogram",
          "field": "time",
          "name": "time_ms",
          "tags": {
            "status": "{{status}}",
            "host": "{{host}}"
          }
        }
      ]
    }
  }
}
[{"metric":{"distribution":{"samples":[{"rate":1,"value":54.2}],"statistic":"histogram"},"kind":"incremental","name":"time_ms","tags":{"host":"10.22.11.222","status":"200"}}}]
Summary distribution
Given this event...

{
  "log": {
    "host": "10.22.11.222",
    "message": "Sent 200 in 54.2ms",
    "status": 200,
    "time": 54.2
  }
}
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      - type: summary
        field: time
        name: time_ms
        tags:
          status: "{{status}}"
          host: "{{host}}"
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[[transforms.my_transform_id.metrics]]
type = "summary"
field = "time"
name = "time_ms"
[transforms.my_transform_id.metrics.tags]
status = "{{status}}"
host = "{{host}}"
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": [
        {
          "type": "summary",
          "field": "time",
          "name": "time_ms",
          "tags": {
            "status": "{{status}}",
            "host": "{{host}}"
          }
        }
      ]
    }
  }
}
[{"metric":{"distribution":{"samples":[{"rate":1,"value":54.2}],"statistic":"summary"},"kind":"incremental","name":"time_ms","tags":{"host":"10.22.11.222","status":"200"}}}]
Set
Given this event...

{
  "log": {
    "branch": "dev",
    "host": "10.22.11.222",
    "message": "Sent 200 in 54.2ms",
    "remote_addr": "233.221.232.22"
  }
}
transforms:
  my_transform_id:
    type: log_to_metric
    inputs:
      - my-source-or-transform-id
    metrics:
      - type: set
        field: remote_addr
        namespace: "{{branch}}"
        tags:
          host: "{{host}}"
[transforms.my_transform_id]
type = "log_to_metric"
inputs = [ "my-source-or-transform-id" ]
[[transforms.my_transform_id.metrics]]
type = "set"
field = "remote_addr"
namespace = "{{branch}}"
[transforms.my_transform_id.metrics.tags]
host = "{{host}}"
{
  "transforms": {
    "my_transform_id": {
      "type": "log_to_metric",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "metrics": [
        {
          "type": "set",
          "field": "remote_addr",
          "namespace": "{{branch}}",
          "tags": {
            "host": "{{host}}"
          }
        }
      ]
    }
  }
}
[{"metric":{"kind":"incremental","name":"remote_addr","namespace":"dev","set":{"values":["233.221.232.22"]},"tags":{"host":"10.22.11.222"}}}]
How it works
Multiple Metrics
When converting a single log event into multiple metric events, the metric events are not emitted as a single array. They are emitted individually, and downstream components treat them as individual events. Downstream components are not aware they were derived from a single log event (see the Gauges example above, where one log event yields three separate gauge events).

Null Fields
If the specified field contains a null value, it will be ignored, and a metric will not be emitted.
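A minimal sketch of the behavior (the field and values are hypothetical):

# With this metric definition...
metrics:
  - type: counter
    field: status
# ...a log event such as {"host": "10.22.11.222", "status": null}
# produces no counter event; the null field is skipped.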
Reducing
This transform does not reduce multiple log events into a single metric. Instead, each event is converted into granular, individual metrics that can then be reduced as needed. For example, the prometheus_exporter sink will reduce logs in the sink itself for the next scrape, while other metrics sinks will proceed to forward the individual metrics for reduction in the metrics storage itself.
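As a sketch of that flow, assuming a hypothetical source ID and the prometheus_exporter sink's default listen address:

transforms:
  response_metrics:
    type: log_to_metric
    inputs:
      - my-source-id
    metrics:
      - type: counter
        field: status
        name: response_total
sinks:
  prometheus:
    type: prometheus_exporter
    inputs:
      - response_metrics
    address: "0.0.0.0:9598"

Each scrape of the exporter then reports the aggregated counter value rather than one sample per log event.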