InfluxDB metrics

Deliver metric event data to InfluxDB

status: stable | delivery: at-least-once | egress: batch | state: stateless

Configuration

Example configurations

{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": ["my-source-or-transform-id"],
      "bucket": "vector-bucket",
      "consistency": "any",
      "database": "vector-database",
      "endpoint": "http://localhost:8086/",
      "org": "my-org",
      "password": "${INFLUXDB_PASSWORD}",
      "retention_policy_name": "autogen",
      "token": "${INFLUXDB_TOKEN}",
      "default_namespace": "service",
      "username": "todd"
    }
  }
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = ["my-source-or-transform-id"]
bucket = "vector-bucket"
consistency = "any"
database = "vector-database"
endpoint = "http://localhost:8086/"
org = "my-org"
password = "${INFLUXDB_PASSWORD}"
retention_policy_name = "autogen"
token = "${INFLUXDB_TOKEN}"
default_namespace = "service"
username = "todd"
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
    bucket: vector-bucket
    consistency: any
    database: vector-database
    endpoint: http://localhost:8086/
    org: my-org
    password: ${INFLUXDB_PASSWORD}
    retention_policy_name: autogen
    token: ${INFLUXDB_TOKEN}
    default_namespace: service
    username: todd
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": ["my-source-or-transform-id"],
      "bucket": "vector-bucket",
      "consistency": "any",
      "database": "vector-database",
      "endpoint": "http://localhost:8086/",
      "org": "my-org",
      "password": "${INFLUXDB_PASSWORD}",
      "retention_policy_name": "autogen",
      "tags": "field1",
      "token": "${INFLUXDB_TOKEN}",
      "default_namespace": "service",
      "username": "todd"
    }
  }
}
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = ["my-source-or-transform-id"]
bucket = "vector-bucket"
consistency = "any"
database = "vector-database"
endpoint = "http://localhost:8086/"
org = "my-org"
password = "${INFLUXDB_PASSWORD}"
retention_policy_name = "autogen"
tags = "field1"
token = "${INFLUXDB_TOKEN}"
default_namespace = "service"
username = "todd"
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
    bucket: vector-bucket
    consistency: any
    database: vector-database
    endpoint: http://localhost:8086/
    org: my-org
    password: ${INFLUXDB_PASSWORD}
    retention_policy_name: autogen
    tags: field1
    token: ${INFLUXDB_TOKEN}
    default_namespace: service
    username: todd

batch

optional object
Configures the sink batching behavior.

batch.max_events

optional uint
The maximum size of a batch, in events, before it is flushed.
default: 20 (events)

batch.timeout_secs

optional uint
The maximum age of a batch before it is flushed.
default: 1 (seconds)
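For example, to flush larger, less frequent batches, you can raise both limits; the values below are illustrative, not recommendations:

```toml
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = ["my-source-or-transform-id"]
endpoint = "http://localhost:8086/"
database = "vector-database"
batch.max_events = 100
batch.timeout_secs = 5
```

A batch is flushed as soon as either limit is reached, whichever comes first.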

bucket

required string
The destination bucket for writes into InfluxDB 2.

consistency

common optional string
Sets the write consistency for points written to InfluxDB 1.

database

required string
Sets the target database for the write into InfluxDB 1.
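Because the version-specific options are easy to mix up, a sketch of one configuration per InfluxDB version may help; the names and credentials here are illustrative:

```toml
# InfluxDB 1 uses database, plus optional consistency,
# retention_policy_name, username, and password.
[sinks.influxdb_v1]
type = "influxdb_metrics"
inputs = ["my-source-or-transform-id"]
endpoint = "http://localhost:8086/"
database = "vector-database"
username = "todd"
password = "${INFLUXDB_PASSWORD}"

# InfluxDB 2 uses bucket, org, and token instead.
[sinks.influxdb_v2]
type = "influxdb_metrics"
inputs = ["my-source-or-transform-id"]
endpoint = "http://localhost:8086/"
bucket = "vector-bucket"
org = "my-org"
token = "${INFLUXDB_TOKEN}"
```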

default_namespace

common optional string
Used as the namespace for metrics that don't have one. The namespace is prefixed to the metric's name.

encoding

required object
Configures the encoding-specific sink behavior.

encoding.except_fields

optional array
Prevents the sink from encoding the specified fields.

encoding.only_fields

optional array
Makes the sink encode only the specified fields.

encoding.timestamp_format

optional string enum literal
How to format event timestamps.
Enum options
rfc3339: Formats as an RFC 3339 string
unix: Formats as a Unix timestamp
default: rfc3339
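For example, to emit Unix timestamps and skip a field; the field name here is illustrative:

```toml
[sinks.my_sink_id.encoding]
timestamp_format = "unix"
except_fields = ["host"]
```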

endpoint

required string
The endpoint to send data to.

healthcheck

common optional object
Health check options for the sink.

healthcheck.enabled

optional bool
Enables/disables the healthcheck upon Vector boot.
default: true

inputs

required [string]

A list of upstream source or transform IDs. Wildcards (*) are supported but must be the last character in the ID.

See configuration for more info.

Array string literal
Examples
[
  "my-source-or-transform-id",
  "prefix-*"
]

org

required string
Specifies the destination organization for writes into InfluxDB 2.

password

common optional string
Sets the password for authentication if you’ve enabled authentication for the write into InfluxDB 1.

request

optional object
Configures the sink request behavior.

request.adaptive_concurrency

optional object
Configures the adaptive concurrency algorithm. These values have been tuned by optimizing simulated results; in general, you should not need to adjust them.

request.concurrency

optional uint
The maximum number of in-flight requests allowed at any given time.
default: 5 (requests)

request.rate_limit_duration_secs

optional uint
The time window, in seconds, used for the rate_limit_num option.
default: 1 (seconds)

request.rate_limit_num

optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 5

request.retry_attempts

optional uint
The maximum number of retries to make for failed requests. The default, for all intents and purposes, represents an infinite number of retries.
default: 1.8446744073709552e+19

request.retry_initial_backoff_secs

optional uint
The amount of time to wait before attempting the first retry for a failed request. Once the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 (seconds)

request.retry_max_duration_secs

optional uint
The maximum amount of time, in seconds, to wait between retries.
default: 10 (seconds)

request.timeout_secs

optional uint
The maximum time a request can take before being aborted. It is highly recommended that you do not lower this value below the service’s internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 (seconds)
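Putting a few of these together, a sketch of a manually tuned request configuration; the values are illustrative:

```toml
[sinks.my_sink_id.request]
concurrency = 10
rate_limit_duration_secs = 1
rate_limit_num = 10
timeout_secs = 60
```

This caps in-flight requests at 10 and allows at most 10 requests per 1-second window.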

retention_policy_name

common optional string
Sets the target retention policy for the write into InfluxDB 1.

tags

optional [string]
A list of additional metric fields to attach to each line-protocol point as a tag. Note: if the set of tag values has high cardinality, this will also increase cardinality in InfluxDB.
Array string field_path
Examples
[
  "field1",
  "parent.child_field"
]

token

required string
Authentication token for InfluxDB 2.

username

common optional string
Sets the username for authentication if you’ve enabled authentication for the write into InfluxDB 1.

Telemetry

Metrics


events_in_total

counter
The number of events accepted by this component, either from tagged origins, like file and uri, or cumulatively from other origins.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.
container_name optional
The name of the container from which the event originates.
file optional
The file from which the event originates.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the event originates.
peer_path optional
The pathname from which the event originates.
pod_name optional
The name of the pod from which the event originates.
uri optional
The sanitized URI from which the event originates.

events_out_total

counter
The total number of events emitted by this component.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.

Examples

Counter

Given this event...
{
  "metric": {
    "counter": {
      "value": 1.5
    },
    "kind": "incremental",
    "name": "logins",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
    default_namespace: service
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "default_namespace": "service"
    }
  }
}
...this Vector event is produced:
service.logins,metric_type=counter,host=my-host.local value=1.5 1542182950000000011

Distribution

Given this event...
{
  "metric": {
    "distribution": {
      "samples": [
        {
          "rate": 1,
          "value": 1
        },
        {
          "rate": 2,
          "value": 5
        },
        {
          "rate": 3,
          "value": 3
        }
      ],
      "statistic": "histogram"
    },
    "kind": "incremental",
    "name": "sparse_stats",
    "namespace": "app",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Vector event is produced:
app.sparse_stats,metric_type=distribution,host=my-host.local avg=3.333333,count=6,max=5,median=3,min=1,quantile_0.95=5,sum=20 1542182950000000011

Gauge

Given this event...
{
  "metric": {
    "gauge": {
      "value": 1.5
    },
    "kind": "absolute",
    "name": "memory_rss",
    "namespace": "app",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
    default_namespace: service
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "default_namespace": "service"
    }
  }
}
...this Vector event is produced:
app.memory_rss,metric_type=gauge,host=my-host.local value=1.5 1542182950000000011

Histogram

Given this event...
{
  "metric": {
    "histogram": {
      "buckets": [
        {
          "count": 2,
          "upper_limit": 1
        },
        {
          "count": 5,
          "upper_limit": 2.1
        },
        {
          "count": 10,
          "upper_limit": 3
        }
      ],
      "count": 17,
      "sum": 46.2
    },
    "kind": "absolute",
    "name": "requests",
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Vector event is produced:
requests,metric_type=histogram,host=my-host.local bucket_1=2i,bucket_2.1=5i,bucket_3=10i,count=17i,sum=46.2 1542182950000000011

Set

Given this event...
{
  "metric": {
    "kind": "incremental",
    "name": "users",
    "set": {
      "values": [
        "first",
        "another",
        "last"
      ]
    },
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Vector event is produced:
users,metric_type=set,host=my-host.local value=3 1542182950000000011

Summary

Given this event...
{
  "metric": {
    "kind": "absolute",
    "name": "requests",
    "summary": {
      "count": 6,
      "quantiles": [
        {
          "upper_limit": 0.01,
          "value": 1.5
        },
        {
          "upper_limit": 0.5,
          "value": 2
        },
        {
          "upper_limit": 0.99,
          "value": 3
        }
      ],
      "sum": 12.1
    },
    "tags": {
      "host": "my-host.local"
    }
  }
}
...and this configuration...
[sinks.my_sink_id]
type = "influxdb_metrics"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
  my_sink_id:
    type: influxdb_metrics
    inputs:
      - my-source-or-transform-id
{
  "sinks": {
    "my_sink_id": {
      "type": "influxdb_metrics",
      "inputs": [
        "my-source-or-transform-id"
      ]
    }
  }
}
...this Vector event is produced:
requests,metric_type=summary,host=my-host.local count=6i,quantile_0.01=1.5,quantile_0.5=2,quantile_0.99=3,sum=12.1 1542182950000000011

How it works

Health checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error is logged and Vector still proceeds to start.

Require health checks

If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:

vector --config /etc/vector/vector.toml --require-healthy

Disable health checks

If you’d like to disable health checks for this sink, set the healthcheck.enabled option to false.
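A minimal sketch, using the healthcheck.enabled option documented above:

```toml
[sinks.my-sink]
  healthcheck.enabled = false
```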

Partitioning

Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:

[sinks.my-sink]
dynamic_option = "application={{ application_id }}"

In the above example, the application_id for each event will be used to partition outgoing data.

Rate limits & adaptive concurrency

Adaptive Request Concurrency (ARC)

Adaptive Request Concurrency is a feature of Vector that does away with static rate limits and automatically optimizes HTTP concurrency limits based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.

We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with.

To enable, set the request.concurrency option to adaptive:

[sinks.my-sink]
  request.concurrency = "adaptive"

Static rate limits

If Adaptive Request Concurrency is not for you, you can manually set static rate limits with the request.rate_limit_duration_secs, request.rate_limit_num, and request.concurrency options:

[sinks.my-sink]
  request.rate_limit_duration_secs = 1
  request.rate_limit_num = 10
  request.concurrency = 10

Retry policy

Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and backoff rate with the request.retry_attempts and request.retry_initial_backoff_secs options.
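For example, to cap retries and slow the initial backoff; the values here are illustrative:

```toml
[sinks.my-sink]
  request.retry_attempts = 5
  request.retry_initial_backoff_secs = 2
```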

State

This component is stateless, meaning its behavior is consistent across each input.