Datadog metrics

Publish metric events to Datadog

status: stable delivery: at-least-once egress: batch state: stateless

Configuration

Example configurations

{
  "sinks": {
    "my_sink_id": {
      "type": "datadog_metrics",
      "inputs": "my-source-or-transform-id",
      "api_key": "${DATADOG_API_KEY_ENV_VAR}",
      "default_namespace": "service"
    }
  }
}
[sinks.my_sink_id]
type = "datadog_metrics"
inputs = ["my-source-or-transform-id"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"
default_namespace = "service"
---
sinks:
  my_sink_id:
    type: datadog_metrics
    inputs: ["my-source-or-transform-id"]
    api_key: ${DATADOG_API_KEY_ENV_VAR}
    default_namespace: service
{
  "sinks": {
    "my_sink_id": {
      "type": "datadog_metrics",
      "inputs": "my-source-or-transform-id",
      "api_key": "${DATADOG_API_KEY_ENV_VAR}",
      "endpoint": "127.0.0.1:8080",
      "region": "us",
      "default_namespace": "service"
    }
  }
}
[sinks.my_sink_id]
type = "datadog_metrics"
inputs = ["my-source-or-transform-id"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"
endpoint = "127.0.0.1:8080"
region = "us"
default_namespace = "service"
---
sinks:
  my_sink_id:
    type: datadog_metrics
    inputs: ["my-source-or-transform-id"]
    api_key: ${DATADOG_API_KEY_ENV_VAR}
    endpoint: 127.0.0.1:8080
    region: us
    default_namespace: service

api_key

required string
Datadog API key

batch

optional object
Configures the sink batching behavior.

batch.max_events

optional uint
The maximum size of a batch, in events, before it is flushed.
default: 20 (events)

batch.timeout_secs

optional uint
The maximum age of a batch before it is flushed.
default: 1 (seconds)
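Taken together, the two options above can be set in the sink's batch table. A minimal TOML sketch using the documented defaults as a starting point (the sink name and input ID are placeholders):

```toml
[sinks.my_sink_id]
type = "datadog_metrics"
inputs = ["my-source-or-transform-id"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"

[sinks.my_sink_id.batch]
max_events = 20   # flush once 20 events have accumulated...
timeout_secs = 1  # ...or after 1 second, whichever comes first
```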

default_namespace

common optional string
Used as a namespace for metrics that don’t have one. The namespace is prefixed to the metric’s name.
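For instance, assuming Datadog's usual dot-joined metric naming, and with a hypothetical metric name chosen only for illustration:

```toml
[sinks.my_sink_id]
type = "datadog_metrics"
inputs = ["my-source-or-transform-id"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"
default_namespace = "service"
# A metric arriving without a namespace, e.g. "requests_total",
# would be reported as "service.requests_total"; metrics that
# already carry a namespace are left unchanged.
```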

endpoint

optional string
The endpoint to send data to.

healthcheck

common optional object
Health check options for the sink.

healthcheck.enabled

optional bool
Enables/disables the healthcheck upon Vector boot.
default: true

inputs

required [string]

A list of upstream source or transform IDs. Wildcards (*) are supported but must be the last character in the ID.

See configuration for more info.

Type: array of string literals
Examples:
[
  "my-source-or-transform-id",
  "prefix-*"
]

region

optional string enum
The region to send data to.
Enum options (string literal):
Option  Description
eu      Europe
us      United States

request

optional object
Configures the sink request behavior.

request.adaptive_concurrency

optional object
Configures the adaptive concurrency algorithms. These values have been tuned by optimizing simulated results. In general, you should not need to adjust them.

request.concurrency

optional uint
The maximum number of in-flight requests allowed at any given time.
default: 5 (requests)

request.rate_limit_duration_secs

optional uint
The time window, in seconds, used for the rate_limit_num option.
default: 1 (seconds)

request.rate_limit_num

optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 5

request.retry_attempts

optional uint
The maximum number of retries to make for failed requests. The default, for all intents and purposes, represents an infinite number of retries.
default: 1.8446744073709552e+19

request.retry_initial_backoff_secs

optional uint
The amount of time to wait before attempting the first retry for a failed request. Once the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 (seconds)

request.retry_max_duration_secs

optional uint
The maximum amount of time, in seconds, to wait between retries.
default: 10 (seconds)

request.timeout_secs

optional uint
The maximum time a request can take before being aborted. It is highly recommended that you do not lower this value below the service’s internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 (seconds)

Telemetry

Metrics


events_in_total

counter
The number of events accepted by this component, either from a tagged origin such as file or uri, or cumulatively from other origins.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.
container_name optional
The name of the container from which the event originates.
file optional
The file from which the event originates.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the event originates.
peer_path optional
The pathname from which the event originates.
pod_name optional
The name of the pod from which the event originates.
uri optional
The sanitized URI from which the event originates.

events_out_total

counter
The total number of events emitted by this component.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.

How it works

Health checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error is logged and Vector proceeds to start.

Require health checks

If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:

vector --config /etc/vector/vector.toml --require-healthy

Disable health checks

If you’d like to disable health checks for this sink you can set the healthcheck option to false.
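Using the healthcheck.enabled option documented above, a minimal sketch:

```toml
[sinks.my_sink_id]
type = "datadog_metrics"
inputs = ["my-source-or-transform-id"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"

[sinks.my_sink_id.healthcheck]
enabled = false  # skip the health check at boot
```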

Partitioning

Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge, and you can use event fields to create dynamic values. For example:

[sinks.my-sink]
dynamic_option = "application={{ application_id }}"

In the above example, the application_id for each event will be used to partition outgoing data.

Rate limits & adaptive concurrency

Adaptive Request Concurrency (ARC)

Adaptive Request Concurrency is a feature of Vector that does away with static rate limits and automatically optimizes HTTP concurrency limits based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more details.

We highly recommend enabling this feature, as it improves the performance and reliability of both Vector and the systems it communicates with.

To enable, set the request.concurrency option to adaptive:

[sinks.my-sink]
  request.concurrency = "adaptive"

Static rate limits

If Adaptive Request Concurrency is not for you, you can manually set static rate limits with the request.rate_limit_duration_secs, request.rate_limit_num, and request.concurrency options:

[sinks.my-sink]
  request.rate_limit_duration_secs = 1
  request.rate_limit_num = 10
  request.concurrency = 10

Retry policy

Vector will retry failed requests (status 429, >= 500, except 501). Other responses will not be retried. You can control the number of retry attempts and backoff rate with the request.retry_attempts and request.retry_initial_backoff_secs options.
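As a sketch, the retry options documented above could be tuned like so (the values are illustrative, not recommendations):

```toml
[sinks.my_sink_id.request]
retry_attempts = 10             # cap retries rather than the effectively infinite default
retry_initial_backoff_secs = 1  # wait 1 second before the first retry
retry_max_duration_secs = 10    # later Fibonacci backoffs are capped at 10 seconds
```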

State

This component is stateless, meaning its behavior is consistent across each input.