Datadog Metrics Sink

The Vector datadog_metrics sink sends metrics to the Datadog metrics service.

Configuration

[sinks.my_sink_id]
type = "datadog_metrics" # required
inputs = ["my-source-or-transform-id"] # required
api_key = "${DATADOG_API_KEY_ENV_VAR}" # required
default_namespace = "service" # optional, no default
healthcheck = true # optional, default
  • common, required, string

    api_key

    Datadog API key

  • optional, table

    batch

    Configures the sink batching behavior; a combined batch and request example follows this options list.

    • common, optional, uint

      max_events

      The maximum size of a batch, in events, before it is flushed.

      • Default: 20 (events)
    • common, optional, uint

      timeout_secs

      The maximum age of a batch before it is flushed.

      • Default: 1 (seconds)
  • common, optional, string

    default_namespace

    Used as a namespace for metrics that don't already have one. The namespace is prefixed to the metric's name.

  • optional, string

    endpoint

    The endpoint to send data to.

    • Only relevant when: region is not set
  • common, optional, bool

    healthcheck

    Enables/disables the sink healthcheck upon Vector boot. See Health checks for more info.

    • Default: true
  • optional, table

    request

    Configures the sink request behavior.

    • optional, table

      adaptive_concurrency

      Configure the adaptive concurrency algorithms. These values have been tuned by optimizing simulated results. In general you should not need to adjust these.

      • optional, float
        decrease_ratio

        The fraction of the current value to set the new concurrency limit when decreasing the limit. Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases. Note that the new limit is rounded down after applying this ratio.

        • Default: 0.9
      • optional, float
        ewma_alpha

        The adaptive concurrency algorithm uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. This value controls how heavily new measurements are weighted compared to older ones. Valid values are greater than 0 and less than 1. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.

        • Default: 0.7
      • optional, float
        rtt_threshold_ratio

        When comparing the past RTT average to the current measurements, we ignore changes that are less than this ratio higher than the past RTT. Valid values are greater than or equal to 0. Larger values cause the algorithm to ignore larger increases in the RTT.

        • Default: 0.05
    • common, optional, uint

      concurrency

      The maximum number of in-flight requests allowed at any given time, or "adaptive" to allow Vector to automatically set the limit based on current network and service conditions.

      • Default: 5 (requests)
    • common, optional, uint

      rate_limit_duration_secs

      The time window, in seconds, used for the rate_limit_num option.

      • Default: 1 (seconds)
    • common, optional, uint

      rate_limit_num

      The maximum number of requests allowed within the rate_limit_duration_secs time window.

      • Default: 5
    • optional, uint

      retry_attempts

      The maximum number of retries to make for failed requests. The default, for all intents and purposes, represents an infinite number of retries.

      • Default: 18446744073709551615
    • optional, uint

      retry_initial_backoff_secs

      The amount of time to wait before attempting the first retry for a failed request. Once the first retry has failed, the Fibonacci sequence is used to select future backoffs.

      • Default: 1 (seconds)
    • optional, uint

      retry_max_duration_secs

      The maximum amount of time, in seconds, to wait between retries.

      • Default: 10 (seconds)
    • common, optional, uint

      timeout_secs

      The maximum time a request can take before being aborted. It is highly recommended that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.

      • Default: 60 (seconds)
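
Pulling a few of the options above together, a sink with explicit batch and request tuning might look like the sketch below. The option names match the reference above; the endpoint URL and the tuned values are purely illustrative, not recommendations.

vector.toml
[sinks.my_sink_id]
type = "datadog_metrics"
inputs = ["my-source-or-transform-id"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"
default_namespace = "service"          # prefixed to metric names that lack a namespace
endpoint = "https://api.datadoghq.eu"  # illustrative alternative endpoint
batch.max_events = 20                  # flush after 20 events...
batch.timeout_secs = 1                 # ...or after 1 second, whichever comes first
request.concurrency = "adaptive"       # or a fixed number of in-flight requests
request.rate_limit_duration_secs = 1
request.rate_limit_num = 5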

Telemetry

This component provides the following metrics, which can be retrieved through the internal_metrics source. See the metrics section in the monitoring page for more info. A minimal internal_metrics pipeline is sketched after the list below.

  • counter

    processed_events_total

    The total number of events processed by this component. This metric includes the following tags:

    • component_kind - The Vector component kind.

    • component_name - The Vector component ID.

    • component_type - The Vector component type.

    • file - The file that produced the error.

    • instance - The Vector instance identified by host and port.

    • job - The name of the job producing Vector metrics.

  • counter

    processed_bytes_total

    The total number of bytes processed by the component. This metric includes the following tags:

    • component_kind - The Vector component kind.

    • component_name - The Vector component ID.

    • component_type - The Vector component type.

    • instance - The Vector instance identified by host and port.

    • job - The name of the job producing Vector metrics.
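
The counters above can be shipped to Datadog by pointing an internal_metrics source at this sink. This is only a sketch; the component IDs are placeholders.

vector.toml
[sources.vector_internal]
type = "internal_metrics" # emits the counters listed above

[sinks.datadog]
type = "datadog_metrics"
inputs = ["vector_internal"]
api_key = "${DATADOG_API_KEY_ENV_VAR}"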

How It Works

Health checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error will be logged and Vector will proceed to start.

Require health checks

If you'd like to exit immediately upon a health check failure, you can pass the --require-healthy flag:

vector --config /etc/vector/vector.toml --require-healthy

Disable health checks

If you'd like to disable health checks for this sink, you can set the healthcheck option to false.
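
For example, reusing the placeholder sink ID from the other snippets on this page:

vector.toml
[sinks.my-sink]
healthcheck = false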

Partitioning

Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:

vector.toml
[sinks.my-sink]
dynamic_option = "application={{ application_id }}"

In the above example, the application_id for each event will be used to partition outgoing data.

Rate limits & adaptive concurrency

Adaptive Request Concurrency (ARC)

Adaptive Request Concurrency is a feature of Vector that does away with static rate limits and automatically optimizes HTTP concurrency limits based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more detail.

We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with.

To enable, set the request.concurrency option to adaptive:

vector.toml
[sinks.my-sink]
request.concurrency = "adaptive"

Static rate limits

If Adaptive Request Concurrency is not for you, you can manually set static rate limits with the request.rate_limit_duration_secs, request.rate_limit_num, and request.concurrency options:

vector.toml
[sinks.my-sink]
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
request.concurrency = 10

Retry policy

Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and the backoff behavior with the request.retry_attempts, request.retry_initial_backoff_secs, and request.retry_max_duration_secs options.
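
As a sketch, the retry behavior could be tuned as follows; the option names come from the reference above and the values are arbitrary examples.

vector.toml
[sinks.my-sink]
request.retry_attempts = 10             # stop after 10 retries instead of retrying indefinitely
request.retry_initial_backoff_secs = 1  # wait 1 second before the first retry; later backoffs follow the Fibonacci sequence
request.retry_max_duration_secs = 30    # never wait more than 30 seconds between retries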