Honeycomb

Deliver log events to Honeycomb

status: beta delivery: at-least-once egress: batch state: stateless

Configuration

Example configurations

{
  "sinks": {
    "my_sink_id": {
      "type": "honeycomb",
      "inputs": ["my-source-or-transform-id"],
      "api_key": "${HONEYCOMB_API_KEY}",
      "dataset": "my-honeycomb-dataset"
    }
  }
}
[sinks.my_sink_id]
type = "honeycomb"
inputs = ["my-source-or-transform-id"]
api_key = "${HONEYCOMB_API_KEY}"
dataset = "my-honeycomb-dataset"
---
sinks:
  my_sink_id:
    type: honeycomb
    inputs:
      - my-source-or-transform-id
    api_key: ${HONEYCOMB_API_KEY}
    dataset: my-honeycomb-dataset

api_key

required string
The team key that will be used to authenticate against Honeycomb.

batch

optional object
Configures the sink batching behavior.

batch.max_bytes

optional uint
The maximum size of a batch, in bytes, before it is flushed.
default: 5242880 (bytes)

batch.timeout_secs

optional uint
The maximum age of a batch before it is flushed.
default: 1 (seconds)
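
For example, a minimal sketch of tightening the batching for the example sink above; the values are illustrative, not recommendations:

[sinks.my_sink_id.batch]
max_bytes = 1048576   # flush once the batch reaches ~1 MB
timeout_secs = 5      # or once the batch is 5 seconds old, whichever comes first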

buffer

optional object
Configures the sink-specific buffer behavior.

buffer.max_events

optional uint
The maximum number of events allowed in the buffer.
Relevant when: type = "memory"
default: 500 (events)

buffer.max_size

required uint
The maximum size of the buffer on disk.
Relevant when: type = "disk"

buffer.type

optional string enum literal
The buffer’s type and storage mechanism.
Enum options
  disk: Stores the sink's buffer on disk. This is less performant, but durable. Data will not be lost between restarts. It will also hold data in memory to enhance performance. WARNING: This may stall the sink if disk performance isn't on par with the throughput. For comparison, AWS gp2 volumes are usually too slow for common cases.
  memory: Stores the sink's buffer in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully.
default: memory

buffer.when_full

optional string enum literal
The behavior when the buffer becomes full.
Enum options
  block: Applies back pressure when the buffer is full. This prevents data loss, but will cause data to pile up on the edge.
  drop_newest: Drops new data as it's received. This data is lost. This should be used when performance is the highest priority.
default: block
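
As a sketch, switching the example sink to a durable disk buffer might look like the following; the size is an arbitrary example, and disk buffers write under Vector's global data_dir:

[sinks.my_sink_id.buffer]
type = "disk"
max_size = 104857600   # ~100 MB on disk; illustrative value only
when_full = "block"    # apply back pressure rather than dropping events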

dataset

required string
The dataset that Vector will send logs to.

encoding

required object
Configures the sink's encoding behavior.

encoding.except_fields

optional array
Prevent the sink from encoding the specified fields.

encoding.only_fields

optional array
Makes the sink encode only the specified fields.

encoding.timestamp_format

optional string enum literal
How to format event timestamps.
Enum options
  rfc3339: Formats as an RFC 3339 string
  unix: Formats as a unix timestamp
default: rfc3339
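
Combining these options for the example sink might look like this sketch; the field name is hypothetical:

[sinks.my_sink_id.encoding]
except_fields = ["internal_debug"]   # hypothetical field to omit from encoded events
timestamp_format = "unix"            # send unix timestamps instead of RFC 3339 strings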

healthcheck

common optional object
Health check options for the sink.

healthcheck.enabled

optional bool
Enables/disables the healthcheck upon Vector boot.
default: true

inputs

required [string]

A list of upstream source or transform IDs. Wildcards (*) are supported but must be the last character in the ID.

See configuration for more info.

Array string literal
Examples
[
  "my-source-or-transform-id",
  "prefix-*"
]

request

optional object
Configures the sink request behavior.

request.adaptive_concurrency

optional object
Configure the adaptive concurrency algorithms. These values have been tuned by optimizing simulated results. In general you should not need to adjust these.

request.concurrency

optional uint
The maximum number of in-flight requests allowed at any given time.
default: 5 (requests)

request.rate_limit_duration_secs

optional uint
The time window, in seconds, used for the rate_limit_num option.
default: 1 (seconds)

request.rate_limit_num

optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
default: 5

request.retry_attempts

optional uint
The maximum number of retries to make for failed requests. The default, for all intents and purposes, represents an infinite number of retries.
default: 18446744073709551615

request.retry_initial_backoff_secs

optional uint
The amount of time to wait before attempting the first retry for a failed request. Once the first retry has failed, the Fibonacci sequence is used to select future backoffs.
default: 1 (seconds)

request.retry_max_duration_secs

optional uint
The maximum amount of time, in seconds, to wait between retries.
default: 10 (seconds)

request.timeout_secs

optional uint
The maximum time a request can take before being aborted. It is highly recommended that you do not lower this value below the service’s internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
default: 60 (seconds)
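
Putting the request options together, a hedged sketch for the example sink; the numbers are illustrative, not recommendations:

[sinks.my_sink_id.request]
concurrency = 10                # allow up to 10 in-flight requests
rate_limit_duration_secs = 1    # window used by rate_limit_num
rate_limit_num = 100            # up to 100 requests per 1-second window
timeout_secs = 60               # keep at or above the downstream service's timeout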

Telemetry

Metrics

events_in_total

counter
The number of events accepted by this component, either from a tagged origin such as file or uri, or cumulatively from other origins.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.
container_name optional
The name of the container from which the event originates.
file optional
The file from which the event originates.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the event originates.
peer_path optional
The pathname from which the event originates.
pod_name optional
The name of the pod from which the event originates.
uri optional
The sanitized URI from which the event originates.

events_out_total

counter
The total number of events emitted by this component.
component_kind required
The Vector component kind.
component_name required
The Vector component name.
component_type required
The Vector component type.
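
These counters are emitted as Vector internal metrics. As a sketch, assuming your Vector version includes the internal_metrics source and the prometheus_exporter sink, they could be exposed for scraping like this:

[sources.vector_internal]
type = "internal_metrics"

[sinks.vector_prometheus]
type = "prometheus_exporter"
inputs = ["vector_internal"]
address = "0.0.0.0:9598"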

How it works

Buffers and batches

This component buffers and batches data. Vector treats buffers and batches as sink-specific concepts rather than global ones, which isolates sinks, keeps service disruptions contained, and honors delivery guarantees.

Batches are flushed when one of two conditions is met:

  1. The batch age meets or exceeds the configured timeout_secs.
  2. The batch size meets or exceeds the configured max_bytes.

Buffers are controlled via the buffer.* options.

Health checks

Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error is logged and Vector proceeds to start.

Require health checks

If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:

vector --config /etc/vector/vector.toml --require-healthy

Disable health checks

If you'd like to disable health checks for this sink, you can set the healthcheck option to false.
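
For example, using the healthcheck.enabled option documented above:

[sinks.my_sink_id.healthcheck]
enabled = false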

Partitioning

Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:

[sinks.my-sink]
dynamic_option = "application={{ application_id }}"

In the above example, the application_id for each event will be used to partition outgoing data.

Rate limits & adaptive concurrency

Adaptive Request Concurrency (ARC)

Adaptive Request Concurrency is a feature of Vector that does away with static rate limits and automatically optimizes HTTP concurrency limits based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post for more detail.

We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with.

To enable, set the request.concurrency option to adaptive:

[sinks.my-sink]
  request.concurrency = "adaptive"

Static rate limits

If Adaptive Request Concurrency is not for you, you can manually set static rate limits with the request.rate_limit_duration_secs, request.rate_limit_num, and request.concurrency options:

[sinks.my-sink]
  request.rate_limit_duration_secs = 1
  request.rate_limit_num = 10
  request.concurrency = 10

Retry policy

Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and the backoff rate with the request.retry_attempts and request.retry_initial_backoff_secs options.
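
For example, a sketch that bounds retries more tightly; the values are illustrative:

[sinks.my-sink]
  request.retry_attempts = 10               # stop after 10 retries instead of retrying indefinitely
  request.retry_initial_backoff_secs = 2    # wait 2 seconds before the first retry
  request.retry_max_duration_secs = 30      # cap the wait between retries at 30 seconds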

Setup

  1. Register for a free account at honeycomb.io

  2. Once registered, create a new dataset. When presented with a list of log shippers, select the curl option and use the API key provided with the curl example, as in the sketch below.
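
A minimal end-to-end sketch, assuming the key from step 2 is exported as HONEYCOMB_API_KEY and a stdin source is used for testing:

[sources.test_logs]
type = "stdin"

[sinks.honeycomb]
type = "honeycomb"
inputs = ["test_logs"]
api_key = "${HONEYCOMB_API_KEY}"
dataset = "my-honeycomb-dataset"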

State

This component is stateless, meaning its behavior is consistent across each input.