OpenTelemetry
Send OTLP data through HTTP.
Requirements
Configuration
Example configurations
{
"sinks": {
"my_sink_id": {
"type": "opentelemetry",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
[sinks.my_sink_id]
type = "opentelemetry"
inputs = [ "my-source-or-transform-id" ]
sinks:
my_sink_id:
type: opentelemetry
inputs:
- my-source-or-transform-id
buffer
optional object
Configures the buffering behavior for this sink.
More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.
buffer.max_events
optional uint
The maximum number of events allowed in the buffer.
Relevant when: type = "memory"
Default: 500
buffer.max_size
required uint
The maximum size of the buffer on disk.
Must be at least ~256 megabytes (268435488 bytes).
Relevant when: type = "disk"
buffer.type
optional string literal enum
The type of buffer to use.
Option | Description |
---|---|
disk | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. |
memory | Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. |
Default: memory
buffer.when_full
optional string literal enum
Event handling behavior when the buffer is full.
Option | Description |
---|---|
block | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. |
drop_newest | Drops the event instead of waiting for free space in the buffer. The event is intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. |
Default: block
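For example, a minimal sketch (with placeholder IDs and endpoint) of a durable disk buffer that applies backpressure when full:

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    buffer:
      type: disk
      max_size: 268435488 # the minimum allowed size (~256 megabytes)
      when_full: block    # apply backpressure instead of dropping events
    protocol:
      type: http
      uri: http://localhost:5318/v1/logs # placeholder endpoint
      encoding:
        codec: json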
healthcheck
optional object
Healthcheck configuration.
healthcheck.enabled
optional bool
Whether or not to check the health of the sink when Vector starts up.
Default: true
inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.
protocol
required object
protocol.acknowledgements
optional object
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
protocol.acknowledgements.enabled
optional bool
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any connected source that also supports end-to-end acknowledgements waits for events to be acknowledged by all connected sinks before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
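As a sketch, enabling end-to-end acknowledgements on this sink (IDs and endpoint are placeholders):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: http://localhost:5318/v1/logs # placeholder endpoint
      encoding:
        codec: json
      acknowledgements:
        enabled: true # sources feeding this sink only ack once the sink has acked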
protocol.auth
optional object
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
protocol.auth.password
required string literal
The basic authentication password.
Relevant when: strategy = "basic"
protocol.auth.strategy
required string literal enum
The authentication strategy to use.
Option | Description |
---|---|
basic | Basic authentication. The username and password are concatenated and encoded via base64. |
bearer | Bearer authentication. The bearer token value (OAuth2, JWT, etc.) is passed as-is. |
protocol.auth.token
required string literal
The bearer authentication token.
Relevant when: strategy = "bearer"
protocol.auth.user
required string literal
The basic authentication username.
Relevant when: strategy = "basic"
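For illustration, a hedged sketch of bearer authentication (the endpoint and the OTEL_TOKEN environment variable are placeholders):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: https://collector.example.com:4318/v1/logs # placeholder; pair auth with HTTPS
      auth:
        strategy: bearer
        token: "${OTEL_TOKEN}" # placeholder env var; avoid hardcoding secrets
      encoding:
        codec: json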
protocol.batch
optional object
protocol.batch.max_bytes
optional uint
The maximum size of a batch that is processed by a sink.
This is based on the uncompressed size of the batched events, before they are serialized/compressed.
Default: 1e+07 (bytes)
protocol.batch.max_events
optional uint
The maximum number of events in a batch before it is flushed.
protocol.batch.timeout_secs
optional float
The maximum age of a batch before it is flushed.
Default: 1 (seconds)
protocol.compression
optional string literal enum
Compression configuration.
All compression algorithms use the default compression level unless otherwise specified.
Default: none
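For instance, a sketch that flushes smaller batches more frequently and compresses them (this assumes the receiving collector accepts gzip-encoded requests):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: http://localhost:5318/v1/logs # placeholder endpoint
      encoding:
        codec: json
      compression: gzip # assumption: the receiver accepts gzip
      batch:
        max_events: 1000  # flush after 1000 events...
        timeout_secs: 0.5 # ...or after 0.5 seconds, whichever comes first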
protocol.encoding
required object
Configures how events are encoded into raw bytes.
protocol.encoding.avro
required object
Apache Avro-specific encoder options.
Relevant when: codec = "avro"
protocol.encoding.avro.schema
required string literal
The Avro schema.
protocol.encoding.cef
required object
Relevant when: codec = "cef"
protocol.encoding.cef.device_event_class_id
required string literal
protocol.encoding.cef.device_product
required string literal
protocol.encoding.cef.device_vendor
required string literal
protocol.encoding.cef.device_version
required string literal
protocol.encoding.cef.extensions
optional object
protocol.encoding.cef.name
required string literal
protocol.encoding.cef.severity
required string literal
A path that points to the field of a log event that reflects the importance of the event.
It must point to a number from 0 to 10, where 0 is the lowest and 10 is the highest. Defaults to "cef.severity".
protocol.encoding.cef.version
required string literal enum
Option | Description |
---|---|
V0 | CEF specification version 0.1. |
V1 | CEF specification version 1.x. |
protocol.encoding.codec
required string literal enum
The codec to use for encoding events.
Option | Description |
---|---|
avro | Encodes an event as an Apache Avro message. |
cef | Encodes an event as a CEF (Common Event Format) formatted message. |
csv | Encodes an event as a CSV message. This codec must be configured with fields to encode. |
gelf | Encodes an event as a GELF message. This codec is experimental for the following reason: the GELF specification is more strict than the actual Graylog receiver. Vector's encoder currently adheres more strictly to the GELF spec, with the exception that some characters such as @ are allowed in field names. Other GELF codecs, such as Loki's, use a Go SDK that is maintained by Graylog and is much more relaxed than the GELF spec. Going forward, Vector will use that Go SDK as the reference implementation, which means the codec may continue to relax the enforcement of the specification. |
json | Encodes an event as JSON. |
logfmt | Encodes an event as a logfmt message. |
native | Encodes an event in the native Protocol Buffers format. This codec is experimental. |
native_json | Encodes an event in the native JSON format. This codec is experimental. |
protobuf | Encodes an event as a Protobuf message. |
raw_message | No encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. |
text | Plain text encoding. This encoding uses the message field of a log event. Be careful if you are modifying your log events (for example, by using a remap transform) and removing the message field while doing additional parsing on it, as this could lead to the encoding emitting empty strings for the given event. |
protocol.encoding.csv
required object
The CSV serializer options.
Relevant when: codec = "csv"
protocol.encoding.csv.capacity
optional uint
Sets the capacity (in bytes) of the internal buffer used in the CSV writer.
Default: 8192
protocol.encoding.csv.delimiter
optional ascii_char
The field delimiter to use when writing CSV.
Default: ,
protocol.encoding.csv.double_quote
optional bool
Enables double quote escapes.
This is enabled by default, but it can be disabled. When disabled, quotes in field data are escaped instead of doubled.
Default: true
protocol.encoding.csv.escape
optional ascii_char
The escape character to use when writing CSV.
In some variants of CSV, quotes are escaped using a special escape character like \ (instead of escaping quotes by doubling them).
To use this, double_quote needs to be disabled as well; otherwise it is ignored.
Default: "
protocol.encoding.csv.fields
required [string]
Configures the fields that will be encoded, as well as the order in which they appear in the output.
If a field is not present in the event, the output for that field is an empty string.
Values of type Array, Object, and Regex are not supported, and the output for them is an empty string.
protocol.encoding.csv.quote
optional ascii_char
The quote character to use when writing CSV.
Default: "
protocol.encoding.csv.quote_style
optional string literal enum
Option | Description |
---|---|
always | Always puts quotes around every field. |
necessary | Puts quotes around fields only when necessary. They are necessary when fields contain a quote, delimiter, or record terminator. Quotes are also necessary when writing an empty record (which is indistinguishable from a record with one empty field). |
never | Never writes quotes, even if it produces invalid CSV data. |
non_numeric | Puts quotes around all fields that are non-numeric. Namely, when writing a field that does not parse as a valid float or integer, quotes are used even if they aren't strictly necessary. |
Default: necessary
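A hedged sketch of CSV encoding (the endpoint and field names are placeholders for whatever your events contain):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: http://localhost:8080/ingest # placeholder endpoint
      encoding:
        codec: csv
        csv:
          fields: ["timestamp", "severity_text", "message"] # placeholder field names
          quote_style: necessary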
protocol.encoding.except_fields
optional [string]
List of fields that are excluded from the encoded event.
protocol.encoding.metric_tag_values
optional string literal enum
Controls how metric tag values are encoded.
When set to single, only the last non-bare value of tags is displayed with the metric. When set to full, all metric tags are exposed as separate assignments.
Relevant when: codec = "json" or codec = "text"
Option | Description |
---|---|
full | All tags are exposed as arrays of either string or null values. |
single | Tag values are exposed as single strings, the same as they were before this config option. Tags with multiple values show the last assigned value, and null values are ignored. |
Default: single
protocol.encoding.only_fields
optional [string]
List of fields that are included in the encoded event.
protocol.encoding.protobuf
required object
Options for the Protobuf serializer.
Relevant when: codec = "protobuf"
protocol.encoding.protobuf.desc_file
required string literal
The path to the protobuf descriptor set file.
This file is the output of protoc -o <path> ...
protocol.encoding.protobuf.message_type
required string literal
The name of the message type to use for serializing.
protocol.encoding.timestamp_format
optional string literal enum
Format used for timestamp fields.
Option | Description |
---|---|
rfc3339 | Represent the timestamp as a RFC 3339 timestamp. |
unix | Represent the timestamp as a Unix timestamp. |
unix_float | Represent the timestamp as a Unix timestamp in floating point. |
unix_ms | Represent the timestamp as a Unix timestamp in milliseconds. |
unix_ns | Represent the timestamp as a Unix timestamp in nanoseconds. |
unix_us | Represent the timestamp as a Unix timestamp in microseconds. |
protocol.framing
optional object
Framing configuration.
protocol.framing.character_delimited
required object
Options for the character delimited encoder.
Relevant when: method = "character_delimited"
protocol.framing.character_delimited.delimiter
required ascii_char
The ASCII (7-bit) character that is used to delimit byte sequences.
protocol.framing.length_delimited
required object
Options for the length delimited encoder.
Relevant when: method = "length_delimited"
protocol.framing.length_delimited.length_field_is_big_endian
optional bool
Length field byte order (true for big endian).
Default: true
protocol.framing.length_delimited.length_field_length
optional uint
Number of bytes representing the field length.
Default: 4
protocol.framing.length_delimited.max_frame_length
optional uint
Maximum frame length.
Default: 8.388608e+06 (bytes)
protocol.framing.method
required string literal enum
The framing method.
Option | Description |
---|---|
bytes | Event data is not delimited at all. |
character_delimited | Event data is delimited by a single ASCII (7-bit) character. |
length_delimited | Event data is prefixed with its length in bytes. The prefix is a 32-bit unsigned integer, little endian. |
newline_delimited | Event data is delimited by a newline (LF) character. |
protocol.headers
optional object
A list of custom headers to add to each request.
protocol.headers.*
required string literal
An HTTP request header and its value.
protocol.method
optional string literal enum
The HTTP method to use when making the request.
Option | Description |
---|---|
delete | DELETE. |
get | GET. |
head | HEAD. |
options | OPTIONS. |
patch | PATCH. |
post | POST. |
put | PUT. |
trace | TRACE. |
Default: post
protocol.payload_prefix
optional string literal
A string to prefix the payload with.
This option is ignored if the encoding is not character delimited JSON.
If specified, payload_suffix must also be specified and together they must produce a valid JSON object.
protocol.payload_suffix
optional string literal
A string to suffix the payload with.
This option is ignored if the encoding is not character delimited JSON.
If specified, payload_prefix must also be specified and together they must produce a valid JSON object.
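To see how these pieces fit together, a sketch that wraps comma-delimited JSON events into one valid JSON object per request (the "data" key is purely illustrative, not an OTLP field):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: http://localhost:8080/ingest # placeholder endpoint
      encoding:
        codec: json
      framing:
        method: character_delimited
        character_delimited:
          delimiter: ","
      payload_prefix: '{"data":['
      payload_suffix: ']}'
      # each request body then looks like {"data":[{...},{...}]}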
protocol.request
optional object
Middleware settings for outbound requests, such as concurrency, rate limits, and timeouts.
protocol.request.adaptive_concurrency
optional object
Configuration of adaptive concurrency parameters.
These parameters typically do not require changes from the default, and incorrect values can lead to meta-stable or unstable performance and sink behavior. Proceed with caution.
protocol.request.adaptive_concurrency.decrease_ratio
optional float
The fraction of the current value to set the new concurrency limit to when decreasing the limit.
Valid values are greater than 0 and less than 1. Smaller values cause the algorithm to scale back rapidly when latency increases.
Note that the new limit is rounded down after applying this ratio.
Default: 0.9
protocol.request.adaptive_concurrency.ewma_alpha
optional float
The weighting of new measurements compared to older measurements.
Valid values are greater than 0 and less than 1.
ARC uses an exponentially weighted moving average (EWMA) of past RTT measurements as a reference to compare with the current RTT. Smaller values cause this reference to adjust more slowly, which may be useful if a service has unusually high response variability.
Default: 0.4
protocol.request.adaptive_concurrency.initial_concurrency
optional uint
The initial concurrency limit to use. If not specified, the initial limit is 1 (no concurrency).
It is recommended to set this value to your service's average limit if you're seeing that it takes a long time to ramp up adaptive concurrency after a restart. You can find this value by looking at the adaptive_concurrency_limit metric.
Default: 1
protocol.request.adaptive_concurrency.max_concurrency_limit
optional uint
The maximum concurrency limit.
The adaptive request concurrency limit will not go above this bound. This is put in place as a safeguard.
Default: 200
protocol.request.adaptive_concurrency.rtt_deviation_scale
optional float
Scale of RTT deviations which are not considered anomalous.
Valid values are greater than or equal to 0, and we expect reasonable values to range from 1.0 to 3.0.
When calculating the past RTT average, we also compute a secondary "deviation" value that indicates how variable those values are. We use that deviation when comparing the past RTT average to the current measurements, so we can ignore increases in RTT that are within an expected range. This factor is used to scale up the deviation to an appropriate range. Larger values cause the algorithm to ignore larger increases in the RTT.
Default: 2.5
protocol.request.concurrency
optional string literal enum uint
Configuration for outbound request concurrency.
This can be set either to one of the below enum values, or to a positive integer, which denotes a fixed concurrency limit.
Option | Description |
---|---|
adaptive | Concurrency will be managed by Vector's Adaptive Request Concurrency feature. |
none | A fixed concurrency of 1. Only one request can be outstanding at any given time. |
Default: adaptive
protocol.request.headers
optional object
Additional HTTP headers to add to every HTTP request.
protocol.request.headers.*
required string literal
An HTTP request header and its value.
protocol.request.rate_limit_duration_secs
optional uint
The time window used for the rate_limit_num option.
Default: 1 (seconds)
protocol.request.rate_limit_num
optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
Default: 9.223372036854776e+18 (requests)
protocol.request.retry_attempts
optional uint
The maximum number of retries to make for failed requests.
Default: 9.223372036854776e+18 (retries)
protocol.request.retry_initial_backoff_secs
optional uint
The amount of time to wait before attempting the first retry for a failed request.
After the first retry has failed, the Fibonacci sequence is used to select future backoffs.
Default: 1 (seconds)
protocol.request.retry_jitter_mode
optional string literal enum
The jitter mode to use for retry backoff behavior.
Option | Description |
---|---|
Full | Full jitter. The random delay is anywhere from 0 up to the maximum current delay calculated by the backoff strategy. Incorporating full jitter into your backoff strategy can greatly reduce the likelihood of creating accidental denial of service (DoS) conditions against your own systems when many clients are recovering from a failure state. |
None | No jitter. |
Default: Full
protocol.request.retry_max_duration_secs
optional uint
The maximum amount of time to wait between retries.
Default: 30 (seconds)
protocol.request.timeout_secs
optional uint
The time a request can take before being aborted.
Datadog highly recommends that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
Default: 60 (seconds)
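As a tuning illustration, a hedged sketch that rate-limits requests and bounds retries (the numbers are placeholders to adapt to your service):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: http://localhost:5318/v1/logs # placeholder endpoint
      encoding:
        codec: json
      request:
        concurrency: adaptive        # the default, shown for clarity
        rate_limit_duration_secs: 1
        rate_limit_num: 100          # placeholder: at most 100 requests per second
        retry_attempts: 5            # placeholder: stop retrying after 5 attempts
        timeout_secs: 60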
protocol.tls
optional object
TLS configuration.
protocol.tls.alpn_protocols
optional [string]
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
protocol.tls.ca_file
optional string literal
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
protocol.tls.crt_file
optional string literal
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
protocol.tls.key_file
optional string literal
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
protocol.tls.key_pass
optional string literal
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
protocol.tls.server_name
optional string literal
Server name to use when using Server Name Indication (SNI).
Only relevant for outgoing connections.
protocol.tls.verify_certificate
optional bool
Enables certificate verification. For components that create a server, this requires that the client connections have a valid client certificate. For components that initiate requests, this validates that the upstream has a valid certificate.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
protocol.tls.verify_hostname
optional bool
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
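For instance, a hedged sketch of an HTTPS endpoint verified against a custom CA (the host and paths are placeholders):

sinks:
  my_sink_id:
    type: opentelemetry
    inputs: ["my-source-or-transform-id"]
    protocol:
      type: http
      uri: https://collector.example.com:4318/v1/logs # placeholder endpoint
      encoding:
        codec: json
      tls:
        ca_file: /etc/vector/certs/ca.pem # placeholder path
        verify_certificate: true
        verify_hostname: true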
protocol.type
required string literal enum
Option | Description |
---|---|
http | Send data over HTTP. |
protocol.uri
required string literal
The full URI to make HTTP requests to.
This should include the protocol and host, but can also include the port, path, and any other valid part of a URI.
Telemetry
Metrics
buffer_byte_size (gauge)
buffer_discarded_events_total (counter)
buffer_events (gauge)
buffer_received_event_bytes_total (counter)
buffer_received_events_total (counter)
buffer_sent_event_bytes_total (counter)
buffer_sent_events_total (counter)
component_discarded_events_total (counter)
Includes an intentional tag: true if the events were discarded intentionally (for example, by a filter transform), or false if dropped due to an error.
component_errors_total (counter)
component_received_event_bytes_total (counter)
component_received_events_count (histogram)
A histogram of the number of events passed in each internal batch in Vector's internal topology.
Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector caused by small internal batches.
component_received_events_total (counter)
component_sent_bytes_total (counter)
component_sent_event_bytes_total (counter)
component_sent_events_total (counter)
utilization (gauge)
How it works
Buffers
This sink buffers events as configured by the buffer.* options; see the Buffering Model section for details on buffer types and behavior.
Quickstart
This sink is a wrapper over the HTTP sink. The following is an example of how you can push OTEL logs to an OTEL collector.
- The Vector config:
sources:
generate_syslog:
type: "demo_logs"
format: "syslog"
count: 100000
interval: 1
transforms:
remap_syslog:
inputs: ["generate_syslog"]
type: "remap"
source: |
syslog = parse_syslog!(.message)
.timestamp_nanos = to_unix_timestamp!(syslog.timestamp, unit: "nanoseconds")
.body = syslog
.service_name = syslog.appname
.resource_attributes.source_type = .source_type
.resource_attributes.host.hostname = syslog.hostname
.resource_attributes.service.name = syslog.appname
.attributes.syslog.procid = syslog.procid
.attributes.syslog.facility = syslog.facility
.attributes.syslog.version = syslog.version
.severity_text = if includes(["emerg", "err", "crit", "alert"], syslog.severity) {
"ERROR"
} else if syslog.severity == "warning" {
"WARN"
} else if syslog.severity == "debug" {
"DEBUG"
} else if includes(["info", "notice"], syslog.severity) {
"INFO"
} else {
syslog.severity
}
.scope_name = syslog.msgid
del(.message)
del(.timestamp)
del(.service)
del(.source_type)
sinks:
emit_syslog:
inputs: ["remap_syslog"]
type: opentelemetry
protocol:
type: http
uri: http://localhost:5318/v1/logs
method: post
encoding:
codec: json
framing:
method: newline_delimited
headers:
content-type: application/json
- Sample OTEL collector config:
receivers:
otlp:
protocols:
http:
endpoint: "0.0.0.0:5318"
exporters:
debug:
otlp:
endpoint: localhost:4317
tls:
insecure: true
processors:
batch: {}
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters: [debug]
- Run the OTEL instance:
./otelcol --config ./otel/config.yaml
- Run Vector:
VECTOR_LOG=debug cargo run -- --config /path/to/vector/config.yaml
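If everything is wired up, the collector's debug exporter prints the received log records. To inspect what the remap transform emits on the Vector side, you can optionally tap its output (this assumes the Vector API is enabled via api.enabled: true in the Vector config):

vector tap remap_syslog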