Prometheus Exporter
Expose metric events on a Prometheus-compatible endpoint for scraping.
Alias
This component was previously called the prometheus sink. Make sure to update your Vector configuration to accommodate the name change:
[sinks.my_prometheus_exporter_sink]
-type = "prometheus"
+type = "prometheus_exporter"
Warnings
High-cardinality metric names and labels are discouraged by Prometheus, as they can degrade performance and reliability. Consider strategies to reduce cardinality; Vector offers a tag_cardinality_limit transform as a way to protect against this.
Configuration
Example configurations
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"address": "0.0.0.0:9598",
"flush_period_secs": 60
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
address = "0.0.0.0:9598"
flush_period_secs = 60
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
address: 0.0.0.0:9598
flush_period_secs: 60
acknowledgements
optional object
Controls how acknowledgements are handled for this sink.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
acknowledgements.enabled
optional bool
Whether or not end-to-end acknowledgements are enabled.
When enabled for a sink, any source connected to that sink, where the source supports end-to-end acknowledgements as well, waits for events to be acknowledged by all connected sinks before acknowledging them at the source.
Enabling or disabling acknowledgements at the sink level takes precedence over any global acknowledgements configuration.
address
optional string literal
The address to expose for scraping.
The metrics are exposed at the typical Prometheus exporter path, /metrics.
default: 0.0.0.0:9598
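For reference, a minimal Prometheus scrape configuration pointing at this sink might look like the sketch below. The job name, target host, and scrape interval are illustrative; keep the scrape interval below the sink's flush_period_secs so metrics do not expire between scrapes. Prometheus' default metrics_path of /metrics already matches the path exposed by this sink.

scrape_configs:
  - job_name: vector                     # illustrative job name
    scrape_interval: 15s                 # keep below flush_period_secs (default 60s)
    static_configs:
      - targets: ["vector-host:9598"]    # hypothetical host reachable at the configured address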
auth
optional object
Configuration of the authentication strategy for HTTP requests.
HTTP authentication should be used with HTTPS only, as the authentication credentials are passed as an HTTP header without any additional encryption beyond what is provided by the transport itself.
auth.password
required string literal
The basic authentication password.
Relevant when: strategy = "basic"
auth.strategy
required string literal enum

Option | Description
---|---
basic | Basic authentication. The username and password are concatenated and encoded via base64.
bearer | Bearer authentication. The bearer token value (OAuth2, JWT, etc.) is passed as-is.
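As a sketch, requiring basic authentication from scraping clients might look like the following; the user field and both credential values are illustrative assumptions:

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    auth:
      strategy: basic
      user: scraper                               # assumed field name for the basic-auth username
      password: "${PROMETHEUS_SCRAPE_PASSWORD}"   # hypothetical environment variable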
buckets
optional [float]
Default buckets to use for aggregating distribution metrics into histograms.
default: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0]
buffer
optional object
Configures the buffering behavior for this sink.
More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.
buffer.max_events
optional uint
The maximum number of events allowed in the buffer.
Relevant when: type = "memory"
default: 500
buffer.max_size
required uint
The maximum size of the buffer on disk.
Must be at least ~256 megabytes (268435488 bytes).
Relevant when: type = "disk"
buffer.type
optional string literal enum

Option | Description
---|---
disk | Events are buffered on disk. This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms.
memory | Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes.

default: memory
buffer.when_full
optional string literal enum

Option | Description
---|---
block | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge.
drop_newest | Drops the event instead of waiting for free space in the buffer. The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events.

default: block
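For example, a sketch of a durable disk buffer for this sink, using the options above (268435488 bytes is the documented minimum):

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    buffer:
      type: disk
      max_size: 268435488   # minimum allowed size (~256 MB)
      when_full: block      # apply backpressure rather than dropping events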
default_namespace
optional string literal
The default namespace for any metrics sent.
This namespace is only used if a metric has no existing namespace. When a namespace is present, it is used as a prefix to the metric name, separated with an underscore (_).
It should follow the Prometheus naming conventions.
distributions_as_summaries
optional bool
Whether to render distributions as an aggregated histogram or an aggregated summary.
Although distributions are supported as a lossless way to represent a set of samples for a metric, Prometheus clients (the application being scraped, which is this sink) must aggregate locally into either an aggregated histogram or an aggregated summary.
default: false
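For instance, a minimal sketch that renders incoming distributions as aggregated summaries instead of histograms:

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    distributions_as_summaries: true   # default is false (render as histograms)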
flush_period_secs
optional uint
The interval, in seconds, on which metrics are flushed.
On the flush interval, if a metric has not been seen since the last flush interval, it is considered expired and is removed.
Be sure to configure this value higher than your client’s scrape interval.
default: 60 (seconds)
healthcheck
optional object
Healthcheck configuration.
healthcheck.enabled
optional bool
Whether or not to check the health of the sink when Vector starts up.
default: true
inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.
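For example, a sketch using a wildcard to consume every component whose ID starts with a given prefix (the app-* prefix and the matched IDs are hypothetical):

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - "app-*"   # matches e.g. app-metrics-source and app-remap-transform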
quantiles
optional [float]
Quantiles to use for aggregating distribution metrics into a summary.
default: [0.5, 0.75, 0.9, 0.95, 0.99]
suppress_timestamp
optional bool
Suppresses timestamps on the Prometheus output.
This can sometimes be useful when the source of metrics leads to their timestamps being too far in the past for Prometheus to allow them, such as when aggregating metrics over long time periods, or when replaying old metrics from a disk buffer.
default: false
tls
optional object
Configures the TLS options for connections to this sink.
tls.alpn_protocols
optional [string]
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order that they are defined.
tls.ca_file
optional string literal
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
tls.crt_file
optional string literal
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
tls.enabled
optional bool
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
tls.key_file
optional string literal
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
tls.key_pass
optional string literal
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
tls.server_name
optional string literal
Server name to use when using Server Name Indication (SNI).
Only relevant for outgoing connections.
tls.verify_certificate
optional bool
Enables certificate verification. For components that create a server, this requires that the client connections have a valid client certificate. For components that initiate requests, this validates that the upstream has a valid certificate.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on until the verification process reaches a root certificate.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
tls.verify_hostname
optional bool
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
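Putting the TLS options together, here is a sketch of an exporter that requires TLS for incoming scrapes; the certificate and key paths are hypothetical:

sinks:
  my_sink_id:
    type: prometheus_exporter
    inputs:
      - my-source-or-transform-id
    tls:
      enabled: true
      crt_file: /etc/vector/tls/exporter.crt   # hypothetical certificate path
      key_file: /etc/vector/tls/exporter.key   # required here since crt_file is not a PKCS#12 archive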
Telemetry
Metrics

Metric | Type
---|---
buffer_byte_size | gauge
buffer_discarded_events_total | counter
buffer_events | gauge
buffer_received_event_bytes_total | counter
buffer_received_events_total | counter
buffer_sent_event_bytes_total | counter
buffer_sent_events_total | counter
component_discarded_events_total | counter
component_errors_total | counter
component_received_event_bytes_total | counter
component_received_events_count | histogram
component_received_events_total | counter
component_sent_bytes_total | counter
component_sent_event_bytes_total | counter
component_sent_events_total | counter
http_server_handler_duration_seconds | histogram
http_server_requests_received_total | counter
http_server_responses_sent_total | counter
utilization | gauge

component_received_events_count is a histogram of the number of events passed in each internal batch in Vector’s internal topology. Note that this is separate from sink-level batching; it is mostly useful for low-level debugging of performance issues in Vector caused by small internal batches. On component_discarded_events_total, the intentional tag is true if the events were discarded intentionally (for example, by a filter transform) and false if they were dropped due to an error.

Examples
Counter
Given this event...
{
"metric": {
"counter": {
"value": 1.5
},
"kind": "incremental",
"name": "logins",
"tags": {
"host": "my-host.local"
}
}
}
...and this configuration...
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
default_namespace: service
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"default_namespace": "service"
}
}
}
...this Vector output...
# HELP service_logins logins
# TYPE service_logins counter
service_logins{host="my-host.local"} 1.5
Gauge
Given this event...
{
"metric": {
"gauge": {
"value": 1.5
},
"kind": "absolute",
"name": "memory_rss",
"namespace": "app",
"tags": {
"host": "my-host.local"
}
}
}
...and this configuration...
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
...this Vector output...
# HELP app_memory_rss memory_rss
# TYPE app_memory_rss gauge
app_memory_rss{host="my-host.local"} 1.5
Histogram
Given this event...
{
"metric": {
"histogram": {
"buckets": [
{
"count": 0,
"upper_limit": 0.005
},
{
"count": 1,
"upper_limit": 0.01
},
{
"count": 0,
"upper_limit": 0.025
},
{
"count": 1,
"upper_limit": 0.05
},
{
"count": 0,
"upper_limit": 0.1
},
{
"count": 0,
"upper_limit": 0.25
},
{
"count": 0,
"upper_limit": 0.5
},
{
"count": 0,
"upper_limit": 1
},
{
"count": 0,
"upper_limit": 2.5
},
{
"count": 0,
"upper_limit": 5
},
{
"count": 0,
"upper_limit": 10
}
],
"count": 2,
"sum": 0.789
},
"kind": "absolute",
"name": "response_time_s",
"tags": {}
}
}
...and this configuration...
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
...this Vector output...
# HELP response_time_s response_time_s
# TYPE response_time_s histogram
response_time_s_bucket{le="0.005"} 0
response_time_s_bucket{le="0.01"} 1
response_time_s_bucket{le="0.025"} 0
response_time_s_bucket{le="0.05"} 1
response_time_s_bucket{le="0.1"} 0
response_time_s_bucket{le="0.25"} 0
response_time_s_bucket{le="0.5"} 0
response_time_s_bucket{le="1.0"} 0
response_time_s_bucket{le="2.5"} 0
response_time_s_bucket{le="5.0"} 0
response_time_s_bucket{le="10.0"} 0
response_time_s_bucket{le="+Inf"} 0
response_time_s_sum 0.789
response_time_s_count 2
Distribution to histogram
Given this event...
{
"metric": {
"distribution": {
"samples": [
{
"rate": 4,
"value": 0
},
{
"rate": 2,
"value": 1
},
{
"rate": 1,
"value": 4
}
],
"statistic": "histogram"
},
"kind": "incremental",
"name": "request_retries",
"tags": {
"host": "my-host.local"
}
}
}
...and this configuration...
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
buckets:
- 0
- 1
- 3
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
buckets = [ 0, 1, 3 ]
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"buckets": [
0,
1,
3
]
}
}
}
...this Vector output...
# HELP request_retries request_retries
# TYPE request_retries histogram
request_retries_bucket{host="my-host.local",le="0"} 4
request_retries_bucket{host="my-host.local",le="1"} 6
request_retries_bucket{host="my-host.local",le="3"} 6
request_retries_bucket{host="my-host.local",le="+Inf"} 7
request_retries_sum{host="my-host.local"} 6
request_retries_count{host="my-host.local"} 7
Distribution to summary
Given this event...
{
"metric": {
"distribution": {
"samples": [
{
"rate": 3,
"value": 0
},
{
"rate": 2,
"value": 1
},
{
"rate": 1,
"value": 4
}
],
"statistic": "summary"
},
"kind": "incremental",
"name": "request_retries",
"tags": {}
}
}
...and this configuration...
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
quantiles:
- 0.5
- 0.75
- 0.95
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
quantiles = [ 0.5, 0.75, 0.95 ]
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"quantiles": [
0.5,
0.75,
0.95
]
}
}
}
...this Vector output...
# HELP request_retries request_retries
# TYPE request_retries summary
request_retries{quantile="0.5"} 0
request_retries{quantile="0.75"} 1
request_retries{quantile="0.95"} 4
request_retries_sum 6
request_retries_count 6
request_retries_min 0
request_retries_max 4
request_retries_avg 1
Summary
Given this event...
{
"metric": {
"kind": "absolute",
"name": "requests",
"summary": {
"count": 6,
"quantiles": [
{
"upper_limit": 0.01,
"value": 1.5
},
{
"upper_limit": 0.5,
"value": 2
},
{
"upper_limit": 0.99,
"value": 3
}
],
"sum": 12
},
"tags": {
"host": "my-host.local"
}
}
}
...and this configuration...
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
...this Vector output...
# HELP requests requests
# TYPE requests summary
requests{host="my-host.local",quantile="0.01"} 1.5
requests{host="my-host.local",quantile="0.5"} 2
requests{host="my-host.local",quantile="0.99"} 3
requests_sum{host="my-host.local"} 12
requests_count{host="my-host.local"} 6
How it works
Buffers
This sink’s buffering behavior can be configured via the buffer.* options.
Duplicate tag names
Multiple tags with the same name are invalid within Prometheus, and Prometheus will reject a metric with duplicate tag names. When sending a tag with multiple values for each name, Vector only sends the last value specified.
Histogram Buckets
Choosing the appropriate buckets for Prometheus histograms is a complicated point of discussion. The Prometheus Histograms and Summaries guide provides a good overview of histograms, buckets, summaries, and how to think about configuring them. The buckets you choose should align with the known range and distribution of your values, as well as with how you plan to report on them.
Memory Usage
The prometheus_exporter sink aggregates metrics in memory, which keeps the memory footprint to a minimum even if Prometheus fails to scrape the Vector instance over an extended period of time. The downside is that data will be lost if Vector is restarted. This is by design of Prometheus’ pull-model approach, but it is worth noting if you restart Vector frequently.