Prometheus Exporter
Output metric events to a Prometheus exporter running on the host
Alias
This component was previously called the prometheus sink. Make sure to update your Vector configuration to accommodate the name change:

[sinks.my_prometheus_exporter_sink]
-type = "prometheus"
+type = "prometheus_exporter"
Warnings
High-cardinality metric names and tags can cause performance and reliability problems for Prometheus. Vector offers a tag_cardinality_limit transform as a way to protect against this.

Configuration
Example configurations
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"address": "0.0.0.0:9598",
"default_namespace": "service",
"acknowledgements": null
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
address = "0.0.0.0:9598"
default_namespace = "service"
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
address: 0.0.0.0:9598
default_namespace: service
acknowledgements: null
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"address": "0.0.0.0:9598",
"auth": null,
"buckets": [
0.005
],
"flush_period_secs": 60,
"default_namespace": "service",
"quantiles": [
0.5
],
"distributions_as_summaries": null,
"buffer": null,
"acknowledgements": null,
"tls": null,
"suppress_timestamp": null,
"healthcheck": null
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
address = "0.0.0.0:9598"
buckets = [ 0.005 ]
flush_period_secs = 60
default_namespace = "service"
quantiles = [ 0.5 ]
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
address: 0.0.0.0:9598
auth: null
buckets:
- 0.005
flush_period_secs: 60
default_namespace: service
quantiles:
- 0.5
distributions_as_summaries: null
buffer: null
acknowledgements: null
tls: null
suppress_timestamp: null
healthcheck: null
acknowledgements
common optional object
Acknowledgement settings.

acknowledgements.enabled
common optional bool
Default: false

address
required string literal
The address to expose for scraping; metrics are served at the /metrics path.

auth
optional object

auth.password
required string literal

auth.strategy
required string literal enum
Option | Description |
---|---|
basic | The basic authentication strategy. |
bearer | The bearer token authentication strategy. |

auth.token
required string literal

auth.user
required string literal

buckets
optional [float]
Default: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]

buffer
optional object
Configures the buffering behavior for this sink.
More information about the individual buffer types, and buffer behavior, can be found in the Buffering Model section.

buffer.max_events
optional uint
Relevant when type = "memory".
Default: 500

buffer.max_size
required uint
The maximum size of the buffer on disk.
Must be at least ~256 megabytes (268435488 bytes).
Relevant when type = "disk_v1" or type = "disk".

buffer.type
optional string literal enum
Option | Description |
---|---|
disk | Events are buffered on disk. (version 2) This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. Data is synchronized to disk every 500ms. |
disk_v1 | Events are buffered on disk. (version 1) This is less performant, but more durable. Data that has been synchronized to disk will not be lost if Vector is restarted forcefully or crashes. |
memory | Events are buffered in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully or crashes. |
Default: memory

buffer.when_full
optional string literal enum
Option | Description |
---|---|
block | Wait for free space in the buffer. This applies backpressure up the topology, signalling that sources should slow down the acceptance/consumption of events. This means that while no data is lost, data will pile up at the edge. |
drop_newest | Drops the event instead of waiting for free space in buffer. The event will be intentionally dropped. This mode is typically used when performance is the highest priority, and it is preferable to temporarily lose events rather than cause a slowdown in the acceptance/consumption of events. |
Default: block

default_namespace
common optional string literal

distributions_as_summaries
optional bool
Default: false

flush_period_secs
optional uint
Default: 60 (seconds)

healthcheck
optional object

healthcheck.enabled
optional bool
Default: true

inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.

quantiles
optional [float]
Default: [0.5, 0.75, 0.9, 0.95, 0.99]

suppress_timestamp
optional bool
Default: false

tls
optional object

tls.ca_file
optional string literal
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.

tls.crt_file
optional string literal
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.

tls.enabled
optional bool
Whether or not to require TLS for incoming/outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
Default: false

tls.key_file
optional string literal
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.

tls.key_pass
optional string literal
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.

tls.verify_certificate
optional bool
Enables certificate verification.
If enabled, certificates must be valid in terms of not being expired, as well as being issued by a trusted issuer. This verification operates in a hierarchical manner, checking that not only the leaf certificate (the certificate presented by the client/server) is valid, but also that the issuer of that certificate is valid, and so on until reaching a root certificate.
Relevant for both incoming and outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
Default: false
Telemetry
Metrics
Each metric below is tagged with component_id; the older component_name tag is deprecated and carries the same value as component_id.

buffer_byte_size
gauge

buffer_discarded_events_total
counter

buffer_events
gauge

buffer_received_event_bytes_total
counter

buffer_received_events_total
counter

buffer_sent_event_bytes_total
counter

buffer_sent_events_total
counter

component_received_event_bytes_total
counter

component_received_events_count
histogram
A histogram of the number of events passed in each internal batch in Vector's internal topology.
Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector due to small internal batches.

component_received_events_total
counter

events_in_total
counter
Deprecated, use component_received_events_total instead.

utilization
gauge

Examples
Counter
Given this event...
{
"metric": {
"counter": {
"value": 1.5
},
"kind": "incremental",
"name": "logins",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
default_namespace = "service"
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
default_namespace: service
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"default_namespace": "service"
}
}
}
# HELP service_logins logins
# TYPE service_logins counter
service_logins{host="my-host.local"} 1.5
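The rendered lines above follow the Prometheus text exposition format: a metric name, an optional `{label="value",...}` set, and a sample value. As a rough sketch of how such a line breaks down, here is a minimal parser for simple sample lines (an illustrative helper, not part of Vector; it ignores escapes, timestamps, and other corners of the full format):

```python
import re

def parse_sample(line: str):
    """Parse a simple Prometheus text-format sample line into
    (metric name, label dict, float value)."""
    m = re.match(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(?:\{(.*)\})?\s+(\S+)$', line)
    if m is None:
        raise ValueError(f"not a sample line: {line!r}")
    name, raw_labels, value = m.groups()
    labels = {}
    if raw_labels:
        # Each label is key="value"; this sketch assumes no escaped quotes.
        for key, val in re.findall(r'(\w+)="([^"]*)"', raw_labels):
            labels[key] = val
    return name, labels, float(value)

print(parse_sample('service_logins{host="my-host.local"} 1.5'))
# → ('service_logins', {'host': 'my-host.local'}, 1.5)
```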
Gauge
Given this event...
{
"metric": {
"gauge": {
"value": 1.5
},
"kind": "absolute",
"name": "memory_rss",
"namespace": "app",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
# HELP app_memory_rss memory_rss
# TYPE app_memory_rss gauge
app_memory_rss{host="my-host.local"} 1.5
Histogram
Given this event...
{
"metric": {
"histogram": {
"buckets": [
{
"count": 0,
"upper_limit": 0.005
},
{
"count": 1,
"upper_limit": 0.01
},
{
"count": 0,
"upper_limit": 0.025
},
{
"count": 1,
"upper_limit": 0.05
},
{
"count": 0,
"upper_limit": 0.1
},
{
"count": 0,
"upper_limit": 0.25
},
{
"count": 0,
"upper_limit": 0.5
},
{
"count": 0,
"upper_limit": 1
},
{
"count": 0,
"upper_limit": 2.5
},
{
"count": 0,
"upper_limit": 5
},
{
"count": 0,
"upper_limit": 10
}
],
"count": 2,
"sum": 0.789
},
"kind": "absolute",
"name": "response_time_s",
"tags": {}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
# HELP response_time_s response_time_s
# TYPE response_time_s histogram
response_time_s_bucket{le="0.005"} 0
response_time_s_bucket{le="0.01"} 1
response_time_s_bucket{le="0.025"} 0
response_time_s_bucket{le="0.05"} 1
response_time_s_bucket{le="0.1"} 0
response_time_s_bucket{le="0.25"} 0
response_time_s_bucket{le="0.5"} 0
response_time_s_bucket{le="1.0"} 0
response_time_s_bucket{le="2.5"} 0
response_time_s_bucket{le="5.0"} 0
response_time_s_bucket{le="10.0"} 0
response_time_s_bucket{le="+Inf"} 0
response_time_s_sum 0.789
response_time_s_count 2
Distribution to histogram
Given this event...
{
"metric": {
"distribution": {
"samples": [
{
"rate": 4,
"value": 0
},
{
"rate": 2,
"value": 1
},
{
"rate": 1,
"value": 4
}
],
"statistic": "histogram"
},
"kind": "incremental",
"name": "request_retries",
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
buckets = [ 0, 1, 3 ]
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
buckets:
- 0
- 1
- 3
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"buckets": [
0,
1,
3
]
}
}
}
# HELP request_retries request_retries
# TYPE request_retries histogram
request_retries_bucket{host="my-host.local",le="0"} 4
request_retries_bucket{host="my-host.local",le="1"} 6
request_retries_bucket{host="my-host.local",le="3"} 6
request_retries_bucket{host="my-host.local",le="+Inf"} 7
request_retries_sum{host="my-host.local"} 6
request_retries_count{host="my-host.local"} 7
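The cumulative `le` buckets above can be derived directly from the distribution's rate/value samples: each sample counts toward every bucket whose upper bound is at least its value, and the implicit +Inf bucket equals the total count. A sketch of that aggregation (an illustrative simplification, not Vector's actual code):

```python
def distribution_to_histogram(samples, bounds):
    """Aggregate (rate, value) samples into cumulative Prometheus-style
    histogram buckets, plus the sum and count."""
    count = sum(rate for rate, _ in samples)
    total = sum(rate * value for rate, value in samples)
    cumulative = []
    for upper in bounds:
        # A sample falls into every bucket whose upper bound >= its value.
        cumulative.append(sum(rate for rate, value in samples if value <= upper))
    cumulative.append(count)  # the implicit le="+Inf" bucket
    return cumulative, total, count

samples = [(4, 0), (2, 1), (1, 4)]  # rate/value pairs from the event above
cum, total, count = distribution_to_histogram(samples, [0, 1, 3])
print(cum, total, count)  # → [4, 6, 6, 7] 6 7, matching the exported buckets
```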
Distribution to summary
Given this event...
{
"metric": {
"distribution": {
"samples": [
{
"rate": 3,
"value": 0
},
{
"rate": 2,
"value": 1
},
{
"rate": 1,
"value": 4
}
],
"statistic": "summary"
},
"kind": "incremental",
"name": "request_retries",
"tags": {}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
quantiles = [ 0.5, 0.75, 0.95 ]
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
quantiles:
- 0.5
- 0.75
- 0.95
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
],
"quantiles": [
0.5,
0.75,
0.95
]
}
}
}
# HELP request_retries request_retries
# TYPE request_retries summary
request_retries{quantile="0.5"} 0
request_retries{quantile="0.75"} 1
request_retries{quantile="0.95"} 4
request_retries_sum 6
request_retries_count 6
request_retries_min 0
request_retries_max 4
request_retries_avg 1
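The quantile values above can be reproduced by expanding the rate/value samples and taking nearest-rank quantiles over the sorted values (an illustrative sketch; Vector's exact quantile estimation may differ):

```python
import math

def distribution_to_summary(samples, quantiles):
    """Expand (rate, value) samples and compute nearest-rank quantiles,
    plus the sum and count."""
    values = sorted(v for rate, v in samples for _ in range(rate))
    n = len(values)
    # Nearest-rank: the q-quantile is the ceil(q*n)-th smallest value.
    picked = [values[max(math.ceil(q * n) - 1, 0)] for q in quantiles]
    return picked, sum(values), n

picked, total, count = distribution_to_summary(
    [(3, 0), (2, 1), (1, 4)], [0.5, 0.75, 0.95]
)
print(picked, total, count)  # → [0, 1, 4] 6 6, matching the exported summary
```

The min (0), max (4), and avg (6/6 = 1) lines follow from the same expanded sample set.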
Summary
Given this event...
{
"metric": {
"kind": "absolute",
"name": "requests",
"summary": {
"count": 6,
"quantiles": [
{
"upper_limit": 0.01,
"value": 1.5
},
{
"upper_limit": 0.5,
"value": 2
},
{
"upper_limit": 0.99,
"value": 3
}
],
"sum": 12
},
"tags": {
"host": "my-host.local"
}
}
}
[sinks.my_sink_id]
type = "prometheus_exporter"
inputs = [ "my-source-or-transform-id" ]
---
sinks:
my_sink_id:
type: prometheus_exporter
inputs:
- my-source-or-transform-id
{
"sinks": {
"my_sink_id": {
"type": "prometheus_exporter",
"inputs": [
"my-source-or-transform-id"
]
}
}
}
# HELP requests requests
# TYPE requests summary
requests{host="my-host.local",quantile="0.01"} 1.5
requests{host="my-host.local",quantile="0.5"} 2
requests{host="my-host.local",quantile="0.99"} 3
requests_sum{host="my-host.local"} 12
requests_count{host="my-host.local"} 6
How it works
Buffers
This component buffers events as configured via the buffer.* options.

Duplicate tag names
Histogram Buckets
Memory Usage
The prometheus_exporter sink aggregates metrics in memory, which keeps the memory footprint to a minimum even if Prometheus fails to scrape the Vector instance over an extended period of time. The downside is that data will be lost if Vector is restarted. This is a consequence of Prometheus' pull model approach, but it is worth noting if you restart Vector frequently.
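The aggregation model described above can be sketched as follows: incremental metrics are added into an in-memory map keyed by series, absolute metrics overwrite the stored value, and a scrape simply renders the current map (an illustrative simplification, not Vector's implementation):

```python
class MetricStore:
    """In-memory aggregation: memory is bounded by the number of distinct
    series, not by how long scrapes are delayed. State is lost on restart."""

    def __init__(self):
        self.series = {}  # (name, frozenset of tag items) -> current value

    def record(self, name, value, kind, tags=None):
        key = (name, frozenset((tags or {}).items()))
        if kind == "incremental":
            # Incremental metrics accumulate into the stored value.
            self.series[key] = self.series.get(key, 0.0) + value
        else:
            # Absolute metrics replace the stored value outright.
            self.series[key] = value

store = MetricStore()
store.record("logins", 1.0, "incremental", {"host": "a"})
store.record("logins", 0.5, "incremental", {"host": "a"})
store.record("memory_rss", 1.5, "absolute", {"host": "a"})
store.record("memory_rss", 2.0, "absolute", {"host": "a"})
print(sorted(store.series.values()))  # → [1.5, 2.0]
```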