Support for the Prometheus remote write protocol
Interoperability with the Prometheus ecosystem.
We’re big fans of Prometheus at Timber, and as an extension of our Kubernetes integration we wanted to better understand how Vector could assist Prometheus operators. As noted in the Kubernetes highlight, it is our intent to be the only tool needed to collect and process all Kubernetes observability data, and working with Prometheus is core to our metrics strategy. As a result, 0.11.0 includes two new components, a prometheus_remote_write source and a prometheus_remote_write sink, that assist with a variety of Prometheus use cases.
Get Started
Backing up Prometheus data
Using the new prometheus_remote_write source, Prometheus operators can route a stream of Prometheus data to the archiving solution of their choice. For this use case we recommend object stores for their cheap and durable qualities:
[sources.prometheus]
type = "prometheus_remote_write"

[transforms.convert]
type = "metric_to_log"
inputs = ["prometheus"]

[sinks.backup]
type = "aws_s3"
inputs = ["convert"]
Swap aws_s3 with gcp_cloud_storage or other object store sinks.
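For reference, a filled-in sketch of the pipeline above is shown below. The listen address, bucket, region, and encoding codec are placeholder assumptions, so verify the option names against the component reference for your Vector version; Prometheus is then pointed at the source's address via a remote_write URL in its own configuration.

# Receive remote_write requests from Prometheus
[sources.prometheus]
type = "prometheus_remote_write"
address = "0.0.0.0:9090" # assumed listen address

# Convert metrics into log events so they can be serialized as objects
[transforms.convert]
type = "metric_to_log"
inputs = ["prometheus"]

# Archive the converted events to an S3 bucket
[sinks.backup]
type = "aws_s3"
inputs = ["convert"]
bucket = "prometheus-backup" # assumed bucket name
region = "us-east-1"         # assumed region
encoding.codec = "ndjson"    # newline-delimited JSON; codec names vary by version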
Long-term, highly-available Prometheus setups
For large setups it’s common to couple Prometheus with another solution designed for long-term storage and querying. This separates the concerns of fast, short-term querying and low-cost, long-term archiving.
With the new prometheus_remote_write source and sink, users can retain only near-term data in Prometheus and forward long-term metrics to databases like M3 (Chronosphere), VictoriaMetrics, and Timescale.
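Such a topology is sketched below: the prometheus_remote_write source receives data from Prometheus, and the matching prometheus_remote_write sink forwards it to the long-term store. The listen address and endpoint are placeholder assumptions (the endpoint shows a typical VictoriaMetrics remote write URL); substitute whatever URL your backend exposes.

# Receive remote_write requests from Prometheus
[sources.prometheus]
type = "prometheus_remote_write"
address = "0.0.0.0:9090" # assumed listen address

# Forward everything to a long-term store that accepts remote write
[sinks.long_term]
type = "prometheus_remote_write"
inputs = ["prometheus"]
endpoint = "http://victoria-metrics:8428/api/v1/write" # assumed remote write URL

Prometheus itself can then be configured with a short retention period, since anything older is served by the long-term store.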
Using Prometheus as a centralized export proxy
Not using Prometheus as your primary metrics backend? It’s still very common to run Prometheus as a central proxy for exporting all metrics data. This is especially relevant in ecosystems like Kubernetes, where Prometheus is tightly integrated.
To get started, set up the new prometheus_remote_write source and send your metrics to Datadog, New Relic, Influx, Elasticsearch, and more:
[sources.prometheus]
type = "prometheus_remote_write"

[sinks.datadog]
type = "datadog_metrics"
inputs = ["prometheus"]
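A filled-in sketch of that pipeline follows. The listen address and API key are placeholders, and the exact name of the key option (api_key below) is an assumption that has changed across Vector versions, so check the datadog_metrics sink reference.

# Receive remote_write requests from Prometheus
[sources.prometheus]
type = "prometheus_remote_write"
address = "0.0.0.0:9090" # assumed listen address

# Export the stream to Datadog
[sinks.datadog]
type = "datadog_metrics"
inputs = ["prometheus"]
api_key = "${DATADOG_API_KEY}" # assumed option name; read from the environment

Because every sink declares its own inputs, additional sinks can be added alongside datadog to fan the same Prometheus stream out to several destinations at once.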