Monitoring & Observing

Vector strives to be a good example of observability and therefore includes various facilities to observe and monitor Vector itself. It is intentionally designed to be composable and fit into your existing workflows.

Logs

Vector takes care to emit high-quality, structured logs throughout its codebase. In this section we'll show you how to access and route them.

Accessing logs

On Linux, when Vector runs under Systemd, you can follow its logs with Journald:

sudo journalctl -fu vector

Routing logs

By default, Vector emits its logs to STDOUT, so you can redirect them through system-level utilities like any other service. If you're using a process manager such as Systemd, log collection is handled for you and the logs are made available through utilities like Journald. This means you can collect Vector's logs just like other logs on your host. In the case of Systemd/Journald, you can use Vector's journald source:

[sources.vector_logs]
type = "journald"
include_units = ["vector"]
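To verify the pipeline end to end, you could pair the source above with a console sink; the sink name here is illustrative, and `encoding.codec` assumes a recent Vector release:

```toml
[sources.vector_logs]
type = "journald"
include_units = ["vector"]

# Print the collected log events to STDOUT as JSON for inspection.
[sinks.console]
type = "console"
inputs = ["vector_logs"]
encoding.codec = "json"
```

In a real deployment you would swap the console sink for any supported sink (Elasticsearch, S3, Loki, and so on) without changing the source.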

Configuring logs

Levels

Log levels can be adjusted when starting Vector via the following methods:

Method                Description
-v flag               Drops the log level to debug.
-vv flag              Drops the log level to trace.
-q flag               Raises the log level to warn.
-qq flag              Raises the log level to error.
-qqq flag             Turns logging off.
LOG=<level> env var   Sets the log level. Must be one of trace, debug, info, warn, error, off.
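For example, the flag and environment-variable forms are interchangeable; the config path below is illustrative:

```
# Lower the log level to debug for a single run:
vector -v --config /etc/vector/vector.toml

# Equivalent, using the environment variable instead:
LOG=debug vector --config /etc/vector/vector.toml
```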

Stacktraces

You can enable full error backtraces by setting the RUST_BACKTRACE=full environment variable. More on this in the Troubleshooting guide.
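For example, to run Vector with full backtraces enabled (the config path is illustrative):

```
RUST_BACKTRACE=full vector --config /etc/vector/vector.toml
```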

Metrics

Metrics catalogue

Vector provides a rich catalogue of metrics, documented in the output section of the internal_metrics source.

Accessing metrics

vector top

Vector's top subcommand provides insight into a running Vector instance via Vector's GraphQL API.

Routing metrics

Vector provides an internal_metrics source that can be used to collect Vector's own metrics. This enables you to tap into the full power of Vector to process and send Vector's metrics to any supported sink.

Configuring metrics

Vector's metrics can be configured by defining an internal_metrics source. For example, to send Vector's internal metrics to Prometheus you would define a pipeline like:

[sources.internal]
type = "internal_metrics"

[sinks.prometheus]
type = "prometheus_exporter"
inputs = ["internal"]
address = "0.0.0.0:9598"
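Once that pipeline is running, the exporter serves metrics in the Prometheus text format on the configured address. You could spot-check it with curl (the host and port match the example config above):

```
curl -s http://127.0.0.1:9598/metrics | head
```

A Prometheus server would then scrape this endpoint like any other target.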

Troubleshooting

Please refer to our troubleshooting guide:

Troubleshooting Guide

How it works

Event-driven observability

For the curious, Vector implements an event-driven observability strategy that ensures consistent and correlated telemetry data. You can read more about our approach in RFC 2064.

Log rate-limiting

Vector rate limits log events in the hot path. This works to your benefit: you get granular insight without the risk of saturating I/O and disrupting the service. The trade-off is that repetitive log events are suppressed rather than logged in full.