Monitoring and observing Vector

Use logs and metrics generated by Vector itself in your Vector topology

Although Vector is primarily used to handle observability data from a wide variety of sources, we also strive to make Vector highly observable itself. To that end, Vector provides two sources, internal_logs and internal_metrics, that you can use to handle logs and metrics produced by Vector just like you would logs and metrics from any other source.

Logs

Vector provides clear, informative, well-structured logs via the internal_logs source. This section shows you how to use them in your Vector topology.

Which logs Vector pipes through the internal_logs source is determined by the log level, which defaults to info.

In addition to the internal_logs source, Vector also writes its logs to stderr, which can be captured by Kubernetes, systemd, or whatever process supervisor you use to run Vector.

Accessing logs

You can access Vector’s logs by adding an internal_logs source to your topology. Here’s an example configuration that takes Vector’s logs and pipes them to the console as plain text:

[sources.vector_logs]
type = "internal_logs"

[sinks.console]
type = "console"
inputs = ["vector_logs"]
encoding.codec = "text"

Using Vector logs

Once Vector logs enter your topology through the internal_logs source, you can treat them like logs from any other system, i.e. you can transform them and send them off to any number of sinks. The configuration below, for example, transforms Vector’s logs using the remap transform and Vector Remap Language and then stores those logs in ClickHouse:

[sources.vector_logs]
type = "internal_logs"

[transforms.modify]
type = "remap"
inputs = ["vector_logs"]

# Reformat the timestamp to Unix time
source = '''
  .timestamp = to_unix_timestamp(to_timestamp!(.timestamp))
'''

[sinks.database]
type = "clickhouse"
inputs = ["modify"]
host = "http://localhost:8123"
table = "vector-log-data"
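Because internal logs are ordinary log events, you can also route or filter them before they reach a sink. The sketch below keeps only warning- and error-level events using the filter transform; it assumes the log level is exposed on the event as metadata.level with uppercase values, which may differ between Vector versions, so check the internal_logs output schema for your release:

[sources.vector_logs]
type = "internal_logs"

[transforms.only_problems]
type = "filter"
inputs = ["vector_logs"]
# Keep only WARN and ERROR events; .metadata.level is an assumed field name.
condition = 'includes(["WARN", "ERROR"], .metadata.level)'

[sinks.console_errors]
type = "console"
inputs = ["only_problems"]
encoding.codec = "json"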

Configuring logs

Levels

Vector logs at the info level by default. You can set a different level when starting up your instance using either command-line flags or the VECTOR_LOG environment variable. The table below details these options:

Method | Description
------ | -----------
-v flag | Drops the log level to debug
-vv flag | Drops the log level to trace
-q flag | Raises the log level to warn
-qq flag | Raises the log level to error
-qqq flag | Disables logging
VECTOR_LOG=<level> environment variable | Sets the log level; must be one of trace, debug, info, warn, error, off
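
For example, assuming a configuration file at /etc/vector/vector.toml (a placeholder path; substitute your own), either of the following starts Vector with debug-level logging:

# Set the level via the environment variable...
VECTOR_LOG=debug vector --config /etc/vector/vector.toml

# ...or via the equivalent verbosity flag
vector -v --config /etc/vector/vector.toml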

Stack traces

You can enable full error backtraces by setting the RUST_BACKTRACE=full environment variable. See the Troubleshooting guide for more detail.
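
For example (the configuration path below is a placeholder; substitute your own):

# Emit full Rust backtraces on error
RUST_BACKTRACE=full vector --config /etc/vector/vector.toml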

Metrics

You can monitor metrics produced by Vector using the internal_metrics source. As with Vector’s internal logs, you can configure an internal_metrics source and use the piped-in metrics however you wish. Here’s an example configuration that delivers Vector’s metrics to a Prometheus remote write endpoint:

[sources.vector_metrics]
type = "internal_metrics"

[sinks.prometheus]
type = "prometheus_remote_write"
endpoint = "https://localhost:8087/"
inputs = ["vector_metrics"]
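
If you prefer to have Prometheus scrape Vector rather than push via remote write, a minimal sketch using the prometheus_exporter sink looks like the following. The listen address is an assumption (9598 is used here as a conventional port for Vector’s exporter); adjust it for your environment:

[sinks.prometheus_exporter]
type = "prometheus_exporter"
inputs = ["vector_metrics"]
# Expose an HTTP endpoint that Prometheus can scrape for Vector's metrics.
address = "0.0.0.0:9598"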

Metrics catalogue

The table below provides a list of internal metrics provided by Vector. See the docs for the internal_metrics source for more detailed information about the available metrics.

Name | Description | Data type
---- | ----------- | ---------
adaptive_concurrency_averaged_rtt | The average round-trip time (RTT) for the current window. | histogram
adaptive_concurrency_in_flight | The number of outbound requests currently awaiting a response. | histogram
adaptive_concurrency_limit | The concurrency limit that the adaptive concurrency feature has decided on for this current window. | histogram
adaptive_concurrency_observed_rtt | The observed round-trip time (RTT) for requests. | histogram
aggregate_events_recorded_total | The number of events recorded by the aggregate transform. | counter
aggregate_failed_updates | The number of failed metric updates, incremental adds, encountered by the aggregate transform. | counter
aggregate_flushes_total | The number of flushes done by the aggregate transform. | counter
api_started_total | The number of times the Vector GraphQL API has been started. | counter
buffer_byte_size | The number of bytes currently in the buffer. | gauge
buffer_discarded_events_total | The number of events dropped by this non-blocking buffer. | counter
buffer_events | The number of events currently in the buffer. | gauge
buffer_received_event_bytes_total | The number of bytes received by this buffer. | counter
buffer_received_events_total | The number of events received by this buffer. | counter
buffer_send_duration_seconds | The duration spent sending a payload to this buffer. | histogram
buffer_sent_event_bytes_total | The number of bytes sent by this buffer. | counter
buffer_sent_events_total | The number of events sent by this buffer. | counter
build_info | Has a fixed value of 1.0. Contains build information such as Rust and Vector versions. | gauge
checkpoints_total | The total number of files checkpointed. | counter
checksum_errors_total | The total number of errors identifying files via checksum. | counter
collect_completed_total | The total number of metrics collections completed for this component. | counter
collect_duration_seconds | The duration spent collecting metrics for this component. | histogram
command_executed_total | The total number of times a command has been executed. | counter
command_execution_duration_seconds | The command execution duration in seconds. | histogram
component_discarded_events_total | The number of events dropped by this component. | counter
component_errors_total | The total number of errors encountered by this component. | counter
component_received_bytes | The size in bytes of each event received by the source. | histogram
component_received_bytes_total | The number of raw bytes accepted by this component from source origins. | counter
component_received_event_bytes_total | The number of event bytes accepted by this component either from tagged origins like file and uri, or cumulatively from other origins. | counter
component_received_events_count | A histogram of the number of events passed in each internal batch in Vector’s internal topology. Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector due to small internal batches. | histogram
component_received_events_total | The number of events accepted by this component either from tagged origins like file and uri, or cumulatively from other origins. | counter
component_sent_bytes_total | The number of raw bytes sent by this component to destination sinks. | counter
component_sent_event_bytes_total | The total number of event bytes emitted by this component. | counter
component_sent_events_total | The total number of events emitted by this component. | counter
connection_established_total | The total number of times a connection has been established. | counter
connection_read_errors_total | The total number of errors reading datagrams. | counter
connection_send_errors_total | The total number of errors sending data via the connection. | counter
connection_shutdown_total | The total number of times the connection has been shut down. | counter
container_processed_events_total | The total number of container events processed. | counter
containers_unwatched_total | The total number of times Vector stopped watching for container logs. | counter
containers_watched_total | The total number of times Vector started watching for container logs. | counter
datadog_logs_received_in_total | Number of Datadog logs received. | counter
datadog_metrics_received_in_total | Number of Datadog metrics received. | counter
events_discarded_total | The total number of events discarded by this component. | counter
files_added_total | The total number of files Vector has found to watch. | counter
files_deleted_total | The total number of files deleted. | counter
files_resumed_total | The total number of times Vector has resumed watching a file. | counter
files_unwatched_total | The total number of times Vector has stopped watching a file. | counter
grpc_server_handler_duration_seconds | The duration spent handling a gRPC request. | histogram
grpc_server_messages_received_total | The total number of gRPC messages received. | counter
grpc_server_messages_sent_total | The total number of gRPC messages sent. | counter
http_client_requests_sent_total | The total number of sent HTTP requests, tagged with the request method. | counter
http_client_response_rtt_seconds | The round-trip time (RTT) of HTTP requests, tagged with the response code. | histogram
http_client_responses_total | The total number of HTTP requests, tagged with the response code. | counter
http_client_rtt_seconds | The round-trip time (RTT) of HTTP requests. | histogram
http_requests_total | The total number of HTTP requests issued by this component. | counter
http_server_handler_duration_seconds | The duration spent handling an HTTP request. | histogram
http_server_requests_received_total | The total number of HTTP requests received. | counter
http_server_responses_sent_total | The total number of HTTP responses sent. | counter
internal_metrics_cardinality | The total number of metrics emitted from the internal metrics registry. | gauge
internal_metrics_cardinality_total | The total number of metrics emitted from the internal metrics registry. This metric is deprecated in favor of internal_metrics_cardinality. | counter
invalid_record_total | The total number of invalid records that have been discarded. | counter
k8s_docker_format_parse_failures_total | The total number of failures to parse a message as a JSON object. | counter
k8s_format_picker_edge_cases_total | The total number of edge cases encountered while picking the format of the Kubernetes log message. | counter
kafka_consumed_messages_bytes_total | Total number of message bytes (including framing) received from Kafka brokers. | counter
kafka_consumed_messages_total | Total number of messages consumed, not including ignored messages (due to offset, etc.), from Kafka brokers. | counter
kafka_consumer_lag | The Kafka consumer lag. | gauge
kafka_produced_messages_bytes_total | Total number of message bytes (including framing, such as per-Message framing and MessageSet/batch framing) transmitted to Kafka brokers. | counter
kafka_produced_messages_total | Total number of messages transmitted (produced) to Kafka brokers. | counter
kafka_queue_messages | Current number of messages in producer queues. | gauge
kafka_queue_messages_bytes | Current total size of messages in producer queues. | gauge
kafka_requests_bytes_total | Total number of bytes transmitted to Kafka brokers. | counter
kafka_requests_total | Total number of requests sent to Kafka brokers. | counter
kafka_responses_bytes_total | Total number of bytes received from Kafka brokers. | counter
kafka_responses_total | Total number of responses received from Kafka brokers. | counter
lua_memory_used_bytes | The total memory currently being used by the Lua runtime. | gauge
metadata_refresh_failed_total | The total number of failed efforts to refresh AWS EC2 metadata. | counter
metadata_refresh_successful_total | The total number of AWS EC2 metadata refreshes. | counter
open_connections | The number of current open connections to Vector. | gauge
protobuf_decode_errors_total | The total number of Protocol Buffers errors thrown during communication between Vector instances. | counter
quit_total | The total number of times the Vector instance has quit. | counter
reloaded_total | The total number of times the Vector instance has been reloaded. | counter
send_errors_total | The total number of errors sending messages. | counter
source_lag_time_seconds | The difference between the timestamp recorded in each event and the time when it was ingested, expressed as fractional seconds. | histogram
splunk_pending_acks | The number of outstanding Splunk HEC indexer acknowledgement acks. | gauge
sqs_message_delete_succeeded_total | The total number of successful deletions of SQS messages. | counter
sqs_message_processing_succeeded_total | The total number of SQS messages successfully processed. | counter
sqs_message_receive_succeeded_total | The total number of times SQS messages have been successfully received. | counter
sqs_message_received_messages_total | The total number of received SQS messages. | counter
sqs_s3_event_record_ignored_total | The total number of times an S3 record in an SQS message was ignored (for an event that was not ObjectCreated). | counter
stale_events_flushed_total | The number of stale events that Vector has flushed. | counter
started_total | The total number of times the Vector instance has been started. | counter
stdin_reads_failed_total | The total number of errors reading from stdin. | counter
stopped_total | The total number of times the Vector instance has been stopped. | counter
streams_total | The total number of streams. | counter
tag_value_limit_exceeded_total | The total number of events discarded because the tag has been rejected after hitting the configured value_limit. | counter
timestamp_parse_errors_total | The total number of errors encountered parsing RFC 3339 timestamps. | counter
uptime_seconds | The total number of seconds the Vector instance has been up. | gauge
utf8_convert_errors_total | The total number of errors converting bytes to a UTF-8 string in UDP mode. | counter
utilization | A ratio from 0 to 1 of the load on a component. A value of 0 indicates a completely idle component that is simply waiting for input; a value of 1 indicates a component that is never idle. This value is updated every 5 seconds. | gauge
value_limit_reached_total | The total number of times new values for a key have been rejected because the value limit has been reached. | counter
windows_service_install_total | The total number of times the Windows service has been installed. | counter
windows_service_restart_total | The total number of times the Windows service has been restarted. | counter
windows_service_start_total | The total number of times the Windows service has been started. | counter
windows_service_stop_total | The total number of times the Windows service has been stopped. | counter
windows_service_uninstall_total | The total number of times the Windows service has been uninstalled. | counter

Troubleshooting

You can find more information in our troubleshooting guide.

How it works

Event-driven observability

Vector employs an event-driven observability strategy that ensures consistent and correlated telemetry data. You can read more about our approach in RFC 2064.

Log rate limiting

Vector rate limits log events in the hot path. This gives you granular insight without the risk of saturating I/O and disrupting the service. The trade-off is that highly repetitive log events are suppressed rather than emitted in full.