Collect, transform, and route all your logs and metrics with one simple tool.
Deploy Vector in a variety of roles to suit your use case.
Get data from point A to point B without patching tools together.
Learn more about Vector's deployment topologies: distributed, centralized, and stream-based.
A simple, composable format enables you to build flexible pipelines
Redact sensitive information before it leaves your infrastructure:

sources:
  datadog_agent:
    type: "datadog_agent"
    address: "0.0.0.0:80"

transforms:
  remove_sensitive_user_info:
    type: "remap"
    inputs: ["datadog_agent"]
    source: |
      . = redact(., filters: ["us_social_security_number"])

sinks:
  datadog_backend:
    type: "datadog_logs"
    inputs: ["remove_sensitive_user_info"]
    default_api_key: "${DATADOG_API_KEY}"
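The transform's behavior can be checked before deploying with Vector's built-in unit tests; a minimal sketch added to the same config file, where the test name, the sample message, and the assertion are illustrative:

tests:
  - name: "redacts US social security numbers"
    inputs:
      # Feed a synthetic log event into the remap transform.
      - insert_at: "remove_sensitive_user_info"
        type: "log"
        log_fields:
          message: "user SSN is 123-45-6789"
    outputs:
      # Assert that the raw SSN no longer appears in the output event.
      - extract_from: "remove_sensitive_user_info"
        conditions:
          - type: "vrl"
            source: '!contains(string!(.message), "123-45-6789")'

Running vector test against the config file evaluates the assertion without starting the pipeline or contacting Datadog.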
Parse JSON logs from Kafka and ship them to Elasticsearch:

sources:
  kafka_in:
    type: "kafka"
    bootstrap_servers: "10.14.22.123:9092,10.14.23.232:9092"
    group_id: "vector-logs"
    key_field: "message_key"
    topics: ["logs-*"]

transforms:
  json_parse:
    type: "remap"
    inputs: ["kafka_in"]
    source: |
      parsed, err = parse_json(.message)
      if err != null {
        log(err, level: "error")
      }
      . |= object(parsed) ?? {}

sinks:
  elasticsearch_out:
    type: "elasticsearch"
    inputs: ["json_parse"]
    endpoint: "http://10.24.32.122:9000"
    index: "logs-via-kafka"
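If every Kafka payload is known to be JSON, an alternative sketch (assuming your Vector version supports source-level decoding) moves parsing into the source and drops the remap step:

sources:
  kafka_in:
    type: "kafka"
    bootstrap_servers: "10.14.22.123:9092,10.14.23.232:9092"
    group_id: "vector-logs"
    topics: ["logs-*"]
    # Decode each message as JSON as it is consumed.
    decoding:
      codec: "json"

The trade-off is error handling: the remap version logs parse failures and still forwards the raw event, while source-level decoding rejects payloads it cannot decode.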
Archive Kubernetes logs to AWS S3:

sources:
  k8s_in:
    type: "kubernetes_logs"

sinks:
  aws_s3_out:
    type: "aws_s3"
    inputs: ["k8s_in"]
    bucket: "k8s-logs"
    region: "us-east-1"
    compression: "gzip"
    encoding:
      codec: "json"
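Objects can be grouped within the bucket by setting a key_prefix on the sink; a sketch, assuming a date-based layout is wanted (the prefix template is illustrative):

sinks:
  aws_s3_out:
    type: "aws_s3"
    inputs: ["k8s_in"]
    bucket: "k8s-logs"
    region: "us-east-1"
    # Partition objects by event date, e.g. date=2024-05-01/...
    key_prefix: "date=%F/"
    compression: "gzip"
    encoding:
      codec: "json"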
Route logs arriving over the Splunk HTTP Event Collector (HEC) protocol to Datadog:

sources:
  splunk_hec_in:
    type: "splunk_hec"
    address: "0.0.0.0:8080"
    token: "${SPLUNK_HEC_TOKEN}"

sinks:
  datadog_out:
    type: "datadog_logs"
    inputs: ["splunk_hec_in"]
    default_api_key: "${DATADOG_API_KEY}"
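Because any number of sinks can read from the same source, traffic can also be mirrored to both backends while a migration is in progress; a sketch, assuming the splunk_hec_logs sink and an illustrative Splunk endpoint:

sinks:
  datadog_out:
    type: "datadog_logs"
    inputs: ["splunk_hec_in"]
    default_api_key: "${DATADOG_API_KEY}"
  splunk_out:
    # Forward the same events back out over Splunk HEC.
    type: "splunk_hec_logs"
    inputs: ["splunk_hec_in"]
    endpoint: "https://splunk.example.com:8088"
    default_token: "${SPLUNK_HEC_TOKEN}"
    encoding:
      codec: "json"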
Install with a one-liner:

curl --proto '=https' --tlsv1.2 -sSfL https://sh.vector.dev | bash

Or run it non-interactively, accepting the default options:

curl --proto '=https' --tlsv1.2 -sSfL https://sh.vector.dev | bash -s -- -y
Or choose your preferred installation method for your platform.
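To confirm the install, a minimal config can generate demo events and print them to the console; a sketch, where the component names and the vector.yaml file name are illustrative:

sources:
  demo:
    # Emit synthetic JSON-formatted log events.
    type: "demo_logs"
    format: "json"

sinks:
  console_out:
    # Print each event to stdout as JSON.
    type: "console"
    inputs: ["demo"]
    encoding:
      codec: "json"

Running vector --config vector.yaml with this file should print JSON-formatted demo events to stdout.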
A wide range of sources, transforms, and sinks to choose from