Deployment Roles

Vector is an end-to-end observability data platform designed to collect, process, and route data, which means Vector can serve every role in your pipeline. Vector can be deployed as an agent, sidecar, or aggregator, and you combine these roles to form topologies, which are covered in the next section. In this section we'll cover each role in detail and help you understand when to use each.

Agent

Daemon

Vector daemon deployment strategy
1. Your service logs to STDOUT
STDOUT follows the 12 factor principles.
2. STDOUT is captured
STDOUT is captured by your platform.
3. Vector collects & fans-out data
Vector collects data from your platform.

The daemon role is designed to collect all data on a single host. This is the recommended role for data collection since it makes the most efficient use of host resources. Vector implements a directed acyclic graph topology model, enabling collection and processing from multiple services.
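As a sketch of the daemon role, a single Vector instance per host might tail the logs the platform captures and fan the same stream out to multiple sinks. The file path, bucket, and sink choices below are hypothetical, and option names can vary between Vector versions:

```toml
# Hypothetical daemon config: one Vector instance per host collects
# all logs captured by the platform and fans them out.

[sources.host_logs]
type = "file"
include = ["/var/log/containers/*.log"]   # hypothetical path

# Fan-out: the same stream feeds multiple sinks.
[sinks.archive]
type = "aws_s3"
inputs = ["host_logs"]
bucket = "example-log-archive"            # hypothetical bucket
region = "us-east-1"
encoding.codec = "json"

[sinks.console]
type = "console"
inputs = ["host_logs"]
encoding.codec = "json"
```

Because sources and sinks are wired together by the `inputs` lists, adding another destination is a matter of appending one more sink block that references the same source.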

Sidecar

Vector sidecar deployment strategy
1. Your service logs to a shared resource
Such as a file on a shared volume or anything Vector can access.
2. Vector ingests the data
Such as tailing a file on a shared volume.
3. Vector forwards the data
Vector forwards the data to one or more downstream services.

The sidecar role couples Vector with each service, focusing on data collection for that individual service only. While the daemon role is recommended, the sidecar role is beneficial when you want to shift responsibility for data collection to the service owner. And, in some cases, it can be simpler to manage.
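A minimal sidecar sketch, assuming the service writes to a file on a shared volume and an aggregator is reachable downstream (the path and address below are hypothetical):

```toml
# Hypothetical sidecar config: Vector runs alongside one service,
# tailing a file on a shared volume and forwarding downstream.

[sources.app_log]
type = "file"
include = ["/shared/app.log"]           # hypothetical shared-volume path

[sinks.upstream]
type = "vector"
inputs = ["app_log"]
address = "aggregator.internal:9000"    # hypothetical aggregator address
```

Each service ships with its own copy of a config like this, which is what shifts operational ownership of collection to the service owner.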

Aggregator

Vector service deployment strategy
1. Vector receives data
Vector receives data from another upstream Vector instance.
2. Vector processes data
Vector parses, transforms, and enriches data.
3. Vector fans-out data
Vector fans-out data to one or more downstream services.

The aggregator role is designed for central processing, collecting data from multiple upstream sources and performing cross-host aggregation and analysis.

For Vector, this role should be reserved for exactly that: cross-host aggregation and analysis. Vector is unique in that it can serve as both an agent and an aggregator, which makes it possible to distribute processing along the edge. We highly recommend pushing processing to the edge when possible, since it is more efficient and easier to manage.
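The three aggregator steps above can be sketched in one config: receive from upstream Vector instances, process centrally, then fan out. The listen address, enrichment field, and Elasticsearch endpoint are hypothetical, and sink options can vary between Vector versions:

```toml
# Hypothetical aggregator config: receive from upstream Vector
# agents, process centrally, and fan out.

[sources.agents]
type = "vector"
address = "0.0.0.0:9000"        # listens for upstream Vector instances

[transforms.enrich]
type = "remap"
inputs = ["agents"]
source = '''
.environment = "production"     # hypothetical cross-host enrichment
'''

[sinks.search]
type = "elasticsearch"
inputs = ["enrich"]
endpoints = ["http://example-es:9200"]   # hypothetical endpoint

[sinks.console]
type = "console"
inputs = ["enrich"]
encoding.codec = "json"
```

Note how the `vector` source here pairs with the `vector` sink in an agent or sidecar config, which is what lets the two roles compose into a topology.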