Vector is an end-to-end pipeline designed to collect, process, and route data, which means it can serve every role in your pipeline. You can deploy it as an agent, a sidecar, or an aggregator, and combine these roles to form topologies. In this section, we’ll cover each role in detail and help you understand when to use each.
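For instance, in the agent role Vector typically tails local data and forwards it to a central aggregator. A minimal sketch of such an agent configuration might look like the following (the file paths, component names, and address are illustrative assumptions, not fixed values; consult the Vector reference for the exact options of each component):

```toml
# Agent role: collect local logs and ship them to an aggregator.

[sources.app_logs]
type = "file"                        # tail local log files
include = ["/var/log/app/*.log"]     # illustrative path

[sinks.to_aggregator]
type = "vector"                      # forward to another Vector instance
inputs = ["app_logs"]
address = "aggregator.internal:9000" # illustrative aggregator address
```

The agent does no heavy processing here; it simply forwards data downstream, keeping the per-host footprint small.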
The aggregator role is designed for central processing, collecting data from multiple upstream sources and performing cross-host aggregation and analysis.
For Vector, this role should be reserved for exactly that: cross-host aggregation and analysis. Vector is unique in that it can serve as both an agent and an aggregator, which makes it possible to distribute processing along the edge. We highly recommend pushing processing to the edge whenever possible, since it is more efficient and easier to manage.
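To make the aggregator role concrete, here is a rough configuration sketch: a central Vector instance that receives events from downstream agents, performs cross-host aggregation, and routes the result. The component names, listen address, interval, and sink choice are illustrative assumptions; check the Vector reference for the exact options each component supports.

```toml
# Aggregator role: receive from Vector agents, aggregate across hosts, route.

[sources.vector_agents]
type = "vector"              # accept data sent by downstream Vector agents
address = "0.0.0.0:9000"     # illustrative listen address

[transforms.aggregate_metrics]
type = "aggregate"           # cross-host aggregation
inputs = ["vector_agents"]
interval_ms = 10000          # illustrative flush interval (10s)

[sinks.downstream]
type = "console"             # placeholder sink; swap in your real destination
inputs = ["aggregate_metrics"]
encoding.codec = "json"
```

Because the heavy lifting happens in one place, the aggregator is the natural spot for work that needs visibility across hosts, while per-host parsing and filtering stay on the agents at the edge.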