Deployment topologies

In the previous section we covered the various deployment strategies used to collect and forward data. You combine these strategies to form topologies. This section showcases common topologies and the pros and cons of each. Use these as guidelines to build your own.


Distributed

The simplest topology. In a distributed setup, Vector communicates directly with your downstream services from your client nodes.


Pros

  • Simple. Fewer moving parts.
  • Elastic. Easily scales with your app. Resources grow as you scale.


Cons

  • Less efficient. Depending on the complexity of your pipelines, this will use more local resources, which could disrupt the performance of other applications on the same host.
  • Less durable. Because data is buffered on the host, you're more likely to lose buffered data in the event of an unrecoverable crash. Oftentimes this is the most important and useful data.
  • More downstream stress. Downstream services will receive more requests with smaller payloads that could potentially disrupt the stability of these services.
  • Reduced downstream stability. You risk overloading downstream services if you need to scale up quickly or exceed the capacity a downstream service can handle.
  • Lacks multi-host context. Lacks awareness of other hosts and eliminates the ability to perform operations across hosts, such as reducing logs to global metrics. This is typically a concern for very large deployments where individual host metrics are less useful.
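As a concrete sketch, a distributed agent tails local log files and writes straight to a downstream service. The file paths and Elasticsearch endpoint below are hypothetical placeholders:

```toml
# vector.toml on each client node (hypothetical paths and endpoint)

# Collect this host's application logs
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

# Ship them directly to the downstream service
[sinks.downstream]
type = "elasticsearch"
inputs = ["app_logs"]
endpoint = "http://elasticsearch.internal:9200"
```

Every node runs this same pipeline, which is what makes the topology simple and elastic, but also what multiplies small requests against the downstream service.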


Centralized

A good balance of simplicity, stability, and control. For many use cases, a centralized deployment topology is a good compromise between the distributed and stream-based topologies. It offers many of the advantages of a stream-based topology, such as a clean separation of responsibilities, without the management overhead incurred by a stream-based setup, which often involves running Vector in conjunction with a system like Apache Kafka or Apache Pulsar.


Pros

  • More efficient. Centralized topologies are typically more efficient for client nodes and downstream services. Vector agents do less work and thus use fewer resources. In addition, in this topology the centralized Vector service buffers data, provides better compression, and sends optimized requests downstream.
  • More reliable. Vector protects downstream services from volume spikes by buffering and flushing data at smoothed-out intervals.
  • Has multi-host context. Because your data is centralized, you can perform operations across hosts, such as reducing logs to global metrics. This can be advantageous for large deployments in which metrics aggregated across many hosts are more informative than isolated per-host metrics.


Cons

  • More complex. A centralized topology has more moving parts, as you need to run Vector in both the agent and aggregator roles.
  • Less durable. Agent nodes are designed to get data off of the machine as quickly as possible. While this is fine for some use cases, it does bear the possibility of data loss since the central Vector service could go down and thus lose any buffered data. If this type of outage is unacceptable for your requirements, we recommend running a stream-based topology instead.
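A minimal sketch of this topology links the two roles with Vector's vector sink and source; the addresses and endpoint below are hypothetical:

```toml
# Agent role (each client node): forward to the central Vector service
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

[sinks.to_aggregator]
type = "vector"
inputs = ["app_logs"]
address = "vector-aggregator.internal:9000"
```

```toml
# Aggregator role (central service): receive from agents, buffer, and
# send optimized requests downstream
[sources.from_agents]
type = "vector"
address = "0.0.0.0:9000"

[sinks.downstream]
type = "elasticsearch"
inputs = ["from_agents"]
endpoint = "http://elasticsearch.internal:9200"
```

Agents stay thin and get data off the host quickly, while the aggregator absorbs volume spikes before they reach downstream services.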

Stream-based

The most durable and elastic topology. This topology is typically adopted for very large streams with teams that are familiar with running a stream-based service such as Kafka.


Pros

  • Most durable and reliable. Stream services, like Kafka, are designed for high durability and reliability, replicating data across multiple nodes.
  • Most efficient. Vector agents are doing less, making them more efficient, and Vector services do not have to worry about durability, so they can be tuned for performance.
  • Ability to re-stream. Re-stream your data depending on your stream’s retention period.
  • Cleaner separation of responsibilities. Vector is used solely as a routing layer and is not responsible for durability. Durability is delegated to a purpose-built service that you can switch and evolve over time.


Cons

  • Increased management overhead. Managing a stream service, such as Kafka, is a complex endeavor and generally requires an experienced team to set up and manage properly.
  • More complex. This topology is complex and requires a deeper understanding of managing production-grade streams.
  • More expensive. In addition to the management cost, the added stream cluster will require more resources, which will increase operational cost.
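As a sketch, agents publish to the stream and a separate Vector service consumes from it. The Kafka addresses, topic name, and consumer group below are hypothetical:

```toml
# Agent role: get data off the host and into Kafka as fast as possible
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

[sinks.kafka_out]
type = "kafka"
inputs = ["app_logs"]
bootstrap_servers = "kafka-1.internal:9092"
topic = "logs"
encoding.codec = "json"
```

```toml
# Downstream Vector service: consume from Kafka, then route; durability
# is Kafka's job, so this service can be tuned for throughput
[sources.kafka_in]
type = "kafka"
bootstrap_servers = "kafka-1.internal:9092"
group_id = "vector-consumers"
topics = ["logs"]

[sinks.downstream]
type = "elasticsearch"
inputs = ["kafka_in"]
endpoint = "http://elasticsearch.internal:9200"
```

Because Kafka retains the topic for its configured retention period, replaying data after a downstream outage is a matter of resetting the consumer group's offsets.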