Before you begin, it's useful to become familiar with the basic concepts that underpin Vector. These concepts are used throughout the documentation and are helpful to understand as you proceed.


"Component" is the generic term we use for sources, transforms, and sinks. You compose components to create pipelines, allowing you to ingest, transform, and send data.

View all components


The purpose of Vector is to collect data from a variety of sources, in a variety of shapes. Depending on the source type, Vector either pulls data or receives it. As Vector ingests data, it normalizes that data into an event (see the events section below). This sets the stage for easy and consistent processing of your data. Examples of sources include file, syslog, tcp, and stdin.

View all sources
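For instance, a source that tails log files might be configured like the sketch below. The component ID, options, and glob syntax are illustrative assumptions; consult the source reference for the exact settings.

```toml
# Hypothetical source: tail log files and ingest each line as an event.
# The "include" option and its glob syntax are assumptions; check the
# file source reference for your Vector version.
[sources.app_logs]
  type    = "file"
  include = ["/var/log/app/*.log"]
```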


A "transform" is anything that modifies an event or the stream as a whole, such as a parser, filter, sampler, or aggregator. This term is purposefully generic to help simplify the concepts Vector is built on.

View all transforms
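As a sketch, a transform that parses incoming events might look like the following. The component name, type, and options are assumptions for illustration, not exact settings; the `inputs` option is how a transform is wired to an upstream component.

```toml
# Hypothetical transform: parse each event's message as JSON.
# The type name and options are assumptions; see the transform
# reference for the real settings.
[transforms.parse_logs]
  type   = "json_parser"
  inputs = ["app_logs"]   # ID of an upstream source
```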


A sink is a destination for events. Each sink's design and transmission method is dictated by the downstream service it interacts with. For example, the tcp sink streams individual events, while the aws_s3 sink buffers and flushes data in batches.

View all sinks
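For example, a sink that archives events to S3 might be sketched as follows. The bucket, region, and other options are illustrative assumptions; see the aws_s3 sink reference for the actual settings.

```toml
# Hypothetical sink: buffer events and flush them to an S3 bucket.
# All option names and values here are assumptions for illustration.
[sinks.archive]
  type   = "aws_s3"
  inputs = ["parse_logs"]   # ID of an upstream component
  bucket = "my-log-archive"
  region = "us-east-1"
```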


"Event" is the generic term Vector uses to represent all data (logs and metrics) flowing through Vector. Events are covered in detail in the data model section.

View data model
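As a rough illustration, a single log event can be thought of as a structured map of fields, for example (the field names below are assumptions; the data model section defines the actual schema):

```json
{
  "timestamp": "2019-05-02T12:34:56Z",
  "host": "web-01",
  "message": "GET /index.html 200"
}
```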


A "pipeline" is the end result of connecting sources, transforms, and sinks. You can see a full example of a pipeline in the configuration section.

View configuration
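Putting the pieces together, a minimal pipeline might be sketched like this. Every component name and option below is an illustrative assumption; the configuration section has real, complete examples. Note how each component's `inputs` option references the ID of the component upstream of it, forming the pipeline.

```toml
# Hypothetical end-to-end pipeline: tail files, sample the stream,
# and forward the remaining events over TCP. All names and options
# are assumptions; consult the reference docs for exact settings.
[sources.logs]
  type    = "file"
  include = ["/var/log/*.log"]

[transforms.sample]
  type   = "sampler"
  inputs = ["logs"]        # wire the source into the transform

[sinks.out]
  type    = "tcp"
  inputs  = ["sample"]     # wire the transform into the sink
  address = "logs.example.com:9000"
```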