It's worth getting familiar with the basic concepts that make up Vector, as they are used throughout the documentation. This knowledge will be helpful as you proceed, and is also cool to brag about amongst friends.


"Component" is the generic term we use for sources, transforms, and sinks. You compose components to create pipelines, allowing you to ingest, transform, and send data.

View all components


Vector would be junk if it couldn't ingest data. A "source" defines where Vector should pull data from, or how it should receive data pushed to it. A pipeline can have any number of sources, and as data is ingested, Vector normalizes it into events (see next section). This sets the stage for easy and consistent processing of your data. Examples of sources include file, syslog, socket, and stdin.

View all sources
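As a sketch, a file source tailing application logs might look like this in Vector's TOML configuration (the component name `app_logs` and the path are illustrative):

```toml
# Tail log files and ingest each line as an event.
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]
```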


A "transform" is responsible for mutating events as they are transported by Vector. This might involve parsing, filtering, sampling, or aggregating. You can have any number of transforms in your pipeline, and how they are composed is up to you.

View all transforms
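For example, a sampling transform can thin out a high-volume stream before it reaches a sink. A hedged sketch (the names `sample_10pct` and `app_logs` are illustrative, and the exact option names may differ across Vector versions):

```toml
# Pass roughly 1 in 10 events through to downstream components.
[transforms.sample_10pct]
type = "sample"
inputs = ["app_logs"]
rate = 10
```

The `inputs` field is what wires components together: a transform consumes the output of whichever sources or transforms it names.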


A "sink" is a destination for events. Each sink's design and transmission method is dictated by the downstream service it is interacting with. For example, the socket sink will stream individual events, while the aws_s3 sink will buffer and flush data.

View all sinks
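A sink is configured much like a transform, except nothing consumes its output. A rough sketch of an aws_s3 sink (the bucket, region, and component names are hypothetical):

```toml
# Buffer events and flush them to S3 in batches.
[sinks.archive]
type = "aws_s3"
inputs = ["sample_10pct"]
bucket = "my-log-archive"
region = "us-east-1"
encoding.codec = "json"
```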


Every item passing through Vector, whether a log or a metric, is known as an "event". Events are explained in detail in the data model section.

View data model


A "pipeline" is the end result of connecting sources, transforms, and sinks. You can see a full example of a pipeline in the configuration section.

View configuration
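Putting the pieces together, a minimal end-to-end pipeline reads from a source, passes events through a transform, and writes to a sink. A sketch under the assumption of a stdin source, a remap transform, and a console sink (all component names here are illustrative):

```toml
# Read lines from standard input.
[sources.in]
type = "stdin"

# Annotate each event with a field using a remap program.
[transforms.tag]
type = "remap"
inputs = ["in"]
source = '.environment = "demo"'

# Print the resulting events as JSON.
[sinks.out]
type = "console"
inputs = ["tag"]
encoding.codec = "json"
```

Note that the pipeline's shape is implied entirely by the `inputs` fields rather than by any explicit ordering in the file.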