The individual pieces of data flowing through Vector are known as events. Events are arbitrarily wide and deep structured pieces of data, with no schema requirements or limitations. Ideally, an event contains enough rich information to derive any type of monitoring data from it.
Vector defines subtypes for events. This is necessary to establish domain-specific requirements, enabling interoperability with existing monitoring and observability systems.
Why Not Just Events?
We very much like the idea of an event-only world, one where every service is perfectly instrumented with events that contain rich data and context. Unfortunately, that is not the case today; existing services usually emit metrics, traces, and logs of varying quality. By designing Vector to meet services where they are, we serve as a bridge to newer standards. This is why we place "events" at the top of our data model, with logs and metrics derived from them.
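The hierarchy described above can be sketched as a single top-level event type with log and metric subtypes. This is a minimal illustrative sketch, not Vector's actual internal types; the names here are assumptions.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: logs and metrics as subtypes of one
// top-level Event type, so components handle a uniform stream.
#[derive(Debug)]
enum Event {
    Log(LogEvent),
    Metric(MetricEvent),
}

#[derive(Debug)]
struct LogEvent {
    // Arbitrarily wide and deep structured data, modeled here
    // as a flat string map for simplicity.
    fields: BTreeMap<String, String>,
}

#[derive(Debug)]
struct MetricEvent {
    name: String,
    value: f64,
}

// Components can branch on the subtype while still accepting
// a single Event stream.
fn kind(event: &Event) -> &'static str {
    match event {
        Event::Log(_) => "log",
        Event::Metric(_) => "metric",
    }
}

fn main() {
    let mut fields = BTreeMap::new();
    fields.insert("message".into(), "user logged in".into());

    let events = vec![
        Event::Log(LogEvent { fields }),
        Event::Metric(MetricEvent {
            name: "logins_total".into(),
            value: 1.0,
        }),
    ];

    for event in &events {
        println!("{}", kind(event));
    }
}
```

Sources and sinks that only understand one subtype can match on the variant they care about and pass the rest through.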
Finally, a sophisticated data model that accounts for the various data types allows for correct interoperability between observability systems. For example, a pipeline with a statsd source and a prometheus sink would not be possible without the correct internal metrics data types.
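To make the statsd-to-prometheus example concrete, here is a hedged sketch of why typed internal metrics matter: a statsd line carries a type marker (`c` for counter, `g` for gauge), and the Prometheus exposition format needs that type to emit the correct `# TYPE` hint. The parsing and function names are simplified assumptions, not Vector's real implementation.

```rust
// Hypothetical sketch: typed internal metrics let a statsd-style
// input be rendered correctly in Prometheus exposition format.
#[derive(Debug, PartialEq)]
enum MetricValue {
    Counter(f64),
    Gauge(f64),
}

struct Metric {
    name: String,
    value: MetricValue,
}

// Parse a minimal statsd line like "logins:1|c" (counter)
// or "temp:21.5|g" (gauge).
fn parse_statsd(line: &str) -> Option<Metric> {
    let (name, rest) = line.split_once(':')?;
    let (raw, kind) = rest.split_once('|')?;
    let v: f64 = raw.parse().ok()?;
    let value = match kind {
        "c" => MetricValue::Counter(v),
        "g" => MetricValue::Gauge(v),
        _ => return None, // other statsd types omitted for brevity
    };
    Some(Metric { name: name.to_string(), value })
}

// Render in Prometheus exposition format, using the internal
// metric type to emit the matching TYPE hint.
fn to_prometheus(m: &Metric) -> String {
    let (ty, v) = match m.value {
        MetricValue::Counter(v) => ("counter", v),
        MetricValue::Gauge(v) => ("gauge", v),
    };
    format!("# TYPE {name} {ty}\n{name} {v}\n", name = m.name)
}

fn main() {
    let m = parse_statsd("logins:1|c").unwrap();
    print!("{}", to_prometheus(&m));
}
```

Without the typed `MetricValue`, the sink would have no way to know whether to declare the series a counter or a gauge, which is exactly the interoperability gap the data model closes.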