The Vector team is pleased to announce version 0.15.0!
This release includes a number of new components for collecting and sending data using Vector:
datadog_events sink for sending events to Datadog's event stream.
dnstap source for collecting events from a DNS server via the dnstap protocol.
fluent source for collecting logs forwarded by FluentD, Fluent Bit, or other services capable of forwarding using the fluent protocol, such as Docker.
logstash source for collecting logs forwarded by Logstash, Elastic Beats, or other services capable of forwarding using the lumberjack protocol.
azure_blob sink for forwarding logs to Azure's Blob Storage.
redis sink for forwarding logs to Redis.
eventstoredb_metrics source for collecting metrics from EventStoreDB.
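As an illustration, two of the new components (the fluent source and the redis sink) could be paired into a minimal pipeline. This is a sketch only; the option names below are assumptions and should be checked against the component reference documentation:

```toml
# Accept logs forwarded over the fluent protocol
# (e.g. from FluentD, Fluent Bit, or Docker's fluentd log driver).
[sources.fluent_in]
type = "fluent"
address = "0.0.0.0:24224"

# Forward each event to Redis as JSON.
[sinks.redis_out]
type = "redis"
inputs = ["fluent_in"]
url = "redis://localhost:6379/0"
key = "vector-logs"
encoding.codec = "json"
```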
This release also includes a number of smaller enhancements:
binary encoding support for the http source
content_md5 support when writing objects, to work with S3 object locking
datadog_search condition type
fingerprint.lines option to use multiple lines to calculate fingerprints
parse_logfmt function, plus a minor related fix
host tag on internal_metrics
--config-dir flag to read configuration from directories
graph subcommand for generating a graph in DOT format
parse_apache_log now handles a missing thread id
build to only output warnings when
parse_syslog now handles unstructured messages
referer support in the nginx parser
to_int now truncates floats
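Two of the VRL additions above can be combined in a remap transform. The sketch below assumes events with a logfmt-formatted message field containing a numeric status value; the source name is hypothetical:

```toml
[transforms.parse]
type = "remap"
inputs = ["app_logs"]  # hypothetical upstream source
source = '''
# parse_logfmt is new in this release; the `!` variant aborts on parse failure
. = parse_logfmt!(string!(.message))

# to_int now truncates floats, so a value like 3.9 becomes 3
.status = to_int!(.status)
'''
```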
We’ve heard from a number of users that they’d like improved delivery guarantees for events flowing through Vector. We are working on a feature that, for components able to support it, only acknowledges data flowing into source components after that data has been sent by all associated sinks. For example, this would avoid acknowledging messages in Kafka until the data in those messages has been sent via all associated sinks.
This release adds acknowledgement support to additional source and sink components, but the feature has not yet been fully documented and tested. We expect to officially release it with 0.16.0.
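To make the "all associated sinks" behavior concrete, here is a sketch of a fan-out topology this would apply to: a Kafka message would only be acknowledged once both downstream sinks had sent it. The acknowledgement feature itself is not yet documented, so no acknowledgement-specific options are shown, and the option values are illustrative assumptions:

```toml
# Consume messages from Kafka.
[sources.kafka_in]
type = "kafka"
bootstrap_servers = "localhost:9092"
group_id = "vector"
topics = ["logs"]

# Fan out to two sinks; with end-to-end acknowledgements, the
# Kafka offset would only be committed after both sinks succeed.
[sinks.archive]
type = "console"
inputs = ["kafka_in"]
encoding.codec = "json"

[sinks.search]
type = "elasticsearch"
inputs = ["kafka_in"]
endpoint = "http://localhost:9200"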
We are hard at work expanding the ability to run Vector as an aggregator in Kubernetes. This will allow you to build end-to-end observability pipelines in Kubernetes with Vector: distribute processing to the edge, centralize it with an aggregator, or both. If you are interested in beta testing, please join our chat and let us know.
We expect this to be released with 0.16.0.