Send logs from Kafka to AWS S3

A simple guide to send logs from Kafka to AWS S3 in just a few minutes.
type: tutorial | domain: sources | domain: sinks | source: kafka | sink: aws_s3

Logs are an essential part of observing any service; without them you are flying blind. But collecting and analyzing them can be a real challenge -- especially at scale. Not only do you need to solve the basic task of collecting your logs, but you must do it in a reliable, performant, and robust manner. Nothing is more frustrating than having your logs pipeline fall on its face during an outage, or even worse, disrupt more important services!

Fear not! In this guide we'll show you how to send logs from Kafka to AWS S3 and build a logs pipeline that will be the backbone of your observability strategy.


What is Kafka?

Apache Kafka is an open source project for a distributed publish-subscribe messaging system rethought as a distributed commit log. Kafka stores messages in topics that are partitioned and replicated across multiple brokers in a cluster. Producers send messages to topics from which consumers read. This makes it an excellent candidate for durably storing logs and metrics data.

What is AWS S3?

Amazon Simple Storage Service (Amazon S3) is a scalable, high-speed, web-based cloud storage service designed for online backup and archiving of data and applications on Amazon Web Services. It is very commonly used to store log data.


How This Guide Works

We'll be using Vector to accomplish this task. Vector is a popular open-source utility for building observability pipelines. It's written in Rust, making it lightweight, ultra-fast and highly reliable. And we'll be deploying Vector as a service.

The service deployment strategy treats Vector like a separate service. It is designed to receive data from an upstream source and fan out to one or more destinations. For this guide, Vector will collect data from Kafka via Vector's kafka source. The following diagram demonstrates how it works.

Vector service deployment strategy (diagram)
1. Vector collects data from Kafka
Vector will consume one or more Kafka topics.
2. Vector processes data
Vector parses, transforms, and enriches data.
3. Vector transmits data to AWS S3
Vector will send logs to AWS S3.
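
Step 2 above is where transforms slot in between the source and the sink. As a sketch only -- assuming a Vector version that supports the `remap` transform and VRL, and using illustrative component names (`in`, `parse`, `out`) and placeholder connection values -- a config that parses JSON log messages on the way to S3 could look like this:

```shell
# Sketch: writes a Vector config with a remap transform wired between
# the kafka source and the aws_s3 sink. Component names and connection
# values here are illustrative, not prescriptive.
cat > vector-transform-sketch.toml <<'VECTORCFG'
[sources.in]
  type = "kafka"
  bootstrap_servers = "localhost:9092"
  group_id = "consumer-group-name"
  topics = ["topic-1"]

[transforms.parse]
  type = "remap"
  inputs = ["in"]
  # Parse each raw Kafka message body as JSON; events that are not
  # valid JSON will abort this program (the `!` variants raise errors).
  source = '''
    . = parse_json!(string!(.message))
  '''

[sinks.out]
  type = "aws_s3"
  inputs = ["parse"]
  bucket = "my-bucket"
  region = "us-east-1"
VECTORCFG
```

Note that the sink's `inputs` now points at the transform rather than directly at the source -- that is how Vector chains stages into a pipeline.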

What We'll Accomplish

To be clear, here's everything we'll accomplish in this short guide:

  • Consume one or more Kafka topics.
    • Checkpoint your position to ensure data is not lost between restarts.
    • Enrich your logs with useful Kafka context.
  • Send logs to AWS S3.
    • Dynamically partition logs across different key prefixes.
    • Compress and batch data to reduce storage cost and improve throughput.
    • Optionally adjust ACL and encryption settings.
    • Automatically retry failed requests, with backoff.
    • Buffer your data in-memory or on-disk for performance and durability.
  • All in just a few minutes!
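
Several of the S3 bullets above correspond to optional settings on the aws_s3 sink. Here is a hedged sketch of what they might look like -- the option names follow Vector's aws_s3 sink documentation, but the sink name (`out`) and all values are illustrative, so check the docs for your Vector version before copying:

```shell
# Sketch: extends the guide's aws_s3 sink with optional settings for
# partitioning, compression, batching, and buffering. Values are
# illustrative placeholders.
cat > vector-s3-options-sketch.toml <<'VECTORCFG'
[sinks.out]
  type = "aws_s3"
  inputs = ["in"]
  bucket = "my-bucket"
  region = "us-east-1"
  # Dynamic partitioning: strftime specifiers expand per event,
  # grouping objects under date-based key prefixes.
  key_prefix = "date=%F/"
  # Compress objects to reduce storage cost and improve throughput.
  compression = "gzip"

  # Batch settings: flush when either limit is reached.
  [sinks.out.batch]
    max_bytes = 10000000   # ~10 MB per object
    timeout_secs = 300     # ...or every 5 minutes

  # Buffer settings: an on-disk buffer survives restarts.
  [sinks.out.buffer]
    type = "disk"
    max_size = 104900000   # bytes
VECTORCFG
```

With a disk buffer, events accepted by the sink are persisted locally until S3 acknowledges them, trading a little throughput for durability.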


  1. Install Vector

    curl --proto '=https' --tlsv1.2 -sSf https://sh.vector.dev | sh

    Or choose your preferred method.

  2. Configure Vector

    cat <<-VECTORCFG > vector.toml
    [sources.in]
      type = "kafka" # required
      bootstrap_servers = "," # required
      group_id = "consumer-group-name" # required
      topics = ["^(prefix1|prefix2)-.+", "topic-1", "topic-2"] # required

    [sinks.out]
      type = "aws_s3" # required
      inputs = ["in"] # required
      bucket = "my-bucket" # required
      region = "us-east-1" # required when endpoint is not set
    VECTORCFG
  3. Start Vector

    vector --config vector.toml

    That's it! Simple and to the point. Hit ctrl+c to exit.
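
    Once Vector is running, logs land in S3 under the configured key prefix. If you use a strftime-style `key_prefix` such as `date=%F/` on the aws_s3 sink, you can preview what it expands to with `date` -- an illustrative aside, since Vector performs this expansion per event:

    ```shell
    # Preview the S3 key prefix a strftime-style template produces today.
    # Vector's aws_s3 sink expands specifiers like %F (YYYY-MM-DD) per
    # event; `date` shows the same expansion for the current time.
    prefix=$(date +"date=%F/")
    echo "$prefix"   # prints something like date=YYYY-MM-DD/
    ```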

Next Steps

Vector is a powerful utility and we're just scratching the surface in this guide. Here are a few pages we recommend that demonstrate the power and flexibility of Vector:

  • Vector GitHub repo -- Vector is free and open-source!
  • Vector getting started series -- Go from zero to production in under 10 minutes!
  • Vector documentation -- Thoughtful, detailed docs that respect your time.