Send logs from Docker to AWS Kinesis Firehose

A simple guide to send logs from Docker to AWS Kinesis Firehose in just a few minutes.

Logs are an essential part of observing any service; without them you'll have significant blind spots. But collecting and analyzing them can be a real challenge -- especially at scale. Not only do you need to solve the basic task of collecting your logs, but you must do it in a reliable, performant, and robust manner. Nothing is more frustrating than having your logs pipeline fall on its face during an outage, or even worse, cause the outage!

Fear not! In this guide we'll build an observability pipeline that will send logs from Docker to AWS Kinesis Firehose.

Background

What is Docker?

Docker is an open platform for developing, shipping, and running applications and services. Docker enables you to separate your services from your infrastructure so you can ship quickly. With Docker, you can manage your infrastructure in the same ways you manage your services. By taking advantage of Docker's methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

What is AWS Kinesis Firehose?

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), and Splunk.

Strategy

How This Guide Works

We'll be using [Vector][urls.vector_website] to accomplish this task. Vector is a popular open-source observability data pipeline. It's written in Rust, making it lightweight, ultra-fast, and highly reliable. And we'll be deploying Vector as a daemon.

Vector daemon deployment strategy
1. Your service logs to STDOUT
Logging to STDOUT follows the 12-factor principles.
2. STDOUT is captured
STDOUT is captured and sent to the Docker platform.
3. Vector collects & fans-out data
Vector sends logs to [AWS Kinesis Firehose](https://aws.amazon.com/kinesis/data-firehose/).
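Step 1 above is simply your application writing to its standard output stream. As a sketch of what that looks like in practice, here is a minimal service that emits structured JSON log lines to STDOUT (the field names `timestamp`, `level`, and `message` are illustrative conventions, not anything Vector requires):

```python
# Minimal 12-factor-style logging: write one JSON event per line to STDOUT.
# Docker captures this stream, and Vector's docker_logs source picks it up.
import json
import sys
from datetime import datetime, timezone

def log(level, message, **fields):
    """Write one JSON log line to STDOUT."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
        **fields,
    }
    sys.stdout.write(json.dumps(event) + "\n")

log("info", "request handled", status=200, path="/healthz")
```

The key point is that the service never manages log files or network shipping itself; it just writes lines, and the platform handles the rest.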

What We'll Accomplish

We'll build an observability data pipeline that:

  • Collects logs from Docker.
    • Enriches data with useful Docker context.
    • Efficiently collects data and checkpoints read positions to ensure data is not lost between restarts.
    • Merges multi-line logs into one event.
  • Sends logs to AWS Kinesis Firehose.
    • Buffers data in-memory or on-disk for performance and durability.
    • Compresses data to optimize bandwidth.
    • Automatically retries failed requests, with backoff.
    • Securely transmits data via Transport Layer Security (TLS).
    • Batches data to maximize throughput.
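Most of the sink behaviors above are tunable in Vector's configuration. As a rough sketch (the stream name, region, and size values below are illustrative placeholders, not defaults), the buffering, compression, and batching options on the `aws_kinesis_firehose` sink look like this:

```toml
[sinks.firehose]
type = "aws_kinesis_firehose"
inputs = ["docker"]
region = "us-east-1"          # placeholder
stream_name = "my-stream"     # placeholder
encoding.codec = "json"
compression = "gzip"          # compress data to optimize bandwidth

  [sinks.firehose.batch]
  max_events = 500            # batch data to maximize throughput

  [sinks.firehose.buffer]
  type = "disk"               # buffer on-disk for durability
  max_size = 268435488        # bytes; illustrative
  when_full = "block"
```

Retries with backoff and TLS transport are handled automatically and need no configuration for the common case.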

All in just a few minutes!

Tutorial

  1. Configure Vector

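A minimal `~/vector.toml` that wires the `docker_logs` source to the `aws_kinesis_firehose` sink might look like this (the region and stream name are placeholders -- substitute your own):

```toml
# ~/vector.toml -- minimal sketch; adjust for your environment.

[sources.docker]
type = "docker_logs"            # collect logs from the Docker daemon

[sinks.firehose]
type = "aws_kinesis_firehose"
inputs = ["docker"]
region = "us-east-1"            # placeholder
stream_name = "my-stream"       # placeholder
encoding.codec = "json"
```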
  2. Start Vector

    docker run \
      -d \
      -v ~/vector.toml:/etc/vector/vector.toml:ro \
      -p 8383:8383 \
      timberio/vector:0.13.X-debian
  3. Observe Vector

    docker logs -f $(docker ps -aqf "name=vector")
    The subcommand `docker ps -aqf "name=vector"` looks up the ID of the container whose name matches "vector", and `docker logs -f` follows that container's log output.

Next Steps

Vector is a powerful tool and we're just scratching the surface in this guide. Here are a few pages we recommend that demonstrate the power and flexibility of Vector:

Vector GitHub repo
Vector is free and open-source!
Vector quickstart
Get set up in just a few minutes
Vector documentation
Everything you need to know about Vector

[urls.vector_website]: https://vector.dev/