Send logs from STDIN to AWS CloudWatch

A simple guide to sending logs from STDIN to AWS CloudWatch in just a few minutes.

Logs are an essential part of observing any service; without them you are flying blind. But collecting and analyzing them can be a real challenge -- especially at scale. Not only do you need to solve the basic task of collecting your logs, but you must do it in a reliable, performant, and robust manner. Nothing is more frustrating than having your logs pipeline fall on its face during an outage, or even worse, disrupt more important services!

Fear not! In this guide we'll show you how to send logs from STDIN to AWS CloudWatch and build a logs pipeline that will be the backbone of your observability strategy.

Background

What is AWS CloudWatch Logs?

Amazon CloudWatch is a monitoring and management service that provides data and actionable insights for AWS, hybrid, and on-premises applications and infrastructure resources. With CloudWatch, you can collect and access all your performance and operational data in the form of logs and metrics from a single platform.

Strategy

How This Guide Works

We'll be using Vector to accomplish this task. Vector is a popular open-source utility for building observability pipelines. It's written in Rust, making it lightweight, ultra-fast and highly reliable. And we'll be deploying Vector as a sidecar.

The sidecar deployment strategy is designed to collect data from a single service. Vector has a tight one-to-one coupling with each service. Typically, data is collected by tailing local files via Vector's file source, but it can be collected through any of Vector's sources -- in this guide, the stdin source. The following diagram demonstrates how it works, and the example after it shows what the coupling looks like in practice.

Vector sidecar deployment strategy.
1. Your service logs to a shared resource
Such as a file on a shared volume or anything Vector can access.
2. Vector ingests the data
Vector ingests the data through its stdin source.
3. Vector forwards the data
Vector will send logs to AWS CloudWatch.
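
With the stdin source used in this guide, the shared resource is simply a pipe: your service writes newline-delimited logs to standard output, and the Vector sidecar reads them directly. As a minimal sketch (my-service is a placeholder for your own binary, and vector.toml is the config file built in the tutorial below):

    # Pipe the service's stdout and stderr into a dedicated Vector sidecar process
    my-service 2>&1 | vector --config vector.toml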

What We'll Accomplish

To be clear, here's everything we'll accomplish in this short guide:

  • Accept newline-delimited log data through STDIN.
    • Automatically enrich logs with host-level context.
  • Send logs to AWS CloudWatch (the configuration sketch after this list expands on the points below).
    • Dynamically partition logs across CloudWatch groups and streams.
    • Batch data to maximize throughput.
    • Automatically retry failed requests, with backoff.
    • Buffer your data in-memory or on-disk for performance and durability.
  • All in just a few minutes!
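
The tutorial below sets only the required sink options; the partitioning, batching, and buffering behavior called out above can be tuned further. The following is a hedged sketch of those extra settings on the aws_cloudwatch_logs sink -- option names and defaults can vary between Vector versions, and the templated group name (and its environment field) is purely illustrative, so treat it as a starting point rather than a drop-in config:

    [sinks.out]
    type = "aws_cloudwatch_logs"
    inputs = ["in"]
    region = "us-east-1"
    encoding.codec = "json"

    # Dynamic partitioning: template syntax pulls values from each event's fields
    group_name = "/my-app/{{ environment }}"  # illustrative; assumes events carry an `environment` field
    stream_name = "{{ host }}"

    # Batching: flush a batch when either limit is reached
    batch.max_events = 1000
    batch.timeout_secs = 1

    # Buffering: spill events to disk for durability instead of holding them in memory
    buffer.type = "disk"
    buffer.max_size = 104900000  # bytes
    buffer.when_full = "block"

Failed requests are retried with backoff by default, so the retry bullet needs no extra configuration.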

Tutorial

  1. Install Vector

    curl --proto '=https' --tlsv1.2 -sSf https://sh.vector.dev | sh

    Or choose your preferred method.

  2. Configure Vector

    cat <<-VECTORCFG > vector.toml
    [sources.in]
    type = "stdin" # required
    [sinks.out]
    # Encoding
    encoding.codec = "json" # required
    # General
    group_name = "group-name" # required
    inputs = ["in"] # required
    region = "us-east-1" # required when a custom endpoint is not set
    stream_name = "{{ host }}" # required
    type = "aws_cloudwatch_logs" # required
    VECTORCFG
  3. Start Vector

    vector --config vector.toml

    That's it! Simple and to the point. Hit ctrl+c to exit.
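
    To sanity-check the pipeline end to end, feed Vector a single line by hand. Assuming your AWS credentials are available to Vector (via environment variables, a shared credentials file, or an instance profile), the event should appear shortly afterwards in the group-name log group, in a stream named after your host:

    echo "hello from vector" | vector --config vector.toml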

Next Steps

Vector is a powerful utility and we're just scratching the surface in this guide. Here are a few pages we recommend that demonstrate the power and flexibility of Vector:

Vector GitHub repo
Vector is free and open-source!
Vector getting started series
Go from zero to production in under 10 minutes!
Vector documentation
Thoughtful, detailed docs that respect your time.