aws_cloudwatch_logs sink
The aws_cloudwatch_logs sink batches log events to AWS CloudWatch Logs via the PutLogEvents API endpoint.
Configuration
[sinks.my_sink_id]
# REQUIRED - General
type = "aws_cloudwatch_logs" # example, must be: "aws_cloudwatch_logs"
inputs = ["my-source-id"] # example
endpoint = "127.0.0.0:5000" # example
group_name = "{{ file }}" # example
region = "us-east-1" # example
stream_name = "{{ instance_id }}" # example

# REQUIRED - requests
encoding = "json" # example, enum

# OPTIONAL - General
create_missing_group = true # default
create_missing_stream = true # default
Options
batch_size
The maximum size of a batch, in bytes, before it is flushed. See Buffers & Batches for more info.
Default: 1049000
batch_timeout
The maximum age of a batch, in seconds, before it is flushed. See Buffers & Batches for more info.
Default: 1
buffer
Configures the sink-specific buffer.
max_size
The maximum size of the buffer on the disk.
type
The buffer's type / location. disk buffers are persistent and will be retained between restarts.
Default: "memory". Enum: "memory", "disk"
when_full
The behavior when the buffer becomes full.
Default: "block". Enum: "block", "drop_newest"
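As a sketch of how these nested options fit together (the sink id and max_size value below are illustrative, not defaults), a persistent disk buffer might be configured like this:

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...

  [sinks.my_sink_id.buffer]
  type = "disk"         # persist the buffer between restarts
  max_size = 104900000  # illustrative size in bytes, not a default
  when_full = "block"   # apply back pressure instead of dropping events
```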
create_missing_group
Dynamically create a log group if it does not already exist. This will ignore create_missing_stream directly after creating the group and will create the first stream.
Default: true
create_missing_stream
Dynamically create a log stream if it does not already exist.
Default: true
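For example, if you pre-provision your log groups and streams and want Vector to fail its healthcheck rather than create them, both options can be disabled (a sketch reusing the sink id from the configuration example above):

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...
create_missing_group = false   # require the log group to already exist
create_missing_stream = false  # require the log stream to already exist
```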
encoding
The encoding format used to serialize the events before outputting.
Enum: "json", "text"
endpoint
Custom endpoint for use with AWS-compatible services.
group_name
The group name of the target CloudWatch Logs stream. See Partitioning and Template Syntax for more info.
healthcheck
Enables/disables the sink healthcheck upon start. See Health Checks for more info.
Default: true
region
The AWS region where the target CloudWatch Logs stream resides.
request_in_flight_limit
The maximum number of in-flight requests allowed at any given time. See Rate Limits for more info.
Default: 5
request_rate_limit_duration_secs
The window, in seconds, used for the request_rate_limit_num option. See Rate Limits for more info.
Default: 1
request_rate_limit_num
The maximum number of requests allowed within the request_rate_limit_duration_secs window. See Rate Limits for more info.
Default: 5
request_retry_attempts
The maximum number of retries to make for failed requests. See Retry Policy for more info.
Default: 5
request_retry_backoff_secs
The amount of time, in seconds, to wait before attempting a failed request again. See Retry Policy for more info.
Default: 1
request_timeout_secs
The maximum time, in seconds, a request can take before being aborted. It is highly recommended that you do not lower this value below the service's internal timeout, as this could create orphaned requests, pile on retries, and result in duplicate data downstream.
Default: 30
stream_name
The stream name of the target CloudWatch Logs stream. See Partitioning and Template Syntax for more info.
Env Vars
AWS_ACCESS_KEY_ID
Used for AWS authentication when communicating with AWS services. See Authentication for more info.
AWS_SECRET_ACCESS_KEY
Used for AWS authentication when communicating with AWS services. See Authentication for more info.
Output
The aws_cloudwatch_logs sink batches log events to AWS CloudWatch Logs via the PutLogEvents API endpoint.
Batches are flushed via the batch_size or batch_timeout options. You can learn more in the buffers & batches section.
For example:
POST / HTTP/1.1
Host: logs.<region>.<domain>
X-Amz-Date: <date>
Accept: application/json
Content-Type: application/x-amz-json-1.1
Content-Length: <byte_size>
Connection: Keep-Alive
X-Amz-Target: Logs_20140328.PutLogEvents

{
  "logGroupName": "<group_name>",
  "logStreamName": "<stream_name>",
  "logEvents": [
    {
      "timestamp": <log_timestamp>,
      "message": "<json_encoded_log>"
    },
    {
      "timestamp": <log_timestamp>,
      "message": "<json_encoded_log>"
    },
    {
      "timestamp": <log_timestamp>,
      "message": "<json_encoded_log>"
    }
  ]
}
How It Works
Authentication
Vector checks for AWS credentials in the following order:
- Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
- The credential_process command in the AWS config file (usually located at ~/.aws/config).
- The AWS credentials file (usually located at ~/.aws/credentials).
- The IAM instance profile (will only work if running on an EC2 instance with an instance profile/role).
If credentials are not found, the healthcheck will fail and an error will be logged.
Obtaining an access key
In general, we recommend using instance profiles/roles whenever possible. In cases where this is not possible you can generate an AWS access key for any user within your AWS account. AWS provides a detailed guide on how to do this.
Buffers & Batches
The aws_cloudwatch_logs sink buffers and batches data as shown in the diagram above. You'll notice that Vector treats these concepts differently: instead of treating them as global concepts, Vector treats them as sink-specific concepts. This isolates sinks, ensuring service disruptions are contained and delivery guarantees are honored.
Batches are flushed when 1 of 2 conditions are met:
- The batch age meets or exceeds the configured batch_timeout (default: 1 second).
- The batch size meets or exceeds the configured batch_size (default: 1049000 bytes).
Buffers are controlled via the buffer.* options.
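Putting the two flush conditions together, a sketch that flushes smaller batches more frequently might look like this (the sink id and values are illustrative, not recommendations):

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...
batch_size = 524288  # flush when the batch reaches ~512 KiB
batch_timeout = 5    # or when the oldest event is 5 seconds old
```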
Environment Variables
Environment variables are supported through all of Vector's configuration.
Simply add ${MY_ENV_VAR}
in your Vector configuration file and the variable
will be replaced before being evaluated.
You can learn more in the Environment Variables section.
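For example, the region could be supplied at runtime through an environment variable (a sketch; AWS_REGION here is simply a variable you would export yourself before starting Vector):

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...
region = "${AWS_REGION}"  # replaced with the variable's value before evaluation
```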
Health Checks
Health checks ensure that the downstream service is accessible and ready to accept data. This check is performed upon sink initialization. If the health check fails, an error will be logged and Vector will proceed to start.
Require Health Checks
If you'd like to exit immediately upon a health check failure, you can
pass the --require-healthy
flag:
vector --config /etc/vector/vector.toml --require-healthy
Disable Health Checks
If you'd like to disable health checks for this sink you can set the healthcheck option to false.
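As a sketch, that looks like the following (sink id is illustrative):

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...
healthcheck = false  # skip the sink health check at startup
```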
Partitioning
Partitioning is controlled via the group_name and stream_name options and allows you to dynamically partition data on the fly. You'll notice that Vector's template syntax is supported for these options, enabling you to use field values as the partition's key.
Rate Limits
Vector offers a few levers to control the rate and volume of requests to the downstream service. Start with the request_rate_limit_duration_secs and request_rate_limit_num options to ensure Vector does not exceed the specified number of requests in the specified window. You can further control the pace at which this window is saturated with the request_in_flight_limit option, which will guarantee no more than the specified number of requests are in-flight at any given time.
Please note, Vector's defaults are carefully chosen and it should be rare that you need to adjust these. If you find a good reason to do so, please share it with the Vector team by opening an issue.
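As a sketch, capping the sink at 10 requests per 2-second window with at most 3 requests in flight would look like this (illustrative values, not the defaults):

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...
request_rate_limit_duration_secs = 2  # window length in seconds
request_rate_limit_num = 10           # max requests per window
request_in_flight_limit = 3           # max concurrent requests
```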
Retry Policy
Vector will retry failed requests (status == 429, >= 500, and != 501). Other responses will not be retried. You can control the number of retry attempts and backoff rate with the request_retry_attempts and request_retry_backoff_secs options.
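For example, a more patient retry configuration might look like this (illustrative values, not the defaults):

```toml
[sinks.my_sink_id]
type = "aws_cloudwatch_logs"
# ...
request_retry_attempts = 10     # retry each failed request up to 10 times
request_retry_backoff_secs = 2  # wait 2 seconds before each retry
```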
Template Syntax
The group_name and stream_name options support Vector's template syntax, enabling dynamic values derived from the event's data. This syntax accepts strptime specifiers as well as the {{ field_name }} syntax for accessing event fields. For example:
[sinks.my_aws_cloudwatch_logs_sink_id]
# ...
group_name = "{{ file }}"
group_name = "ec2/{{ instance_id }}"
group_name = "group-name"
# ...
You can read more about the complete syntax in the template syntax section.