Azure Blob Storage
Store your observability data in Azure Blob Storage
Configuration
Example configurations
{
  "sinks": {
    "my_sink_id": {
      "type": "azure_blob",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "connection_string": "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net",
      "container_name": "my-logs",
      "blob_prefix": "blob/%F/",
      "acknowledgements": null,
      "batch": null,
      "compression": "gzip",
      "encoding": {
        "codec": "json"
      },
      "healthcheck": null
    }
  }
}
[sinks.my_sink_id]
type = "azure_blob"
inputs = [ "my-source-or-transform-id" ]
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
container_name = "my-logs"
blob_prefix = "blob/%F/"
compression = "gzip"
[sinks.my_sink_id.encoding]
codec = "json"
---
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - my-source-or-transform-id
    connection_string: DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net
    container_name: my-logs
    blob_prefix: blob/%F/
    acknowledgements: null
    batch: null
    compression: gzip
    encoding:
      codec: json
    healthcheck: null
{
  "sinks": {
    "my_sink_id": {
      "type": "azure_blob",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "connection_string": "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net",
      "container_name": "my-logs",
      "blob_prefix": "blob/%F/",
      "blob_append_uuid": true,
      "buffer": null,
      "acknowledgements": null,
      "batch": null,
      "compression": "gzip",
      "encoding": {
        "codec": "json"
      },
      "healthcheck": null,
      "request": null,
      "blob_time_format": "%s"
    }
  }
}
[sinks.my_sink_id]
type = "azure_blob"
inputs = [ "my-source-or-transform-id" ]
connection_string = "DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net"
container_name = "my-logs"
blob_prefix = "blob/%F/"
blob_append_uuid = true
compression = "gzip"
blob_time_format = "%s"
[sinks.my_sink_id.encoding]
codec = "json"
---
sinks:
  my_sink_id:
    type: azure_blob
    inputs:
      - my-source-or-transform-id
    connection_string: DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net
    container_name: my-logs
    blob_prefix: blob/%F/
    blob_append_uuid: true
    buffer: null
    acknowledgements: null
    batch: null
    compression: gzip
    encoding:
      codec: json
    healthcheck: null
    request: null
    blob_time_format: "%s"
acknowledgements
common optional object
End-to-end acknowledgement settings.

acknowledgements.enabled
optional bool
Default: false
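A hedged sketch of enabling end-to-end acknowledgements for this sink (the sink ID is a placeholder and other required options are omitted):
[sinks.my_sink_id]
type = "azure_blob"
# wait for Azure Blob Storage to confirm writes before acknowledging upstream sources
acknowledgements.enabled = true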
batch
common optional object

batch.max_bytes
optional uint

batch.max_events
optional uint

batch.timeout_secs
optional float
Default: 300 (seconds)

blob_append_uuid
optional bool
Default: true
blob_prefix
common optional string template
This should end with a / if you want it to be the root Azure Storage “folder”.
Examples: "date/%F/", "date/%F/hour/%H/", "year=%Y/month=%m/day=%d/", "kubernetes/{{ metadata.cluster }}/{{ metadata.application_name }}/"
Default: blob/%F/

blob_time_format
optional string strftime
strftime specifiers are supported.
Default: %s
buffer
optional object

buffer.max_events
optional uint
Relevant when type = "memory".
Default: 500 (events)

buffer.type
optional string literal enum
Option | Description |
---|---|
disk | Stores the sink’s buffer on disk. This is less performant, but durable. Data will not be lost between restarts. Will also hold data in memory to enhance performance. WARNING: This may stall the sink if disk performance isn’t on par with the throughput. For comparison, AWS gp2 volumes are usually too slow for common cases. |
memory | Stores the sink’s buffer in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully. |
Default: memory

buffer.when_full
optional string literal enum
Option | Description |
---|---|
block | Applies back pressure when the buffer is full. This prevents data loss, but will cause data to pile up on the edge. |
drop_newest | Drops new data as it’s received. This data is lost. This should be used when performance is the highest priority. |
Default: block
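A hedged sketch of adjusting buffer behavior (the sink ID is a placeholder and the values are illustrative, not recommendations):
[sinks.my_sink_id]
type = "azure_blob"
buffer.type = "memory"
# hold up to 1000 events in memory before the buffer is considered full
buffer.max_events = 1000
# favor throughput over delivery: drop new events when the buffer fills
buffer.when_full = "drop_newest"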
compression
common optional string literal enum
The compression strategy used to compress the encoded event data before transmission.
Some cloud storage API clients and browsers will handle decompression transparently, so files may not always appear to be compressed depending how they are accessed.
Option | Description |
---|---|
gzip | Gzip standard DEFLATE compression. |
none | No compression. |
Default: gzip

connection_string
required string literal

container_name
required string literal

encoding
required object

encoding.codec
optional string literal enum
Option | Description |
---|---|
ndjson | Newline delimited list of JSON encoded events. |
text | Newline delimited list of messages generated from the message key from each event. |

encoding.except_fields
optional [string]

encoding.only_fields
optional [string]

encoding.timestamp_format
optional string literal enum
Option | Description |
---|---|
rfc3339 | Formats as a RFC3339 string |
unix | Formats as a unix timestamp |
Default: rfc3339
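A hedged sketch of shaping the encoded output (the field name is an assumption about your events, not part of this sink's defaults):
[sinks.my_sink_id]
type = "azure_blob"
encoding.codec = "ndjson"
# drop a noisy field and emit RFC 3339 timestamps
encoding.except_fields = ["_metadata"]
encoding.timestamp_format = "rfc3339"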
inputs
required [string]
A list of upstream source or transform IDs. Wildcards (*) are supported.
See configuration for more info.
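For example, wildcards let a single sink fan in from several components (the component IDs here are hypothetical):
[sinks.my_sink_id]
type = "azure_blob"
inputs = [ "parse_apache_*", "kubernetes_logs" ]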
request
optional object

request.adaptive_concurrency
optional object

request.adaptive_concurrency.decrease_ratio
optional float
Default: 0.9

request.adaptive_concurrency.ewma_alpha
optional float
Default: 0.7

request.adaptive_concurrency.rtt_deviation_scale
optional float
Default: 2

request.concurrency
optional uint

request.rate_limit_duration_secs
optional uint
The time window used by the rate_limit_num option.
Default: 1 (seconds)

request.rate_limit_num
optional uint
The maximum number of requests allowed within the rate_limit_duration_secs time window.
Default: 250

request.retry_attempts
optional uint
Default: 1.8446744073709552e+19

request.retry_initial_backoff_secs
optional uint
Default: 1 (seconds)

request.retry_max_duration_secs
optional uint
Default: 3600 (seconds)

request.timeout_secs
optional uint
Default: 60 (seconds)
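A hedged sketch of tuning request behavior (the values are illustrative assumptions, not recommendations):
[sinks.my_sink_id]
type = "azure_blob"
# allow slower responses from the Blob service before timing out
request.timeout_secs = 90
request.retry_initial_backoff_secs = 2
request.retry_max_duration_secs = 1800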
Telemetry
Metrics
Each metric is tagged with component_id; the deprecated component_name tag carries the same value as component_id.

buffer_byte_size
gauge

buffer_discarded_events_total
counter

buffer_events
gauge

buffer_received_event_bytes_total
counter

buffer_received_events_total
counter

buffer_sent_event_bytes_total
counter

buffer_sent_events_total
counter

component_received_event_bytes_total
counter

component_received_events_total
counter

component_sent_event_bytes_total
counter

component_sent_events_total
counter

events_discarded_total
counter

events_in_total
counter
Deprecated, use component_received_events_total instead.

http_error_response_total
counter

http_request_errors_total
counter

processed_bytes_total
counter

processing_errors_total
counter

utilization
gauge

How it works
Buffers and batches
This component buffers & batches data. Rather than treating buffering and batching as global concepts, Vector treats them as sink-specific concepts. This isolates sinks, ensuring service disruptions are contained and delivery guarantees are honored.
Batches are flushed when 1 of 2 conditions are met:
- The batch age meets or exceeds the configured timeout_secs.
- The batch size meets or exceeds the configured max_bytes or max_events.
Buffers are controlled via the buffer.* options.
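A hedged sketch of batch tuning for this sink (the sink ID and values are illustrative assumptions):
[sinks.my_sink_id]
type = "azure_blob"
# flush whichever limit is reached first
batch.timeout_secs = 300
batch.max_events = 10000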
Health checks
Require health checks
If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:
vector --config /etc/vector/vector.toml --require-healthy
Disable health checks
If you’d like to disable health checks for this sink you can set the healthcheck option to false.
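For example (a sketch assuming the boolean form described above; other required options are omitted):
[sinks.my_sink_id]
type = "azure_blob"
# skip the container availability check at startup
healthcheck = false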
Object naming
By default, Vector names your blobs differently based on whether or not the blobs are compressed.
Here is the format without compression:
<blob_prefix><timestamp>-<uuidv4>.log
Here’s an example blob name without compression:
blob/2021-06-23/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log
And here is the format with compression:
<blob_prefix><timestamp>-<uuidv4>.log.gz
An example blob name with compression:
blob/2021-06-23/1560886634-fddd7a0e-fad9-4f7e-9bce-00ae5debc563.log.gz
Vector appends a UUIDv4 token to ensure there are no name conflicts in the unlikely event that two Vector instances are writing data at the same time.
You can control the resulting name via the blob_prefix, blob_time_format, and blob_append_uuid options.
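As a hedged illustration of how these options combine (the prefix and values are assumptions, not defaults):
[sinks.my_sink_id]
type = "azure_blob"
# should produce keys similar to logs/2021-06-23/10/1624442400.log.gz
blob_prefix = "logs/%F/%H/"
blob_time_format = "%s"
blob_append_uuid = false
compression = "gzip"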
Partitioning
Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:
[sinks.my-sink]
dynamic_option = "application={{ application_id }}"
In the above example, the application_id for each event will be used to partition outgoing data.
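For this sink, blob_prefix is a template option, so the same idea can be used to partition blobs by an event field (a sketch; the application_id field is assumed to exist on your events):
[sinks.my_sink_id]
type = "azure_blob"
# one Azure Storage "folder" per application, then per day
blob_prefix = "{{ application_id }}/%F/"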
Rate limits & adaptive concurrency
Adaptive Request Concurrency (ARC)
Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post to learn more.
We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.
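If you do want to adjust how ARC reacts, the adaptive_concurrency parameters listed in the reference above can be tuned. This sketch simply restates the documented defaults (changing them is rarely necessary):
[sinks.my_sink_id]
type = "azure_blob"
# fraction the concurrency limit is multiplied by when backing off
request.adaptive_concurrency.decrease_ratio = 0.9
# smoothing factor applied to observed round-trip times
request.adaptive_concurrency.ewma_alpha = 0.7
# how much round-trip-time deviation is tolerated before backing off
request.adaptive_concurrency.rtt_deviation_scale = 2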
Static concurrency
If Adaptive Request Concurrency is not for you, you can manually set static concurrency limits by specifying an integer for request.concurrency:
[sinks.my-sink]
request.concurrency = 10
Rate limits
In addition to limiting request concurrency, you can also limit the overall request throughput via the request.rate_limit_duration_secs and request.rate_limit_num options.
[sinks.my-sink]
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
These will apply to both adaptive and fixed request.concurrency values.