Splunk HEC logs
Deliver log data to Splunk’s HTTP Event Collector
Alias
This component was previously called the splunk_hec sink. Make sure to update your Vector configuration to accommodate the name change:
[sinks.my_splunk_hec_logs_sink]
+type = "splunk_hec_logs"
-type = "splunk_hec"
Configuration
Example configurations
{
"sinks": {
"my_sink_id": {
"type": "splunk_hec_logs",
"inputs": [
"my-source-or-transform-id"
],
"acknowledgements": null,
"endpoint": "https://http-inputs-hec.splunkcloud.com",
"host_key": "hostname",
"indexed_fields": [
"field1"
],
"compression": "none",
"encoding": {
"codec": "json"
},
"healthcheck": null,
"default_token": "${SPLUNK_HEC_TOKEN}"
}
}
}
[sinks.my_sink_id]
type = "splunk_hec_logs"
inputs = [ "my-source-or-transform-id" ]
endpoint = "https://http-inputs-hec.splunkcloud.com"
host_key = "hostname"
indexed_fields = [ "field1" ]
compression = "none"
default_token = "${SPLUNK_HEC_TOKEN}"
[sinks.my_sink_id.encoding]
codec = "json"
---
sinks:
my_sink_id:
type: splunk_hec_logs
inputs:
- my-source-or-transform-id
acknowledgements: null
endpoint: https://http-inputs-hec.splunkcloud.com
host_key: hostname
indexed_fields:
- field1
compression: none
encoding:
codec: json
healthcheck: null
default_token: ${SPLUNK_HEC_TOKEN}
{
"sinks": {
"my_sink_id": {
"type": "splunk_hec_logs",
"inputs": [
"my-source-or-transform-id"
],
"acknowledgements": null,
"endpoint": "https://http-inputs-hec.splunkcloud.com",
"host_key": "hostname",
"index": "{{ host }}",
"indexed_fields": [
"field1"
],
"source": "{{ file }}",
"buffer": null,
"batch": null,
"compression": "none",
"encoding": {
"codec": "json"
},
"healthcheck": null,
"request": null,
"tls": null,
"proxy": null,
"default_token": "${SPLUNK_HEC_TOKEN}",
"sourcetype": "{{ sourcetype }}"
}
}
}
[sinks.my_sink_id]
type = "splunk_hec_logs"
inputs = [ "my-source-or-transform-id" ]
endpoint = "https://http-inputs-hec.splunkcloud.com"
host_key = "hostname"
index = "{{ host }}"
indexed_fields = [ "field1" ]
source = "{{ file }}"
compression = "none"
default_token = "${SPLUNK_HEC_TOKEN}"
sourcetype = "{{ sourcetype }}"
[sinks.my_sink_id.encoding]
codec = "json"
---
sinks:
my_sink_id:
type: splunk_hec_logs
inputs:
- my-source-or-transform-id
acknowledgements: null
endpoint: https://http-inputs-hec.splunkcloud.com
host_key: hostname
index: "{{ host }}"
indexed_fields:
- field1
source: "{{ file }}"
buffer: null
batch: null
compression: none
encoding:
codec: json
healthcheck: null
request: null
tls: null
proxy: null
default_token: ${SPLUNK_HEC_TOKEN}
sourcetype: "{{ sourcetype }}"
acknowledgements
common optional object. Controls the sink's acknowledgement settings.

acknowledgements.enabled
common optional bool. Default: false

acknowledgements.indexer_acknowledgements_enabled
optional bool. Default: true

acknowledgements.max_pending_acks
optional uint. Default: 1e+06

acknowledgements.query_interval
optional uint. Default: 10 (seconds)

acknowledgements.retry_limit
optional uint. Default: 30

batch
optional object

batch.max_bytes
common optional uint

batch.max_events
common optional uint

batch.timeout_secs
common optional float. Default: 1 (seconds)

buffer
optional object

buffer.max_events
common optional uint. Relevant when type = "memory". Default: 500 (events)

buffer.max_size
required uint. The maximum size of the buffer on the disk. Must be at least 128 megabytes (134217728 bytes). Note that during normal disk buffer operation, the disk buffer can create one additional 128 megabyte block, so the minimum disk space required is actually 256 megabytes. Relevant when type = "disk".

buffer.type
common optional string literal enum. Default: memory
Option | Description |
---|---|
disk | Stores the sink’s buffer on disk. This is less performant, but durable. Data will not be lost between restarts. Will also hold data in memory to enhance performance. WARNING: This may stall the sink if disk performance isn’t on par with the throughput. For comparison, AWS gp2 volumes are usually too slow for common cases. |
memory | Stores the sink’s buffer in memory. This is more performant, but less durable. Data will be lost if Vector is restarted forcefully. |

buffer.when_full
optional string literal enum. Default: block
Option | Description |
---|---|
block | Applies back pressure when the buffer is full. This prevents data loss, but will cause data to pile up on the edge. |
drop_newest | Drops new data as it’s received. This data is lost. This should be used when performance is the highest priority. |

compression
common optional string literal enum. Default: none
The compression strategy used to compress the encoded event data before transmission. Some cloud storage API clients and browsers will handle decompression transparently, so files may not always appear to be compressed depending on how they are accessed.
Option | Description |
---|---|
gzip | Gzip standard DEFLATE compression. |
none | No compression. |

default_token
required string literal

encoding
required object. Configures the encoding specific sink behavior.
Note: When data in encoding is malformed, currently only a very generic error (“data did not match any variant of untagged enum EncodingConfig”) is reported. Follow this issue to track progress on improving these error messages.

encoding.codec
required string literal enum
Option | Description |
---|---|
json | JSON encoded event. |
text | The message field from the event. |

encoding.except_fields
optional [string]

encoding.only_fields
optional [string]

encoding.timestamp_format
optional string literal enum. Default: rfc3339
Option | Description |
---|---|
rfc3339 | Formats as a RFC3339 string |
unix | Formats as a unix timestamp |

endpoint
required string literal

healthcheck
common optional object

healthcheck.enabled
common optional bool. Default: true

host_key
common optional string literal. Overrides the global host_key option.

index
optional string template

indexed_fields
common optional [string]

inputs
required [string]. A list of upstream source or transform IDs. Wildcards (*) are supported. See configuration for more info.

proxy
optional object

proxy.http
optional string literal

proxy.https
optional string literal

proxy.no_proxy
optional [string]. A list of hosts to avoid proxying. Allowed patterns here include:
Pattern | Example match |
---|---|
Domain names | example.com matches requests to example.com |
Wildcard domains | .example.com matches requests to example.com and its subdomains |
IP addresses | 127.0.0.1 matches requests to 127.0.0.1 |
CIDR blocks | 192.168.0.0/16 matches requests to any IP address in this range |
Splat | * matches all hosts |

request
optional object

request.adaptive_concurrency
optional object

request.adaptive_concurrency.decrease_ratio
optional float. Default: 0.9

request.adaptive_concurrency.ewma_alpha
optional float. Default: 0.7

request.adaptive_concurrency.rtt_deviation_scale
optional float. Default: 2

request.concurrency
common optional uint

request.rate_limit_duration_secs
common optional uint. The time window used by the rate_limit_num option. Default: 1 (seconds)

request.rate_limit_num
common optional uint. The maximum number of requests allowed within the rate_limit_duration_secs time window. Default: 9.223372036854776e+18

request.retry_attempts
optional uint. Default: 1.8446744073709552e+19

request.retry_initial_backoff_secs
optional uint. Default: 1 (seconds)

request.retry_max_duration_secs
optional uint. Default: 3600 (seconds)

request.timeout_secs
common optional uint. Default: 60 (seconds)

source
optional string template

sourcetype
optional string template

tls
optional object

tls.ca_file
optional string literal

tls.crt_file
common optional string literal. If this is set, key_file must also be set.

tls.key_file
common optional string literal. If this is set, crt_file must also be set.

tls.key_pass
optional string literal. This has no effect unless key_file is set.

tls.verify_certificate
optional bool. Default: true. If true (the default), Vector will validate the TLS certificate of the remote host.

tls.verify_hostname
optional bool. Default: true. If true (the default), Vector will validate the configured remote host name against the remote host’s TLS certificate. Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
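Putting several of these options together, the following is a sketch of a more heavily tuned sink rather than a set of recommendations: the option names come from the reference above, while the sink ID and every numeric value are illustrative assumptions.
[sinks.my_splunk_hec_logs_sink]
type = "splunk_hec_logs"
inputs = [ "my-source-or-transform-id" ]
endpoint = "https://http-inputs-hec.splunkcloud.com"
default_token = "${SPLUNK_HEC_TOKEN}"
compression = "gzip"                  # trade a little CPU for smaller request bodies

[sinks.my_splunk_hec_logs_sink.encoding]
codec = "json"

# Flush a batch after 1000 events or 5 seconds, whichever comes first.
[sinks.my_splunk_hec_logs_sink.batch]
max_events = 1000
timeout_secs = 5

# Durable disk buffer; apply backpressure instead of dropping when full.
[sinks.my_splunk_hec_logs_sink.buffer]
type = "disk"
max_size = 268435456                  # bytes; must be at least 134217728
when_full = "block"

# Cap request throughput and allow slower responses before timing out.
[sinks.my_splunk_hec_logs_sink.request]
rate_limit_duration_secs = 1
rate_limit_num = 100
timeout_secs = 120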
Telemetry
Metrics
Each metric is tagged with component_id; a deprecated tag carrying the same value as component_id is also emitted.
Metric | Type | Notes |
---|---|---|
buffer_byte_size | gauge | |
buffer_discarded_events_total | counter | |
buffer_events | gauge | |
buffer_received_event_bytes_total | counter | |
buffer_received_events_total | counter | |
buffer_sent_event_bytes_total | counter | |
buffer_sent_events_total | counter | |
component_discarded_events_total | counter | |
component_errors_total | counter | |
component_received_event_bytes_total | counter | |
component_received_events_count | histogram | |
component_received_events_total | counter | |
component_sent_bytes_total | counter | |
component_sent_event_bytes_total | counter | |
component_sent_events_total | counter | |
events_in_total | counter | Deprecated; use component_received_events_total instead. |
events_out_total | counter | Deprecated; use component_sent_events_total instead. |
http_request_errors_total | counter | |
processed_events_total | counter | Deprecated; use the component_received_events_total and component_sent_events_total metrics instead. |
requests_received_total | counter | |
utilization | gauge | |
How it works
Buffers and batches
This component buffers and batches data. Rather than treating buffering and batching as global concepts, Vector treats them as sink-specific concepts. This isolates sinks, ensuring that service disruptions are contained and delivery guarantees are honored.
Batches are flushed when one of two conditions is met:
- The batch age meets or exceeds the configured timeout_secs.
- The batch size meets or exceeds the configured max_bytes or max_events.
Buffers are controlled via the buffer.* options.
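To make those conditions concrete, here is a minimal sketch; the sink ID and the numbers are assumptions chosen for illustration, not defaults. With this configuration, a batch is flushed once it holds 500 events or 1,000,000 bytes of encoded data, or once it is 2 seconds old, whichever comes first.
[sinks.my_splunk_hec_logs_sink.batch]
max_events = 500       # flush at 500 events...
max_bytes = 1000000    # ...or ~1 MB of encoded data...
timeout_secs = 2       # ...or after 2 seconds, whichever comes first

[sinks.my_splunk_hec_logs_sink.buffer]
type = "memory"        # in-memory buffer (the default)
max_events = 500       # hold at most 500 events before when_full applies
when_full = "block"    # apply backpressure rather than dropping events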
Health checks
Require health checks
If you’d like to exit immediately upon a health check failure, you can pass the --require-healthy flag:
vector --config /etc/vector/vector.toml --require-healthy
Disable health checks
If you’d like to disable health checks for this sink, you can set the healthcheck option to false.
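For example, a minimal sketch using the healthcheck.enabled option from the reference above (the sink ID is illustrative):
[sinks.my_splunk_hec_logs_sink]
type = "splunk_hec_logs"
# ...other splunk_hec_logs options...

[sinks.my_splunk_hec_logs_sink.healthcheck]
enabled = false   # skip the startup health check for this sink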
Indexer Acknowledgements
To provide more accurate end-to-end acknowledgements, this sink will automatically integrate (unless explicitly disabled) with Splunk HEC indexer acknowledgements if the provided Splunk HEC token has the feature enabled. In other words, if ackIDs are present in Splunk HEC responses, this sink will store and query for the status of those ackIDs to confirm that data has been successfully delivered. Upstream sources with the Vector end-to-end acknowledgements feature enabled will wait for this sink to confirm delivery of events before acknowledging receipt.
The Splunk channel required for indexer acknowledgements is created using a randomly generated UUID. By default, this sink uses the recommended Splunk indexer acknowledgements client behavior: querying for ack statuses every 10 seconds for a maximum of 30 attempts (5 minutes) per ackID.
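If you need to adjust or opt out of that behavior, the acknowledgements options documented above can be set per sink. A sketch with illustrative values, not recommendations:
[sinks.my_splunk_hec_logs_sink.acknowledgements]
indexer_acknowledgements_enabled = true   # set to false to disable the integration entirely
query_interval = 10                       # seconds between ack status queries
retry_limit = 30                          # maximum number of ack status queries per ackID
max_pending_acks = 1000000                # maximum number of pending acknowledgements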
Partitioning
Vector supports dynamic configuration values through a simple template syntax. If an option supports templating, it will be noted with a badge and you can use event fields to create dynamic values. For example:
[sinks.my-sink]
dynamic_option = "application={{ application_id }}"
In the above example, the application_id for each event will be used to partition outgoing data.
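For this sink, the index, source, and sourcetype options accept templates (see the reference above). A sketch, assuming your events carry an environment field; that field name, like the sink ID, is a hypothetical chosen for illustration:
[sinks.my_splunk_hec_logs_sink]
type = "splunk_hec_logs"
# ...other splunk_hec_logs options...
index = "vector-{{ environment }}"   # route each event to a per-environment index
sourcetype = "{{ sourcetype }}"      # take the sourcetype from a field on the event
source = "{{ file }}"                # e.g. the originating file path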
Rate limits & adaptive concurrency
Adaptive Request Concurrency (ARC)
Adaptive Request Concurrency is a feature of Vector that does away with static concurrency limits and automatically optimizes HTTP concurrency based on downstream service responses. The underlying mechanism is a feedback loop inspired by TCP congestion control algorithms. Check out the announcement blog post to learn more.
We highly recommend enabling this feature as it improves performance and reliability of Vector and the systems it communicates with. As such, we have made it the default, and no further configuration is required.
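No configuration is required since ARC is the default, but if you prefer to state it explicitly, here is a sketch; it assumes your Vector version accepts the string value "adaptive" for request.concurrency, and the tuning values shown are the defaults documented in the reference above:
[sinks.my-sink]
request.concurrency = "adaptive"   # explicit, though adaptive is already the default
# ARC tuning knobs, shown here at their documented defaults:
request.adaptive_concurrency.decrease_ratio = 0.9
request.adaptive_concurrency.ewma_alpha = 0.7
request.adaptive_concurrency.rtt_deviation_scale = 2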
Static concurrency
If Adaptive Request Concurrency is not for you, you can manually set static concurrency
limits by specifying an integer for request.concurrency
:
[sinks.my-sink]
request.concurrency = 10
Rate limits
In addition to limiting request concurrency, you can also limit the overall request throughput via the request.rate_limit_duration_secs and request.rate_limit_num options.
[sinks.my-sink]
request.rate_limit_duration_secs = 1
request.rate_limit_num = 10
These will apply to both adaptive and fixed request.concurrency values.
Retry policy
Vector retries failed requests. You can control the retry behavior with the request.retry_attempts, request.retry_initial_backoff_secs, and request.retry_max_duration_secs options.
Splunk HEC Channel Header
This sink includes an X-Splunk-Request-Channel header with a randomly generated UUID as the channel value. Splunk requires a channel value when using indexer acknowledgements, but also accepts channel values when indexer acknowledgements is disabled. Thus, the channel value is included regardless of indexer acknowledgement settings.