HTTP Server
Receive observability data from an HTTP client request
Alias
This component was previously called the http
source. Make sure to update your
Vector configuration to accommodate the name change:
[sources.my_http_server_source]
+type = "http_server"
-type = "http"
Configuration
Example configurations
{
"sources": {
"my_source_id": {
"type": "http_server",
"address": "0.0.0.0:80"
}
}
}
[sources.my_source_id]
type = "http_server"
address = "0.0.0.0:80"
sources:
my_source_id:
type: http_server
address: 0.0.0.0:80
{
"sources": {
"my_source_id": {
"type": "http_server",
"address": "0.0.0.0:80",
"encoding": "binary",
"headers": [
"User-Agent"
],
"host_key": "hostname",
"method": "POST",
"path": "/",
"path_key": "path",
"query_parameters": [
"application"
],
"response_code": 200,
"strict_path": true
}
}
}
[sources.my_source_id]
type = "http_server"
address = "0.0.0.0:80"
encoding = "binary"
headers = [ "User-Agent" ]
host_key = "hostname"
method = "POST"
path = "/"
path_key = "path"
query_parameters = [ "application" ]
response_code = 200
strict_path = true
sources:
my_source_id:
type: http_server
address: 0.0.0.0:80
encoding: binary
headers:
- User-Agent
host_key: hostname
method: POST
path: /
path_key: path
query_parameters:
- application
response_code: 200
strict_path: true
acknowledgements
optional object
Controls how acknowledgements are handled by this source.
This setting is deprecated in favor of enabling acknowledgements at the global or sink level.
Enabling or disabling acknowledgements at the source level has no effect on acknowledgement behavior.
See End-to-end Acknowledgements for more information on how event acknowledgement is handled.
acknowledgements.enabled
optional bool
address
required string literal
The socket address to listen for connections on. It must include a port.
auth
optional object
auth.password
required string literal
auth.username
required string literal
decoding
optional object
decoding.avro
required object
codec = "avro"
decoding.avro.schema
required string literal
The Avro schema definition.
Please note that the following apache_avro::types::Value variants are currently not supported:
Date
Decimal
Duration
Fixed
TimeMillis
decoding.avro.strip_schema_id_prefix
required bool
decoding.codec
required string literal enum
Option | Description |
---|---|
avro | Decodes the raw bytes as an Apache Avro message. |
bytes | Uses the raw bytes as-is. |
gelf | Decodes the raw bytes as a GELF message. This codec is experimental for the following reason: the GELF specification is stricter than the actual Graylog receiver. Vector's decoder currently adheres more closely to the GELF spec, with the exception that some characters are allowed in field names. Other GELF codecs, such as Loki's, use a Go SDK that is maintained by Graylog and is much more relaxed than the GELF spec. Going forward, Vector will use that Go SDK as the reference implementation, which means the codec may continue to relax its enforcement of the specification. |
influxdb | Decodes the raw bytes as an InfluxDB Line Protocol message. |
json | Decodes the raw bytes as JSON. |
native | Decodes the raw bytes as the native Protocol Buffers format. This codec is experimental. |
native_json | Decodes the raw bytes as the native JSON format. This codec is experimental. |
protobuf | Decodes the raw bytes as protobuf. |
syslog | Decodes the raw bytes as a Syslog message. Decodes either as the RFC 3164-style format ("old" style) or the RFC 5424-style format ("new" style, which includes structured data). |
vrl | Decodes the raw bytes as a string and passes it as input to a VRL program. |
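Choosing a codec is a single option on the source. For example, to decode request bodies as JSON, a sketch mirroring the example configurations above:

```yaml
sources:
  my_source_id:
    type: http_server
    address: 0.0.0.0:80
    decoding:
      codec: json
```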
decoding.gelf
optional object
codec = "gelf"
decoding.gelf.lossy
optional bool
Determines whether to replace invalid UTF-8 sequences instead of failing. When true, invalid UTF-8 sequences are replaced with the U+FFFD REPLACEMENT CHARACTER.
true
decoding.influxdb
optional object
codec = "influxdb"
decoding.influxdb.lossy
optional bool
Determines whether to replace invalid UTF-8 sequences instead of failing. When true, invalid UTF-8 sequences are replaced with the U+FFFD REPLACEMENT CHARACTER.
true
decoding.json
optional object
codec = "json"
decoding.json.lossy
optional bool
Determines whether to replace invalid UTF-8 sequences instead of failing. When true, invalid UTF-8 sequences are replaced with the U+FFFD REPLACEMENT CHARACTER.
true
decoding.native_json
optional object
codec = "native_json"
decoding.native_json.lossy
optional bool
Determines whether to replace invalid UTF-8 sequences instead of failing. When true, invalid UTF-8 sequences are replaced with the U+FFFD REPLACEMENT CHARACTER.
true
decoding.protobuf
optional object
codec = "protobuf"
decoding.protobuf.desc_file
optional string literal
decoding.protobuf.message_type
optional string literal
decoding.syslog
optional object
codec = "syslog"
decoding.syslog.lossy
optional bool
Determines whether to replace invalid UTF-8 sequences instead of failing. When true, invalid UTF-8 sequences are replaced with the U+FFFD REPLACEMENT CHARACTER.
true
decoding.vrl
required object
codec = "vrl"
decoding.vrl.source
required string literal
The VRL program to run against incoming data. The final contents of the . target are used as the decoding result. A compilation error, or use of abort in the program, results in a decoding error.
decoding.vrl.timezone
optional string literal
The name of the timezone to apply to timestamp conversions that do not contain an explicit time zone. The time zone name may be any name in the TZ database, or local to indicate system local time. If not set, local is used.
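A hedged sketch of a VRL decoder configuration; the one-line program here is purely illustrative (it just adds a tag field to each decoded event), and the final contents of `.` become the decoding result:

```yaml
sources:
  my_source_id:
    type: http_server
    address: 0.0.0.0:80
    decoding:
      codec: vrl
      vrl:
        source: '.decoder = "vrl"'
        timezone: local
```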
encoding
optional string literal enum
The expected encoding of received data.
For json and ndjson encodings, the fields of the JSON objects are output as separate fields.
Option | Description |
---|---|
binary | Binary. |
json | JSON. |
ndjson | Newline-delimited JSON. |
text | Plaintext. |
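To make the json/ndjson behavior concrete, here is a small illustrative Python sketch (not Vector code) of how a newline-delimited JSON body splits into one event per line, with each object's top-level fields becoming separate event fields:

```python
import json

def split_ndjson(body: str) -> list[dict]:
    """Parse a newline-delimited JSON body into one field map per event.

    Blank lines are skipped; each JSON object's top-level fields become
    separate fields on the resulting event.
    """
    return [json.loads(line) for line in body.splitlines() if line.strip()]

body = '{"message": "Hello world", "level": "info"}\n{"message": "Bye"}\n'
events = split_ndjson(body)
# two events; the first carries separate "message" and "level" fields
```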
framing
optional object
Framing configuration.
Framing handles how events are separated when encoded in a raw byte form, where each event is a frame that must be prefixed, or delimited, in a way that marks where an event begins and ends within the byte stream.
framing.character_delimited
required object
method = "character_delimited"
framing.character_delimited.delimiter
required ascii_char
framing.character_delimited.max_length
optional uint
The maximum length of the byte buffer.
This length does not include the trailing delimiter.
By default, there is no maximum length enforced. If events are malformed, this can lead to additional resource usage as events continue to be buffered in memory, and can potentially lead to memory exhaustion in extreme cases.
If there is a risk of processing malformed data, such as logs with user-controlled input, consider setting the maximum length to a reasonably large value as a safety net. This ensures that processing is not actually unbounded.
framing.chunked_gelf
optional object
method = "chunked_gelf"
framing.chunked_gelf.decompression
optional string literal enum
Option | Description |
---|---|
Auto | Automatically detect the decompression method based on the magic bytes of the message. |
Gzip | Use Gzip decompression. |
None | Do not decompress the message. |
Zlib | Use Zlib decompression. |
Auto
framing.chunked_gelf.max_length
optional uint
The maximum length of a single GELF message, in bytes. Messages longer than this length are dropped. If this option is not set, the decoder does not limit the length of messages and the per-message memory is unbounded.
Note that a message can be composed of multiple chunks, and this limit is applied to the whole message, not to individual chunks.
This limit takes into account only the message's payload; the GELF header bytes are excluded from the calculation. The message's payload is the concatenation of all the chunks' payloads.
framing.chunked_gelf.pending_messages_limit
optional uint
framing.chunked_gelf.timeout_secs
optional float
5
framing.length_delimited
required object
method = "length_delimited"
framing.length_delimited.length_field_is_big_endian
optional bool
true
framing.length_delimited.length_field_length
optional uint
4
framing.length_delimited.length_field_offset
optional uint
framing.method
required string literal enum
Option | Description |
---|---|
bytes | Byte frames are passed through as-is according to the underlying I/O boundaries (for example, split between messages or stream segments). |
character_delimited | Byte frames which are delimited by a chosen character. |
chunked_gelf | Byte frames which are chunked GELF messages. |
length_delimited | Byte frames which are prefixed by an unsigned big-endian 32-bit integer indicating the length. |
newline_delimited | Byte frames which are delimited by a newline character. |
octet_counting | Byte frames according to the octet counting format. |
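As an illustrative sketch (not Vector code), a client could produce length_delimited frames matching the defaults documented above (a 4-byte unsigned big-endian length prefix at offset 0):

```python
import struct

def frame_length_delimited(payload: bytes) -> bytes:
    """Prefix a payload with an unsigned big-endian 32-bit length,
    matching the length_delimited defaults (4-byte length field, offset 0)."""
    return struct.pack(">I", len(payload)) + payload

framed = frame_length_delimited(b'{"message": "Hello world"}')
# a 4-byte length prefix (here 26 = 0x0000001a) followed by the payload
```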
framing.newline_delimited
optional object
method = "newline_delimited"
framing.newline_delimited.max_length
optional uint
The maximum length of the byte buffer.
This length does not include the trailing delimiter.
By default, there is no maximum length enforced. If events are malformed, this can lead to additional resource usage as events continue to be buffered in memory, and can potentially lead to memory exhaustion in extreme cases.
If there is a risk of processing malformed data, such as logs with user-controlled input, consider setting the maximum length to a reasonably large value as a safety net. This ensures that processing is not actually unbounded.
framing.octet_counting
optional object
method = "octet_counting"
framing.octet_counting.max_length
optional uint
headers
optional [string]
A list of HTTP headers to include in the log event.
Accepts the wildcard (*) character for headers matching a specified pattern.
Specifying "*" results in all headers being included in the log event.
These headers are not included in the JSON payload if a field with a conflicting name exists.
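For example, to capture User-Agent plus every header matching a prefix pattern (a sketch; the X-* pattern is illustrative):

```yaml
sources:
  my_source_id:
    type: http_server
    address: 0.0.0.0:80
    headers:
      - User-Agent
      - "X-*"
```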
host_key
optional string literal
keepalive
optional object
keepalive.max_connection_age_jitter_factor
optional float
The factor by which to jitter the max_connection_age_secs value.
A value of 0.1 means that the actual duration is between 90% and 110% of the specified maximum duration.
0.1
keepalive.max_connection_age_secs
optional uint
The maximum amount of time a connection may exist before it is closed by sending a Connection: close header on the HTTP response. Set this to a large value like 100000000 to "disable" this feature.
Only applies to HTTP/0.9, HTTP/1.0, and HTTP/1.1 requests.
A random jitter configured by max_connection_age_jitter_factor is added to the specified duration to spread out connection storms.
300
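To make the jitter arithmetic concrete: with the defaults (300 seconds, jitter factor 0.1), the effective connection age falls between 270 and 330 seconds. An illustrative sketch of the bounds calculation (not Vector's implementation):

```python
def connection_age_bounds(max_age_secs: float, jitter_factor: float) -> tuple[float, float]:
    """Return the (low, high) window for the effective connection age:
    a jitter factor of 0.1 yields 90%..110% of the configured maximum."""
    return (max_age_secs * (1 - jitter_factor),
            max_age_secs * (1 + jitter_factor))

low, high = connection_age_bounds(300, 0.1)
# with the defaults, connections close somewhere between 270 and 330 seconds
```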
(seconds)
method
optional string literal enum
Option | Description |
---|---|
DELETE | HTTP DELETE method. |
GET | HTTP GET method. |
HEAD | HTTP HEAD method. |
OPTIONS | HTTP OPTIONS method. |
PATCH | HTTP PATCH method. |
POST | HTTP POST method. |
PUT | HTTP PUT method. |
POST
path_key
optional string literal
path
query_parameters
optional [string]
A list of URL query parameters to include in the log event.
Accepts the wildcard (*) character for query parameters matching a specified pattern.
Specifying "*" results in all query parameters being included in the log event.
These override any values included in the body with conflicting names.
response_code
optional uint
200
strict_path
optional bool
Whether or not to treat the configured path as an absolute path.
If set to true, only requests using the exact URL path specified in path are accepted. Otherwise, requests sent to a URL path that starts with the value of path are accepted.
With strict_path set to false and path set to "", the configured HTTP source accepts requests from any URL path.
true
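For example, a sketch that accepts any request whose path starts with /logs (prefix matching), rather than requiring the exact path:

```yaml
sources:
  my_source_id:
    type: http_server
    address: 0.0.0.0:80
    path: /logs
    strict_path: false  # /logs, /logs/event712, ... are all accepted
```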
tls
optional object
tls.alpn_protocols
optional [string]
Sets the list of supported ALPN protocols.
Declare the supported ALPN protocols, which are used during negotiation with the peer. They are prioritized in the order in which they are defined.
tls.ca_file
optional string literal
Absolute path to an additional CA certificate file.
The certificate must be in the DER or PEM (X.509) format. Additionally, the certificate can be provided as an inline string in PEM format.
tls.crt_file
optional string literal
Absolute path to a certificate file used to identify this server.
The certificate must be in DER, PEM (X.509), or PKCS#12 format. Additionally, the certificate can be provided as an inline string in PEM format.
If this is set, and is not a PKCS#12 archive, key_file must also be set.
tls.enabled
optional bool
Whether or not to require TLS for incoming or outgoing connections.
When enabled and used for incoming connections, an identity certificate is also required. See tls.crt_file for more information.
tls.key_file
optional string literal
Absolute path to a private key file used to identify this server.
The key must be in DER or PEM (PKCS#8) format. Additionally, the key can be provided as an inline string in PEM format.
tls.key_pass
optional string literal
Passphrase used to unlock the encrypted key file.
This has no effect unless key_file is set.
tls.server_name
optional string literal
Server name to use when using Server Name Indication (SNI).
Only relevant for outgoing connections.
tls.verify_certificate
optional bool
Enables certificate verification. For components that create a server, this requires that the client connections have a valid client certificate. For components that initiate requests, this validates that the upstream has a valid certificate.
If enabled, certificates must not be expired and must be issued by a trusted issuer. This verification operates in a hierarchical manner, checking that the leaf certificate (the certificate presented by the client/server) is not only valid, but that the issuer of that certificate is also valid, and so on, until the verification process reaches a root certificate.
Do NOT set this to false unless you understand the risks of not verifying the validity of certificates.
tls.verify_hostname
optional bool
Enables hostname verification.
If enabled, the hostname used to connect to the remote host must be present in the TLS certificate presented by the remote host, either as the Common Name or as an entry in the Subject Alternative Name extension.
Only relevant for outgoing connections.
Do NOT set this to false unless you understand the risks of not verifying the remote hostname.
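Putting the server-side TLS options together, a minimal sketch (the certificate and key paths are placeholders):

```yaml
sources:
  my_source_id:
    type: http_server
    address: 0.0.0.0:443
    tls:
      enabled: true
      crt_file: /path/to/server.crt  # placeholder
      key_file: /path/to/server.key  # placeholder
```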
Outputs
<component_id>
Output Data
Logs
Structured
An individual event from an application/json request.
path: The HTTP path the event was received from; the field name is set by the path_key configuration setting. Examples: /, /logs/event712
timestamp: The time the event was received. Example: 2020-10-10T17:07:36.452332Z
Text
An individual line from a text/plain request.
message: The raw line from the request body. Example: Hello world
path: The HTTP path the event was received from; the field name is set by the path_key configuration setting. Examples: /, /logs/event712
source_type: The name of the source type. Value: http_server
timestamp: The time the event was received. Example: 2020-10-10T17:07:36.452332Z
Telemetry
Metrics
component_discarded_events_total (counter): The number of events dropped by this component. The intentional tag is true if the events were discarded intentionally (as by a filter transform), or false if due to an error.
component_errors_total (counter)
component_received_bytes_total (counter)
component_received_event_bytes_total (counter)
component_received_events_count (histogram): A histogram of the number of events passed in each internal batch in Vector's internal topology. Note that this is separate from sink-level batching. It is mostly useful for debugging low-level performance issues in Vector caused by small internal batches.
component_received_events_total (counter)
component_sent_event_bytes_total (counter)
component_sent_events_total (counter)
http_server_handler_duration_seconds (histogram)
http_server_requests_received_total (counter)
http_server_responses_sent_total (counter)
source_lag_time_seconds (histogram)
Examples
text/plain
Given this event...POST / HTTP/1.1
Content-Type: text/plain
User-Agent: my-service/v2.1
X-Forwarded-For: my-host.local
Hello world
sources:
my_source_id:
type: http_server
address: 0.0.0.0:80
encoding: text
headers:
- User-Agent
[sources.my_source_id]
type = "http_server"
address = "0.0.0.0:80"
encoding = "text"
headers = [ "User-Agent" ]
{
"sources": {
"my_source_id": {
"type": "http_server",
"address": "0.0.0.0:80",
"encoding": "text",
"headers": [
"User-Agent"
]
}
}
}
[{"log":{"User-Agent":"my-service/v2.1","host":"my-host.local","message":"Hello world","path":"/","source_type":"http_server","timestamp":"2020-10-10T17:07:36.452332Z"}}]
application/json
Given this event...POST /events HTTP/1.1
Content-Type: application/json
User-Agent: my-service/v2.1
X-Forwarded-For: my-host.local
{"key": "val"}
sources:
my_source_id:
type: http_server
address: 0.0.0.0:80
encoding: json
headers:
- User-Agent
path_key: vector_http_path
[sources.my_source_id]
type = "http_server"
address = "0.0.0.0:80"
encoding = "json"
headers = [ "User-Agent" ]
path_key = "vector_http_path"
{
"sources": {
"my_source_id": {
"type": "http_server",
"address": "0.0.0.0:80",
"encoding": "json",
"headers": [
"User-Agent"
],
"path_key": "vector_http_path"
}
}
}
[{"log":{"User-Agent":"my-service/v2.1","host":"my-host.local","key":"val","source_type":"http_server","timestamp":"2020-10-10T17:07:36.452332Z"}}]
How it works
Decompression
Request bodies are decompressed according to the Content-Encoding header. Supported algorithms are gzip, deflate, snappy, and zstd.
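As an illustrative client-side sketch (Python, not Vector code): a sender compresses the body and advertises the algorithm via Content-Encoding, and the source decompresses it before decoding:

```python
import gzip
import json

def compressed_request(event: dict) -> tuple[bytes, dict]:
    """Serialize an event to JSON and gzip it, returning the request body
    plus the headers a client would send so the server can decompress it."""
    body = gzip.compress(json.dumps(event).encode("utf-8"))
    headers = {"Content-Type": "application/json",
               "Content-Encoding": "gzip"}
    return body, headers

body, headers = compressed_request({"key": "val"})
# the receiving side reverses this: decompressing yields the original JSON
```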
Transport Layer Security (TLS)
TLS behavior can be adjusted via the tls.* options and/or via an OpenSSL configuration file. The file location defaults to /usr/local/ssl/openssl.cnf or can be specified with the OPENSSL_CONF environment variable.