Tag cardinality limit

Limit the cardinality of tags on metrics events as a safeguard against cardinality explosion

status: beta egress: stream state: stateful

Limits the cardinality of tags on metric events, protecting against accidental high-cardinality usage that commonly disrupts the stability of metrics storage systems.

The default behavior is to drop the tag from incoming metrics when the configured limit would be exceeded. Note that this is usually only useful when applied to incremental counter metrics, and it can have unintended effects when applied to other metric types. The default action can be changed with the limit_exceeded_action option.
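To illustrate why drop_tag is mainly suited to incremental counters, here is a hypothetical Python sketch (not Vector's code): once a high-cardinality tag is dropped, the affected events collapse onto a single series; increments still sum correctly, but absolute values such as gauges overwrite each other.

```python
# Hypothetical sketch of what happens downstream after `drop_tag` removes a
# high-cardinality tag: all affected events collapse onto one tag set.

# Incremental counters: increments from the collapsed series still add up.
incremental = [("logins", {}, 2), ("logins", {}, 3)]  # (name, tags, value)
total = sum(value for _, _, value in incremental)
print(total)  # 5 -- the aggregate count is preserved

# Absolute values (e.g. gauges): the collapsed series overwrite each other.
gauges = [("temperature", {}, 70.0), ("temperature", {}, 10.0)]
series = {}
for name, tags, value in gauges:
    # later values clobber earlier ones on the now-identical series key
    series[(name, tuple(sorted(tags.items())))] = value
print(series)  # only one reading survives
```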

Configuration

Example configurations

{
  "transforms": {
    "my_transform_id": {
      "type": "tag_cardinality_limit",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "mode": "exact"
    }
  }
}
[transforms.my_transform_id]
type = "tag_cardinality_limit"
inputs = [ "my-source-or-transform-id" ]
mode = "exact"
transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    mode: exact
{
  "transforms": {
    "my_transform_id": {
      "type": "tag_cardinality_limit",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "cache_size_per_key": 5120,
      "limit_exceeded_action": "drop_tag",
      "mode": "exact",
      "value_limit": 500
    }
  }
}
[transforms.my_transform_id]
type = "tag_cardinality_limit"
inputs = [ "my-source-or-transform-id" ]
cache_size_per_key = 5_120
limit_exceeded_action = "drop_tag"
mode = "exact"
value_limit = 500
transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    cache_size_per_key: 5120
    limit_exceeded_action: drop_tag
    mode: exact
    value_limit: 500

cache_size_per_key

optional uint

The size of the cache for detecting duplicate tags, in bytes.

The larger the cache, the lower the chance of a false positive, that is, a case where a new value for a tag is allowed even after the configured limit has been reached.

default: 5120
Relevant when: mode = "probabilistic"

graph

optional object

Extra graph configuration.

Configures how this component is rendered when a graph is generated with the graph command.

graph.node_attributes

optional object

Node attributes to add to this component's node in the resulting graph.

They are added to the node as provided.

graph.node_attributes.*
required string literal
A single graph node attribute in graphviz DOT language.
Examples
{
  "color": "red",
  "name": "Example Node",
  "width": "5.0"
}

inputs

required [string]

A list of upstream source or transform IDs.

Wildcards (*) are supported.

See configuration for more info.

Array string literal
Examples
[
  "my-source-or-transform-id",
  "prefix-*"
]

limit_exceeded_action

optional string literal enum
Possible actions to take when an event arrives that would exceed the cardinality limit for one or more of its tags.
Enum options string literal
drop_event
Drop the entire event itself.
drop_tag
Drop the tag(s) that would exceed the configured limit.
default: drop_tag

mode

required string literal enum
Controls the approach taken for tracking tag cardinality.
Examples
"exact"
"probabilistic"
Enum options string literal
exact

Tracks cardinality exactly.

This mode has higher memory requirements than probabilistic, but never falsely outputs metrics with new tags after the limit has been hit.

probabilistic

Tracks cardinality probabilistically.

This mode has lower memory requirements than exact, but may occasionally allow metric events to pass through the transform even when they contain new tags that exceed the configured limit. The rate at which this happens can be controlled by changing the value of cache_size_per_key.
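As a rough illustration of the probabilistic approach, the sketch below implements a tiny bloom filter in Python. This is illustrative only; Vector's internal implementation, hashing scheme, and parameters may differ.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter sketch (illustrative; not Vector's implementation)."""

    def __init__(self, size_bits: int, num_hashes: int):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, value: str):
        # Derive k bit positions from salted hashes of the value.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, value: str) -> None:
        for pos in self._positions(value):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, value: str) -> bool:
        # False means definitely not seen; True means seen OR a false positive.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(value))

# 5120 bytes per key, as in the default cache_size_per_key
bf = BloomFilter(size_bits=5120 * 8, num_hashes=7)
bf.add("user_id_1")
print(bf.might_contain("user_id_1"))  # True
print(bf.might_contain("user_id_2"))  # False (with overwhelming probability at this fill level)
```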

value_limit

optional uint
How many distinct values to accept for any given key.
default: 500

Outputs

<component_id>

Default output stream of the component. Use this component’s ID as an input to downstream transforms and sinks.

Telemetry

Metrics


component_discarded_events_total

counter
The number of events dropped by this component.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
host optional
The hostname of the system Vector is running on.
intentional
True if the events were discarded intentionally (as by a filter transform), or false if they were discarded due to an error.
pid optional
The process ID of the Vector instance.

component_errors_total

counter
The total number of errors encountered by this component.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
error_type
The type of the error
host optional
The hostname of the system Vector is running on.
pid optional
The process ID of the Vector instance.
stage
The stage within the component at which the error occurred.

component_received_event_bytes_total

counter
The number of event bytes accepted by this component either from tagged origins like file and uri, or cumulatively from other origins.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
container_name optional
The name of the container from which the data originated.
file optional
The file from which the data originated.
host optional
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the data originated.
peer_path optional
The pathname from which the data originated.
pid optional
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the data originated.
uri optional
The sanitized URI from which the data originated.

component_received_events_count

histogram

A histogram of the number of events passed in each internal batch in Vector’s internal topology.

Note that this is separate from sink-level batching. It is mostly useful for low-level debugging of performance issues in Vector caused by small internal batches.

component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
container_name optional
The name of the container from which the data originated.
file optional
The file from which the data originated.
host optional
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the data originated.
peer_path optional
The pathname from which the data originated.
pid optional
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the data originated.
uri optional
The sanitized URI from which the data originated.

component_received_events_total

counter
The number of events accepted by this component either from tagged origins like file and uri, or cumulatively from other origins.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
container_name optional
The name of the container from which the data originated.
file optional
The file from which the data originated.
host optional
The hostname of the system Vector is running on.
mode optional
The connection mode used by the component.
peer_addr optional
The IP from which the data originated.
peer_path optional
The pathname from which the data originated.
pid optional
The process ID of the Vector instance.
pod_name optional
The name of the pod from which the data originated.
uri optional
The sanitized URI from which the data originated.

component_sent_event_bytes_total

counter
The total number of event bytes emitted by this component.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
host optional
The hostname of the system Vector is running on.
output optional
The specific output of the component.
pid optional
The process ID of the Vector instance.

component_sent_events_total

counter
The total number of events emitted by this component.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
host optional
The hostname of the system Vector is running on.
output optional
The specific output of the component.
pid optional
The process ID of the Vector instance.

tag_value_limit_exceeded_total

counter
The total number of events discarded because the tag has been rejected after hitting the configured value_limit.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
host optional
The hostname of the system Vector is running on.
pid optional
The process ID of the Vector instance.

utilization

gauge
A ratio from 0 to 1 of the load on a component. A value of 0 would indicate a completely idle component that is simply waiting for input. A value of 1 would indicate a component that is never idle. This value is updated every 5 seconds.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
host optional
The hostname of the system Vector is running on.
pid optional
The process ID of the Vector instance.

value_limit_reached_total

counter
The total number of times new values for a key have been rejected because the value limit has been reached.
component_id
The Vector component ID.
component_kind
The Vector component kind.
component_type
The Vector component type.
host optional
The hostname of the system Vector is running on.
pid optional
The process ID of the Vector instance.

Examples

Drop high-cardinality tag

Given this event...
[{"metric":{"counter":{"value":2},"kind":"incremental","name":"logins","tags":{"user_id":"user_id_1"}}},{"metric":{"counter":{"value":2},"kind":"incremental","name":"logins","tags":{"user_id":"user_id_2"}}}]
...and this configuration...
transforms:
  my_transform_id:
    type: tag_cardinality_limit
    inputs:
      - my-source-or-transform-id
    value_limit: 1
    limit_exceeded_action: drop_tag
[transforms.my_transform_id]
type = "tag_cardinality_limit"
inputs = [ "my-source-or-transform-id" ]
value_limit = 1
limit_exceeded_action = "drop_tag"
{
  "transforms": {
    "my_transform_id": {
      "type": "tag_cardinality_limit",
      "inputs": [
        "my-source-or-transform-id"
      ],
      "value_limit": 1,
      "limit_exceeded_action": "drop_tag"
    }
  }
}
...this Vector event is produced:
[{"metric":{"counter":{"value":2},"kind":"incremental","name":"logins","tags":{"user_id":"user_id_1"}}},{"metric":{"counter":{"value":2},"kind":"incremental","name":"logins","tags":{}}}]
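The documented behavior can be reproduced with a short Python sketch of the exact-mode, drop_tag semantics. This is an illustration of the behavior described above, not Vector's actual code.

```python
from collections import defaultdict

def tag_cardinality_limit(events, value_limit=1, action="drop_tag"):
    """Sketch of exact mode: track distinct values seen per tag key."""
    seen = defaultdict(set)  # tag key -> set of accepted values
    out = []
    for event in events:
        tags = event["metric"]["tags"]
        # Keys whose new value would push them past the limit.
        over = [k for k, v in tags.items()
                if v not in seen[k] and len(seen[k]) >= value_limit]
        if over and action == "drop_event":
            continue  # drop the entire event
        for k in over:
            del tags[k]  # drop only the offending tag(s)
        for k, v in tags.items():
            seen[k].add(v)
        out.append(event)
    return out

events = [
    {"metric": {"counter": {"value": 2}, "kind": "incremental",
                "name": "logins", "tags": {"user_id": "user_id_1"}}},
    {"metric": {"counter": {"value": 2}, "kind": "incremental",
                "name": "logins", "tags": {"user_id": "user_id_2"}}},
]
result = tag_cardinality_limit(events, value_limit=1)
print(result[1]["metric"]["tags"])  # {} -- user_id dropped from the second event
```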

How it works

Intended Usage

This transform is intended as a protection mechanism against upstream mistakes, such as a developer accidentally adding a request_id tag. When this happens, it is recommended to fix the upstream error as soon as possible, because Vector's cardinality cache is held in memory and is erased when Vector is restarted. After a restart, new tag values will pass through until the cardinality limit is reached again. For normal usage this should not be a common problem, since Vector processes are typically long-lived.

Memory Usage Estimates

This transform stores in memory a copy of the key for every tag on every metric event it sees. In mode exact, a copy of every distinct value for each key is also kept in memory, until value_limit distinct values have been seen for a given key, at which point new values for that key are rejected. To estimate the memory usage of this transform in mode exact, you can use the following formula:

(number of distinct field names in the tags for your metrics * average length of
the field names for the tags) + (number of distinct field names in the tags of
your metrics * `value_limit` * average length of the values of tags for your
metrics)
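As a worked example, plugging some assumed workload numbers into the formula above (the key counts and lengths below are illustrative assumptions, not measurements):

```python
# Assumed workload characteristics (illustrative only)
distinct_tag_keys = 20   # distinct field names across all metric tags
avg_key_len = 12         # average tag key length, in bytes
value_limit = 500        # as configured on the transform
avg_value_len = 24       # average tag value length, in bytes

# exact mode: all keys, plus up to value_limit distinct values per key
exact_bytes = (distinct_tag_keys * avg_key_len
               + distinct_tag_keys * value_limit * avg_value_len)
print(f"estimated exact-mode cache size: {exact_bytes} bytes")  # 240240 bytes
```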

In mode probabilistic, rather than storing all values seen for each key, each distinct key has a bloom filter which can probabilistically determine whether a given value has been seen for that key. The formula for estimating memory usage in mode probabilistic is:

(number of distinct field names in the tags for your metrics * average length of
the field names for the tags) + (number of distinct field names in the tags of
your metrics * `cache_size_per_key`)
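As a worked example, plugging assumed numbers into this formula with the default cache_size_per_key (the key counts and lengths are illustrative assumptions, not measurements):

```python
# Assumed workload characteristics (illustrative only)
distinct_tag_keys = 20     # distinct field names across all metric tags
avg_key_len = 12           # average tag key length, in bytes
cache_size_per_key = 5120  # the default, in bytes

# probabilistic mode: all keys, plus one bloom filter per key
prob_bytes = (distinct_tag_keys * avg_key_len
              + distinct_tag_keys * cache_size_per_key)
print(f"estimated probabilistic-mode cache size: {prob_bytes} bytes")  # 102640 bytes
```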

The cache_size_per_key option controls the size of the bloom filter used to store the set of acceptable values for any single key. The larger the bloom filter, the lower the false positive rate, which in our case means the less likely we are to allow a new tag value that would otherwise violate the configured limit. If you want to know the exact false positive rate for a given cache_size_per_key and value_limit, there are many free online bloom filter calculators that can answer this. The formula is generally presented in terms of 'n', 'p', 'k', and 'm', where 'n' is the number of items in the filter (value_limit in our case), 'p' is the probability of false positives (what we want to solve for), 'k' is the number of hash functions used internally, and 'm' is the number of bits in the bloom filter. You should be able to provide values for just 'n' and 'm' and get back the value for 'p' with an optimal 'k' selected for you. Remember when converting from cache_size_per_key to the 'm' value to plug into the calculator that cache_size_per_key is in bytes, while 'm' is usually presented in bits (1 byte = 8 bits).
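For a rough answer without a calculator, the standard bloom-filter approximation p = (1 - e^(-k*n/m))^k, with the optimal k = (m/n) * ln 2, can be evaluated directly. Here the default cache_size_per_key and value_limit are used for 'm' and 'n':

```python
import math

# Standard bloom-filter false-positive estimate: p = (1 - e^(-k*n/m))^k
# with the optimal number of hash functions k = (m/n) * ln 2.
n = 500        # items in the filter (value_limit)
m = 5120 * 8   # bits (cache_size_per_key is in bytes; m is in bits)
k = max(1, round(m / n * math.log(2)))
p = (1 - math.exp(-k * n / m)) ** k
print(f"k = {k}, false positive rate ~ {p:.2e}")
```

With the defaults the false positive rate is vanishingly small, which is why the defaults are a reasonable starting point for most workloads.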

Restarts

This transform’s cache is held in memory, and therefore, restarting Vector will reset the cache. This means that new values will be passed through until the cardinality limit is reached again. See intended usage for more info.

State

This component is stateful, meaning its behavior changes based on previous inputs (events). State is not preserved across restarts, therefore state-dependent behavior will reset between restarts and depend on the inputs (events) received since the most recent restart.