What should I set as the tenant ID keys in the limits_config overrides file (/etc/overrides.yaml)?

I am setting the retention settings in limits_config as shown below.

limits_config:
      retention_period: 8760h
      enforce_metric_name: false
      max_cache_freshness_per_query: 10m
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      split_queries_by_interval: 15m
      per_tenant_override_config: /etc/limits/overrides.yaml
compactor:
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150

overrides.yaml is as follows.

data:
  overrides.yaml: |
    overrides:
      "tenant-1":
        retention_period: 24h
      "tenant-2":
        retention_period: 24h
kind: ConfigMap
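
For reference, per_tenant_override_config in the main config points at /etc/limits/overrides.yaml, so this ConfigMap has to be mounted at that path inside the Loki pods. A rough sketch of how that mount could look, assuming the ConfigMap is named loki-overrides (the name is hypothetical):

# Sketch only: mount the overrides ConfigMap so Loki can read
# /etc/limits/overrides.yaml (the path used by per_tenant_override_config).
spec:
  containers:
    - name: loki
      volumeMounts:
        - name: overrides
          mountPath: /etc/limits
  volumes:
    - name: overrides
      configMap:
        name: loki-overrides   # hypothetical ConfigMap name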

I want to ask: should the tenant-1 key match the value of the X-Scope-OrgID header that is sent when pushing to the API?
The documentation uses numbers such as 29 and 30 as keys, but I could not figure out what those values are based on.

Yes, those are the values that should match X-Scope-OrgID when ingesting logs.
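
For example, if you push logs with Promtail, the value of X-Scope-OrgID is taken from the client's tenant_id setting, so it has to match one of the keys in overrides.yaml. A minimal sketch (the push URL is an assumption):

# Promtail client config sketch; the URL below is hypothetical.
clients:
  - url: http://loki-write:3100/loki/api/v1/push
    tenant_id: tenant-1   # sent as the X-Scope-OrgID header; must match the overrides key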

@tonyswumac Thanks for your reply!
I understand the keys' values now. However, it seems that the index and chunk data for tenant-1 and tenant-2 are not deleted after 24h.
Do you have any idea what the cause might be?

I don’t see anything obvious. According to your configuration, chunks should be deleted after roughly 26 hours. If they are not, check and make sure your compactor is actually reading the correct config file. Check the logs too and see if there is anything obvious there.
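
If it helps, one way to check which file the compactor target is actually loading is to look at the -config.file argument on the pod running it. A hypothetical excerpt of the container args (the path is an assumption):

# Hypothetical excerpt from the StatefulSet running the compactor target.
containers:
  - name: loki
    args:
      - -config.file=/etc/loki/config/config.yaml   # must be the file containing your compactor and limits_config blocks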

I observed the state of the chunks and index for a while, and I confirmed that the indexes were deleted by the compactor, but the chunks were not deleted yet.
Does the limits_config overrides configuration not apply the retention rule to chunks?

It does, but there is usually a delay, controlled by retention_delete_delay, so you should see chunks removed after that time frame.

I figured out why the chunks seemed not to be deleted.
To get straight to the point: the cause is that I restarted the loki-write and loki-read pods soon after configuring limits_config.

So I think the marker files that indicate which chunks should be deleted were lost from the pods when they restarted.

I checked the chunk data in the S3 storage directory: chunks created after the pods were restarted were eventually deleted, while the chunks that were not deleted had all been created before the restart.
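
That would be consistent with how retention works: the compactor writes marker files for chunks that should be deleted and only removes the chunks themselves after retention_delete_delay, so markers kept on ephemeral pod storage can be lost across a restart. A hedged sketch of keeping the compactor's working directory on persistent storage (the path is an assumption):

compactor:
  working_directory: /var/loki/compactor   # keep on a persistent volume so marker files survive pod restarts
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150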

Thank you @tonyswumac. That completely resolved my question.
