Setting Log Retention Policy - Grafana-loki

Hi Team,

A fluent-bit DaemonSet is forwarding logs to Loki running in the same namespace, and I've validated the configuration as well. My Loki configuration is below:

auth_enabled: false
chunk_store_config:
  chunk_cache_config:
    memcached:
      batch_size: 100
      parallelism: 100
    memcached_client:
      addresses: dns+loki-memcachedchunks:11211
      consistent_hash: true

compactor:
  compaction_interval: 10m
  delete_request_store: filesystem
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  retention_enabled: true
  working_directory: /bitnami/grafana-loki/loki/retention

distributor:
  ring:
    kvstore:
      store: memberlist

ingester:
  chunk_block_size: 262144
  chunk_encoding: snappy
  chunk_idle_period: 30m
  chunk_retain_period: 1m
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 1
  wal:
    dir: /bitnami/grafana-loki/wal

limits_config:
  allow_structured_metadata: true
  max_cache_freshness_per_query: 10m
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: 336h
  split_queries_by_interval: 15m

memberlist:
  join_members:
    - loki-grafana-loki-gossip-ring

querier:
  max_concurrent: 16

query_range:
  align_queries_with_step: true
  cache_results: true
  max_retries: 5
  results_cache:
    cache:
      memcached_client:
        addresses: dns+loki-memcachedfrontend:11211
        consistent_hash: true
        max_idle_conns: 16
        timeout: 500ms
        update_interval: 1m

query_scheduler:
  max_outstanding_requests_per_tenant: 32768

ruler:
  alertmanager_url: https://alertmanager.xx
  external_url: https://alertmanager.xx
  ring:
    kvstore:
      store: memberlist
  rule_path: /tmp/loki/scratch
  storage:
    local:
      directory: /bitnami/grafana-loki/conf/rules
    type: local

schema_config:
  configs:
    - from: "2020-10-24"
      index:
        period: 24h
        prefix: index_
      object_store: filesystem
      schema: v11
      store: boltdb-shipper
    - from: "2024-03-12"
      index:
        period: 24h
        prefix: index_
      object_store: filesystem
      schema: v12
      store: tsdb
    - from: "2024-04-23"
      index:
        period: 24h
        prefix: index_
      object_store: filesystem
      schema: v13
      store: tsdb

server:
  grpc_listen_port: 9095
  http_listen_port: 3100

storage_config:
  boltdb_shipper:
    active_index_directory: /bitnami/grafana-loki/loki/index
    cache_location: /bitnami/grafana-loki/loki/cache
    cache_ttl: 168h
  filesystem:
    directory: /bitnami/grafana-loki/chunks
  index_queries_cache_config: null
  tsdb_shipper:
    active_index_directory: /bitnami/grafana-loki/loki/tsdb-index
    cache_location: /bitnami/grafana-loki/loki/tsdb-cache

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

The compactor has "retention_enabled: true" and limits_config has "retention_period: 336h". But in Grafana I am not able to see older logs, for example logs from 1 or 2 days ago.

Am I missing any configuration specific to retention?
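As far as I understand from the Loki retention docs, these are the only retention-specific pieces, and they are already set in the config above (values copied from my config; the other compactor and limits settings should be unrelated to retention):

compactor:
  retention_enabled: true           # compactor enforces retention
  delete_request_store: filesystem  # store for delete requests (filesystem here)
  working_directory: /bitnami/grafana-loki/loki/retention
limits_config:
  retention_period: 336h            # keep logs for 14 days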

@tonyswumac Appreciate your suggestions/inputs on this

Looks like you are running Loki with filesystem storage. Have you checked and made sure index files and chunks are being written to your filesystem storage? Any errors from the querier?

Also, your configuration is not structured, which makes it a bit hard to read.

Hey, sorry for the late response.

I see the files are available, but loading them is failing, as I can see from the querier logs:

level=error ts=2024-09-03T13:15:52.02135439Z caller=batch.go:726 org_id=fake traceID=45ce8c25b39bd51a msg="error fetching chunks" err="failed to load chunk 'fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=': open /bitnami/grafana-loki/chunks/fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=: no such file or directory"

level=error ts=2024-09-03T13:15:52.421916486Z caller=batch.go:726 org_id=fake traceID=28e1ac7f2e79b075 msg="error fetching chunks" err="failed to load chunk 'fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=': open /bitnami/grafana-loki/chunks/fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=: no such file or directory"

level=error ts=2024-09-03T13:15:52.423324215Z caller=errors.go:26 org_id=fake traceID=28e1ac7f2e79b075 message="closing iterator" error="failed to load chunk 'fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=': open /bitnami/grafana-loki/chunks/fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=: no such file or directory"

level=error ts=2024-09-03T13:15:53.244392222Z caller=batch.go:726 org_id=fake traceID=45ce8c25b39bd51a msg="error fetching chunks" err="failed to load chunk 'fake/2b5070ecfea7208a/MTkxYjdlMTg0Zjg6MTkxYjdmMzNlNDk6YjhlNGE1OTA=': open /bitnami/grafana-loki/chunks/fake/2b5070ecfea7208a/MTkxYjdlMTg0Zjg6MTkxYjdmMzNlNDk6YjhlNGE1OTA=: no such file or directory"

level=error ts=2024-09-03T13:15:53.490375766Z caller=batch.go:726 org_id=fake traceID=45ce8c25b39bd51a msg="error fetching chunks" err="failed to load chunk 'fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=': open /bitnami/grafana-loki/chunks/fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=: no such file or directory"

level=error ts=2024-09-03T13:15:54.444427029Z caller=batch.go:726 org_id=fake traceID=45ce8c25b39bd51a msg="error fetching chunks" err="failed to load chunk 'fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=': open /bitnami/grafana-loki/chunks/fake/2b5070ecfea7208a/MTkxYjdmMzNlNDk6MTkxYjgwNWI0NTU6ZWRiNTlhMmQ=: no such file or directory"

@tonyswumac Looks like it's a limitation, since I am using local storage (filesystem) for persistence.

That's only a limitation if you are using the Helm chart for your deployment. If you are, then yes, for a single instance I would say you do not want to use the Helm chart.
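If you do want to keep filesystem storage, my understanding is it only works reliably with a single Loki instance (single-binary mode) writing to one volume. Roughly a sketch like this, where the paths and the common block are illustrative rather than taken from your config above:

auth_enabled: false

common:
  path_prefix: /loki               # assumed data directory
  replication_factor: 1            # single instance, no replication
  ring:
    kvstore:
      store: inmemory              # no gossip ring needed for one instance
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules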
