Logs disappearing after 3 hours in Grafana

I am running into an issue where I try to display Loki logs in Grafana and they disappear after 3 hours.

For storage I use ODF, and my Loki configuration is shown below.

I tried switching between Loki 2.8.4, 2.9.1, and latest, but the same issue occurs on all of them:

kind: ConfigMap
apiVersion: v1
metadata:
  name: loki
  namespace: "{{ $.Values.namespace }}"
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/name: loki
data:
  config.yaml: |
    auth_enabled: true

    common:
      compactor_address: 'loki-backend'
      path_prefix: /var/loki
      replication_factor: 3
      storage:
        s3:
          s3: https://<access_key>:<secret_key>@<endpoint>/<bucket_name>/
          http_config:
            insecure_skip_verify: true
    frontend:
      scheduler_address: query-scheduler-discovery.{{ $.Values.namespace }}.svc.cluster.local.:9095
    frontend_worker:
      scheduler_address: query-scheduler-discovery.{{ $.Values.namespace }}.svc.cluster.local.:9095
    index_gateway:
      mode: ring
    limits_config:
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      ingestion_rate_mb: 400
      ingestion_burst_size_mb: 400
      max_global_streams_per_user: 100000
      max_streams_per_user: 100000
      max_query_length: 0
      retention_period: 168h
    memberlist:
      join_members:
      - loki-memberlist
    querier:
      engine:
        timeout: 5m
      query_timeout: 5m
    query_range:
      align_queries_with_step: true
    ruler:
      storage:
        s3:
          s3: https://<access_key>:<secret_key>@<endpoint>/<bucket_name>/
          http_config:
            insecure_skip_verify: true
          s3forcepathstyle: true
        type: s3
    runtime_config:
      file: /etc/loki/runtime-config/runtime-config.yaml
    compactor:
      retention_delete_delay: 1m
      retention_enabled: true
    schema_config:
      configs:
      - from: "2024-03-27"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v12
        store: boltdb-shipper
    ingester:
      chunk_idle_period: 24h
      max_chunk_age: 48h
    server:
      grpc_listen_port: 9095
      http_listen_port: 3100
    storage_config:
      hedging:
        at: 250ms
        max_per_second: 20
        up_to: 3

At one point, after changing the S3 settings, 7 days of logs appeared, but they disappeared again soon after. I tried searching through the logs for issues but didn't find anything standing out. Can anyone assist? I can provide logs.

  1. Double-check that your chunks are actually landing in S3.

  2. The default value of query_ingesters_within is 3 hours: queriers only ask the ingesters for data within that window. With chunk_idle_period: 24h and max_chunk_age: 48h, chunks can sit in the ingesters well past 3 hours before being flushed to S3, so older logs stop showing up in queries. If you keep those values, you'll want to raise query_ingesters_within as well (see the sketch after this list).

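A minimal sketch of what that could look like in the querier block of the config.yaml above (48h is just an assumed value chosen to match max_chunk_age; depending on the Loki version this setting may live under limits_config instead):

querier:
  engine:
    timeout: 5m
  query_timeout: 5m
  # Maximum lookback for which queriers also ask the ingesters for data
  # that has not been flushed to object storage yet. The 3h default is
  # shorter than chunk_idle_period (24h) / max_chunk_age (48h) above, so
  # chunks still held only by the ingesters past 3h may not be returned.
  query_ingesters_within: 48h

Alternatively, lowering chunk_idle_period and max_chunk_age so that chunks are flushed to S3 sooner would close the same gap.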