Seeking clarification on long-term persistence with object store


Recently stumbled upon Loki and have been piecing a deployment together on AWS EKS Fargate. After writing my own log exporter that is compatible with Fargate, I finally got logs rendering in Grafana.

Now I am trying to configure Loki with S3 and boltdb-shipper so my indices and chunks live in S3 for the long term. For compliance purposes, I’d like them to live indefinitely. I’ve read through the docs, but would like to validate some assumptions before rolling my configuration into production:

  • Loki (boltdb-shipper, I think) will not delete indices or chunks stored in S3 – I must configure my own retention policy to delete them as I see fit. If I don’t, they will live forever.
  • a fresh install of Loki would regenerate its logs from the indices & chunks stored in S3
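On the first point, if I’ve understood the docs correctly, Loki-side deletion only happens when the compactor’s retention is enabled, so for truly indefinite storage I’d expect to keep it disabled – something like the sketch below (field names are from the Loki config reference; treating `0s` as “no time-based retention” is my reading of the docs, please correct me if that’s wrong):

    # Sketch: keep data indefinitely; only S3 lifecycle rules (if any) delete objects
    compactor:
      retention_enabled: false   # compactor only compacts, never deletes
    limits_config:
      retention_period: 0s       # assumption: 0s disables time-based retention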

Essentially, what I seek to accomplish is running Loki stateless, with all indices & chunks shipped to S3 for long-term storage & compliance. I’ll include my config as well, in case it’s useful:

    ingester:
      chunk_idle_period: 3m
      chunk_block_size: 262144
      chunk_retain_period: 1m
      max_transfer_retries: 0
      lifecycler:
        ring:
          kvstore:
            store: inmemory
          replication_factor: 1

    limits_config:
      enforce_metric_name: false
      retention_period: 24h
      reject_old_samples: true
      reject_old_samples_max_age: 18h

    compactor:
      working_directory: /data/loki/boltdb-shipper-compactor
      shared_store: aws
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150

    schema_config:
      configs:
        - from: 2022-01-01
          store: boltdb-shipper
          object_store: aws
          schema: v11
          index:
            prefix: loki_index_
            period: 24h

    storage_config:
      aws:
        s3: s3://REGION/MY-BUCKET
        sse_encryption: true
      boltdb_shipper:
        active_index_directory: data/loki/index
        cache_location: data/loki/boltdb-cache
        shared_store: s3

@mitchell9 Is the configuration that you posted here working? Is this storing both indexes and chunks on S3? I’m asking this because I have a very similar trial setup, but it somehow only stores the index on S3, not the chunks.

I can only answer your question about retention: AFAIK, when you are storing your indexes and chunks on S3, you do indeed have to set up lifecycle rules yourself to delete old objects.
BTW, even when you want to keep objects indefinitely, you can set up rules to move objects from Standard to Standard-IA to Glacier Instant Retrieval, based on their age.
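To make that concrete, a lifecycle configuration along these lines transitions objects between storage classes without ever expiring them (the rule ID and day thresholds are just placeholders I made up; the minimum for a Standard-IA transition is 30 days):

    {
      "Rules": [
        {
          "ID": "loki-tiering-no-expiry",
          "Status": "Enabled",
          "Filter": { "Prefix": "" },
          "Transitions": [
            { "Days": 30, "StorageClass": "STANDARD_IA" },
            { "Days": 90, "StorageClass": "GLACIER_IR" }
          ]
        }
      ]
    }

You would apply it with something like `aws s3api put-bucket-lifecycle-configuration --bucket MY-BUCKET --lifecycle-configuration file://lifecycle.json`. Since there is no `Expiration` action in the rule, objects only move between storage classes and are never deleted.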

Have you tried to set up another Loki instance with this setup, and does it pull data straight from S3?

To answer my own question about ‘is it working’: I got your example working with minimal modifications, though I’m running Loki 2.4.2 on Docker.

The config below is incomplete, as it only shows the modifications I made to get it to work:

    auth_enabled: false

    ingester:
      wal:
        dir: /loki/wal

    server:
      http_listen_port: 3100
      grpc_listen_port: 9096

    compactor:
      working_directory: /loki/boltdb-shipper-compactor

    storage_config:
      aws:
        s3forcepathstyle: true
      boltdb_shipper:
        active_index_directory: /loki/index
        cache_location: /loki/boltdb-cache

To reply to your question about ‘does a fresh Loki install pull its data from S3 again’: I just tested this out and it works!

What I did was stop Loki and then clear out the whole data directory. After restarting Loki, it neatly pulled data down from S3 when I started sending queries.