Recently stumbled upon Loki and have been piecing a deployment together on AWS EKS Fargate. After writing my own log exporter that is compatible w/ Fargate, I finally got logs rendering in Grafana.
Now, I am trying to configure Loki with S3 and boltdb-shipper so my indices and chunks live in S3 for the long term. For compliance purposes, I'd like them to live indefinitely. I've read through the docs, but would like to validate some assumptions before rolling my configuration into production:
- loki (boltdb-shipper(?)) will not delete indices or chunks stored in s3 – I must configure my own retention policy to delete them as I see fit. If I don’t do this, they will live forever.
- a fresh install of loki would be able to serve all existing logs by reading the indices & chunks already stored in s3 – no local state needs to survive a pod restart
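If I do end up wanting the bucket to manage object lifecycle itself (e.g. moving old chunks to cheaper storage without ever deleting them), I'm assuming an S3 lifecycle configuration along these lines would do it — the rule ID and the 90-day cutoff are placeholders I picked, not anything Loki mandates, and I understand Loki can't query chunks once they've transitioned to Glacier without restoring them first, so this only fits true cold archive:

```json
{
  "Rules": [
    {
      "ID": "archive-loki-chunks",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Note there is deliberately no `Expiration` action, so nothing is ever deleted — it would be applied with `aws s3api put-bucket-lifecycle-configuration --bucket MY-BUCKET --lifecycle-configuration file://lifecycle.json`.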
Essentially, what I want is to run loki stateless, with all indices & chunks shipped to s3 for long-term storage & compliance. I'll include my config as well, in case it's useful:
```yaml
config:
  ingester:
    chunk_idle_period: 3m
    chunk_block_size: 262144
    chunk_retain_period: 1m
    max_transfer_retries: 0
    lifecycler:
      ring:
        kvstore:
          store: inmemory
        replication_factor: 1
  limits_config:
    enforce_metric_name: false
    retention_period: 24h
    reject_old_samples: true
    reject_old_samples_max_age: 18h
  compactor:
    working_directory: /data/loki/boltdb-shipper-compactor
    shared_store: aws
    compaction_interval: 10m
    retention_enabled: true
    retention_delete_delay: 2h
    retention_delete_worker_count: 150
  schema_config:
    configs:
      - from: 2022-01-01
        store: boltdb-shipper
        object_store: aws
        schema: v11
        index:
          prefix: loki_index_
          period: 24h
  storage_config:
    aws:
      s3: s3://REGION/MY-BUCKET
      sse_encryption: true
    boltdb_shipper:
      active_index_directory: data/loki/index
      cache_location: data/loki/boltdb-cache
      shared_store: s3
```
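One thing I'm second-guessing in the above: my reading of the docs is that `retention_enabled: true` on the compactor, combined with `retention_period: 24h` in `limits_config`, tells the compactor to delete anything older than 24h from s3 — the opposite of keeping data indefinitely. If that's right, a sketch like this (unverified, assuming everything else stays the same) is probably closer to what I actually want:

```yaml
compactor:
  working_directory: /data/loki/boltdb-shipper-compactor
  shared_store: aws
  compaction_interval: 10m
  retention_enabled: false   # compactor only compacts indices; never deletes data

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 18h
  # no retention_period set -> nothing expires
```

Would appreciate confirmation on whether that reading of `retention_enabled` / `retention_period` is correct.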