Losing index after Loki Container shutdown

Hello everybody!
I have a somewhat strange problem at hand. I use a Docker stack containing Grafana, Loki and SeaweedFS, based on a Docker Compose example from here. I swapped MinIO for SeaweedFS S3 storage because I need to pay attention to licensing in my setup. SeaweedFS works fine with the other containers, but there is a fatal flaw: when I stop the Compose stack or, god forbid, the server goes down, Loki sometimes can't write the last index to storage. If I then restart the stack, Loki sometimes can't find an index in the S3 storage, because it looks for an index that it only saved to its persistent index storage on my system and that SeaweedFS doesn't have. If this happens, all logs before that point are no longer accessible.

Now the question: what is the best practice to prevent this?
For some more context, here is my Loki config:

auth_enabled: false
ingester:
  chunk_idle_period: 2h
  chunk_target_size: 1536000
  max_chunk_age: 2h
server:
  http_listen_port: 3100
  http_server_read_timeout: 10m
  http_server_write_timeout: 10m
  http_server_idle_timeout: 10m
  grpc_server_max_recv_msg_size: 104857600 
  grpc_server_max_send_msg_size: 104857600 
memberlist:
  join_members:
    - loki:7946
limits_config:
  retention_period: 2160h
  max_global_streams_per_user: 1000000
  max_streams_per_user: 1000000
  max_query_series: 2000
  per_stream_rate_limit: 256MB
  ingestion_burst_size_mb: 256
  ingestion_rate_mb: 256
  max_cache_freshness_per_query: 5m
  max_concurrent_tail_requests: 1000000
query_range:
  max_retries: 5
  align_queries_with_step: true
  parallelise_shardable_queries: true
  cache_results: true
schema_config:
  configs:
    - from: 2023-12-12
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h              
common:
  path_prefix: /loki
  replication_factor: 1
  storage:
    s3:
      endpoint: seaweedfs:8333
      insecure: true
      bucketnames: loki-data
      access_key_id: ******
      secret_access_key: ********
      s3forcepathstyle: true
      region: null
  ring:
      kvstore:
        store: memberlist

Sounds like you are missing indices and/or chunks during shutdown. A couple of things to try:

  1. Use a persistent volume, or mount a drive from the host into the Docker container. I am assuming you are running a single instance, so mounting a drive should not be a problem.

  2. Disable the WAL, which should force flushing at shutdown (not ideal).

  3. Force flushing at shutdown by enabling wal.flush_on_shutdown under the ingester configuration (see the sketch below).

See Write Ahead Log | Grafana Loki documentation for more information.
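A minimal sketch of what option 3 could look like, merged into your existing ingester block (key names as documented for the Loki WAL):

ingester:
  wal:
    enabled: true
    # force a flush of in-memory data when the process shuts down cleanly
    flush_on_shutdown: true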

Thank you!
I'm trying the things you suggested, but I have a question about the WAL. As I understand it, the WAL is temporary data that keeps track of where Loki is at a given moment, i.e. the not-yet-ready-to-write index that Loki is still collecting, so that this "bundle" can later be stored in storage. Because it is temporary data, it will not be stored in a volume or on a mounted disk.

Here is a sample node from my Compose file:

  write_1:
    image: grafana/loki:main
    command: "-config.file=/etc/loki/config.yaml -target=write -config.expand-env=true"
    ports:
      - 3100
      - 7946
      - 9095
    volumes:
      - write_1_storage:/loki
      - ./loki-config.yaml:/etc/loki/config.yaml
      - ./certs/grafana.crt:/etc/loki/grafana.crt
      - ./certs/grafana.key:/etc/loki/grafana.key   
    depends_on:
      - seaweedfs
      - gateway
    networks:
       loki:
         aliases:
           - write
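So if I point the WAL directory somewhere under /loki, it should already end up inside the write_1_storage volume? Something like this is just my guess:

ingester:
  wal:
    enabled: true
    # /loki is the write_1_storage volume mounted in the Compose node above
    dir: /loki/wal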

The ingesters keep logs that haven't been flushed in memory, so if an ingester restarts or crashes, all unflushed logs in memory are lost. The WAL is essentially a way to store those unflushed logs on the filesystem, so that the ingesters can replay them after a restart. This is why the WAL directory needs to persist between container restarts.

Also, with the WAL enabled, ingesters will no longer flush on shutdown by default. You can still force this, of course, with a configuration change (see the sketch below).
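Roughly, that means keeping the WAL directory under the mounted /loki path (as in your guess above) and declaring write_1_storage as a named volume at the top level of the Compose file so it outlives the container, e.g. (a sketch, assuming the rest of your file stays as posted):

volumes:
  write_1_storage:

A named volume like this survives container restarts and recreation unless you remove it explicitly (for example with docker compose down -v).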