Hello everybody!
I've got a kind of strange problem at hand. I use a Docker stack containing Grafana, Loki, and SeaweedFS. I based it on a Docker Compose file from here, but swapped MinIO for SeaweedFS S3 storage because I need to pay attention to licensing in my setup. SeaweedFS works fine with the other containers, but there is one fatal flaw: when I stop the Compose stack, or god forbid the server goes down, Loki sometimes can't write the last index to the storage. After a restart, Loki then looks for an index that it only managed to save to its persistent index directory on my system and that never made it into SeaweedFS, so it can't find it in the S3 bucket. When this happens, all logs from before that point are no longer accessible.
Now the question: What is the best practice to prevent this?
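One thing that has helped me in similar setups is making sure Loki gets a graceful shutdown *before* SeaweedFS goes away: declare `depends_on` so Compose stops Loki first and keeps SeaweedFS reachable while Loki flushes, and raise `stop_grace_period` so Loki isn't SIGKILLed mid-upload (the default grace is only 10s). This is a sketch, not a tested config; the service names, images, and port are assumptions based on the stack described above:

```yaml
# docker-compose.yml (sketch) -- service/image names assumed
services:
  seaweedfs:
    image: chrislusf/seaweedfs
    command: server -s3

  loki:
    image: grafana/loki
    depends_on:
      - seaweedfs          # Compose stops dependents first, so SeaweedFS
                           # is still up while Loki flushes to S3
    stop_grace_period: 2m  # give Loki time to upload the last index
                           # before it gets SIGKILLed
```

You can also trigger a flush manually before a planned shutdown via Loki's HTTP API (`POST /flush` on port 3100), which tells the ingester to flush all in-memory chunks to the backing store.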
For some more intel, here is my Loki config:
```yaml
auth_enabled: false

ingester:
  chunk_idle_period: 2h
  chunk_target_size: 1536000
  max_chunk_age: 2h

server:
  http_listen_port: 3100
  http_server_read_timeout: 10m
  http_server_write_timeout: 10m
  http_server_idle_timeout: 10m
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600

memberlist:
  join_members:
    - loki:7946

limits_config:
  retention_period: 2160h
  max_global_streams_per_user: 1000000
  max_streams_per_user: 1000000
  max_query_series: 2000
  per_stream_rate_limit: 256MB
  ingestion_burst_size_mb: 256
  ingestion_rate_mb: 256
  max_cache_freshness_per_query: 5m
  max_concurrent_tail_requests: 1000000

query_range:
  max_retries: 5
  align_queries_with_step: true
  parallelise_shardable_queries: true
  cache_results: true

schema_config:
  configs:
    - from: 2023-12-12
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

common:
  path_prefix: /loki
  replication_factor: 1
  storage:
    s3:
      endpoint: seaweedfs:8333
      insecure: true
      bucketnames: loki-data
      access_key_id: ******
      secret_access_key: ********
      s3forcepathstyle: true
      region: null
  ring:
    kvstore:
      store: memberlist
```
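On the Loki side, the config above has no ingester WAL, so any data not yet flushed lives only in memory and dies with the container. Enabling the write-ahead log (available since Loki 2.2) lets Loki replay unflushed data after a crash and flush cleanly on shutdown. A sketch of the relevant `ingester` section; the `/loki/wal` path is an assumption chosen to match the `path_prefix` above and must live on a persistent volume:

```yaml
ingester:
  chunk_idle_period: 2h
  chunk_target_size: 1536000
  max_chunk_age: 2h
  wal:
    enabled: true
    dir: /loki/wal            # assumed path; must be on a persistent volume
    flush_on_shutdown: true   # flush in-memory chunks on clean shutdown
```

With the WAL on a persistent volume, even a hard crash should only cost you the replay time on restart rather than access to older logs.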