Issue with Loki and MinIO: Premature Log Deletion and Delayed Deletion After API Usage

Hi Grafana Community,

We are encountering a couple of peculiar issues with our Loki setup that uses MinIO as the object store:

  1. Premature Log Deletion: Logs in MinIO are being automatically deleted after only 20 to 25 minutes, despite our retention_period being configured for 14 hours. This is unexpected behavior.

  2. Delayed Deletion After Delete API: When we attempt to delete logs using Loki’s Delete API, the logs are removed from MinIO within 2 to 3 minutes. However, if we query these deleted logs via Loki’s Get API, they are still visible and only disappear after the same 20 to 25-minute window mentioned above.

  3. Our Desired Outcome:
    a. We need logs to be retained for the full 14 hours as configured, after which they should be automatically deleted.
    b. When we delete logs using the Delete API, we expect them to be removed immediately from both Loki’s query results and MinIO, without the observed delay (the retention/deletion settings involved are sketched below).
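
For context, here is a minimal sketch of how we understand the settings that govern retention and delete-request processing; the deletion_mode and delete_request_cancel_period lines are assumptions based on the documentation and are not in our current values file:

loki:
  limits_config:
    retention_period: 14h             # how long data should be kept before retention removes it
    deletion_mode: filter-and-delete  # assumption: needed so Delete API requests actually remove data
  compactor:
    retention_enabled: true           # the compactor applies both retention and delete requests
    delete_request_store: s3          # where pending delete requests are stored
    retention_delete_delay: 10s       # delay before chunks marked for deletion are removed from object storage
    delete_request_cancel_period: 1h  # assumption: how long a delete request stays cancellable before it is processed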

We would appreciate any insights or assistance in resolving this issue. I am attaching my values file for reference:

Any ideas on what might be going wrong or how to fix it?

Thanks in advance! @tonyswumac @yosiasz

grafana:
  enabled: false
  sidecar:
    datasources:
      enabled: true
      maxLines: 1000

deploymentMode: SingleBinary
singleBinary:
  replicas: 1

backend:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0
ingester:
  replicas: 0
querier:
  replicas: 0
queryFrontend:
  replicas: 0
queryScheduler:
  replicas: 0
distributor:
  replicas: 0
compactor:
  enabled: false
indexGateway:
  replicas: 0
bloomCompactor:
  replicas: 0
bloomGateway:
  replicas: 0

promtail:
  enabled: false

loki:
  enabled: true
  commonConfig:
    replication_factor: 1

  schemaConfig:
    configs:
      - from: "2024-05-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h

  storage_config:
    aws:
      s3: "http://loki-minio.monitoring.svc.cluster.local:9000/loki"
      bucketnames: "protectionlog"
      endpoint: "http://loki-minio.monitoring.svc.cluster.local:9000"
      access_key_id: "minioadmin"
      secret_access_key: "minioadmin"
      region: "us-east-1"
      s3forcepathstyle: true

  limits_config:
    discover_service_name:
    discover_log_levels: false
    allow_structured_metadata: true
    volume_enabled: true
    retention_period: 15h
    max_query_parallelism: 8
    query_timeout: 30s
    split_queries_by_interval: 1h

  ingester:
    chunk_target_size: 262144
    chunk_idle_period: 30s
    max_chunk_age: 1m
    chunk_retain_period: 1m

  compactor:
    retention_enabled: true
    delete_request_store: s3
    retention_delete_delay: 10s

# Disable Loki internal cache systems
chunksCache:
  enabled: false
resultsCache:
  enabled: false

gateway:
  enabled: false

memcachedchunks:
  enabled: false
memcachedfrontend:
  enabled: false
memcachedindexqueries:
  enabled: false
memcachedindexwrites:
  enabled: false

loki-canary:
  enabled: true

minio:
  enabled: true
  rootUser: "minioadmin"
  rootPassword: "minioadmin"
  image:
    repository: minio/minio
    tag: latest
    pullPolicy: Always
  buckets:
    - name: protectionlog
      policy: none
    - name: protectionfaultrecord
      policy: none

Are you sure your Loki logs are actually stored on MinIO? Can you query for logs that are older than, say, 5 hours?

Thanks for the reply @tonyswumac.
No, the logs are getting deleted after 20 to 25 minutes.
Yes, the logs are getting stored in MinIO. For reference:

I have pushed the logs,

and I can see the same logs in the MinIO bucket.

This is the query part.

Check your MinIO storage directly; you should see a "fake" directory (Loki's default tenant ID) containing chunk files if your Loki cluster is indeed writing to MinIO.

Also, in your configuration here:

storage_config:
  aws:
    s3: "http://loki-minio.monitoring.svc.cluster.local:9000/loki"
    bucketnames: "protectionlog"
    endpoint: "http://loki-minio.monitoring.svc.cluster.local:9000"
    access_key_id: "minioadmin"
    secret_access_key: "minioadmin"
    region: "us-east-1"
    s3forcepathstyle: true

Try changing aws to s3.
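
Something like this, reusing your existing values with the block renamed from aws to s3 (an untested sketch of the change, not a verified configuration):

storage_config:
  s3:
    s3: "http://loki-minio.monitoring.svc.cluster.local:9000/loki"
    bucketnames: "protectionlog"
    endpoint: "http://loki-minio.monitoring.svc.cluster.local:9000"
    access_key_id: "minioadmin"
    secret_access_key: "minioadmin"
    region: "us-east-1"
    s3forcepathstyle: true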