During exploration and development of new Alloy processing pipeline configs, I often want to delete the ingested logs and start again with the same logs but a new processing pipeline config.
For this, my current config (loki.yaml) looks as follows:
limits_config:
  reject_old_samples: false
  # disable retention
  retention_period: 0s
compactor:
  # if retention_period=0s, then the delete API is enabled but no retention is happening
  retention_enabled: true
  delete_request_store: aws
  retention_delete_delay: 1h0m0s
  delete_request_cancel_period: 2h0m0s
  delete_max_interval: 3h0m0s
This works nearly perfectly, except for the following:
Let’s assume I ingest (old) logs in a timeframe starting at 2024-08-12T08:00:00 and covering one hour.
Then I make a delete request for the given log stream and timeframe.
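Roughly, the request I send looks like this (host/port, the X-Scope-OrgID tenant header and the stream selector are placeholders for my setup; start/end are the UTC epoch seconds for the example window above):

```sh
# create a delete request for the example stream and time window
# (localhost:3100, tenant "fake" and {job="alloy-test"} are placeholders)
curl -g -X POST \
  'http://localhost:3100/loki/api/v1/delete?query={job="alloy-test"}&start=1723449600&end=1723453200' \
  -H 'X-Scope-OrgID: fake'
```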
- After some time the logs are hidden, because the delete_mode is filter-and-delete, which is exactly what should happen.
- Now I can query the pending delete request under the API /loki/api/v1/delete (see the listing example after this list).
- After some more time the delete job gets executed and the status of the pending delete request changes to processed, all as expected.
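Listing the delete requests and their status works like this for me (host and tenant header are again placeholders):

```sh
# list all delete requests for the tenant; each entry contains the query,
# the start/end timestamps, a request_id and the current status
curl -s 'http://localhost:3100/loki/api/v1/delete' -H 'X-Scope-OrgID: fake'
```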
If I want to try out another ingest pipeline with the same (old) logs, I can ingest them without problems. But if I try to query them in the Grafana GUI, the labels seem to be there, yet NO logs get displayed.
These logs are STILL hidden as an effect of the still existing (but already processed) delete request. This means the hiding is persistent.
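The same can be seen outside of Grafana, e.g. with logcli (address, tenant and stream selector are placeholders again): the stream shows up in the series listing, but a query over the re-ingested window returns no lines.

```sh
export LOKI_ADDR=http://localhost:3100   # placeholder address
export LOKI_ORG_ID=fake                  # placeholder tenant

# the stream/labels are visible ...
logcli series '{job="alloy-test"}'

# ... but querying the re-ingested window returns no log lines
logcli query '{job="alloy-test"}' \
  --from="2024-08-12T08:00:00Z" --to="2024-08-12T09:00:00Z" --limit=100
```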
I tried to delete the delete request via the API, but that’s forbidden (it is only allowed while the request is still pending).
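For completeness, this is the cancellation call I tried, with the request_id taken from the listing above (placeholders as before):

```sh
# attempt to cancel a delete request; this is only accepted while the request
# is still pending. Some Loki versions also accept a force=true parameter,
# but as far as I can tell that only applies to partially completed requests.
curl -X DELETE \
  'http://localhost:3100/loki/api/v1/delete?request_id=<request_id>' \
  -H 'X-Scope-OrgID: fake'
```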
Question: Is this behavior as designed, or is it more likely a bug?
I can work around this by manually deleting the recorded delete requests (see the sketch after this list):
- shutting down Loki
- deleting the object ‘…/index/delete_requests/delete_requests.gz’ in the S3 object store
- deleting the file ‘…/compactor/deletion/delete_requests/delete_requests’ from the persistent store/cache of the pod
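Roughly sketched (the Kubernetes object name/kind, the bucket name and the local working directory are placeholders for my setup; the real paths are the truncated ones from the list above):

```sh
# 1) stop Loki (assuming a single-binary StatefulSet called "loki")
kubectl scale statefulset loki --replicas=0

# 2) delete the persisted delete requests from the S3 object store
#    (bucket name is a placeholder)
aws s3 rm "s3://<loki-bucket>/index/delete_requests/delete_requests.gz"

# 3) delete the compactor's local copy on the persistent volume, e.g. from a
#    temporary pod that mounts the same PVC:
#    rm <working-dir>/compactor/deletion/delete_requests/delete_requests

# 4) start Loki again
kubectl scale statefulset loki --replicas=1
```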
Is there a better solution to my problem?