Hi all!
Loki is deployed using the Helm chart (6.2.1).
Our release process involves deleting old namespaces and creating new ones. The problem is that I can’t retrieve logs from Loki using the old namespace filter, even if the time range is set to when the namespace was present in the cluster.
Is there any way to fix this so I can view historical data regarding namespaces that existed at the time?
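For context, I query with a plain namespace selector in Grafana Explore, e.g. `{namespace="my-old-namespace"}` (the name is just a placeholder), with the time range set to a window when the namespace still existed.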
Are you saying older logs on your Loki cluster go away? That should not happen unless it’s misconfigured.
It might be a misconfiguration, but I can’t pinpoint where. The only things I’ve changed in the chart are data persistence for ‘write’, ‘backend’ and ‘minio’, some security values, and the ingestion rate for ‘write’.
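Concretely, the persistence changes were along these lines (a sketch from memory; the sizes are examples and the exact keys may differ between chart versions):

```yaml
write:
  persistence:
    size: 10Gi   # example size, not my actual value
backend:
  persistence:
    size: 10Gi
minio:
  persistence:
    size: 20Gi
```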
Two problems:
- I can’t see logs from containers that no longer exist in the cluster.
- Sometimes Promtail hangs and I have to restart it to get it pushing logs again; all logs from before the hang are lost.
Actually, disregard the second problem.
My main problem is that Loki seems to drop the logs of containers that no longer exist. I can’t find the value that would configure that behaviour.
These are the relevant values I have set:
```yaml
tableManager:
  retention_deletes_enabled: false
  retention_period: 0
limits_config:
  reject_old_samples: false
  reject_old_samples_max_age: 168h
  max_cache_freshness_per_query: 10m
  split_queries_by_interval: 15m
  query_timeout: 300s
  ingestion_rate_mb: 8
  ingestion_burst_size_mb: 16
```
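From what I’ve read, with the 6.x chart (Loki 3.x) log retention is handled by the compactor rather than the table manager, which only applies to the legacy table-based index stores. So maybe these settings matter too; a minimal sketch, assuming the Loki config is set under the chart’s `loki:` key:

```yaml
loki:
  compactor:
    retention_enabled: false   # when true, the compactor deletes chunks older than retention_period
  limits_config:
    retention_period: 0s       # 0s (or 0) disables time-based retention
```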
Once again, I would like to ask about this.
Could the problem be related to labels or something similar?
My guess is that Loki either deletes the labels associated with deleted namespaces, or does not keep logs for workloads that are no longer present in the Kubernetes cluster.
I would appreciate a small hint.
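One way to check this theory would be to ask Loki directly which values it has indexed for the `namespace` label over a past time range; if the old namespace shows up, the data should still be there and the problem would be on the query side. A sketch using the label values API (the gateway URL and timestamps are placeholders for my setup):

```bash
# List the values Loki has indexed for the "namespace" label in a given range.
# start/end are Unix epoch nanoseconds; point the URL at your Loki gateway service.
curl -G "http://loki-gateway/loki/api/v1/label/namespace/values" \
  --data-urlencode "start=1714000000000000000" \
  --data-urlencode "end=1714600000000000000"
```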
I have the same issue. I want to query the logs of deleted pods, but when I do that in Grafana it says no logs found.