I’m using Loki with AWS S3 as the backend for long-term storage of my indexed log data. I have implemented an S3 lifecycle policy that transitions my Loki files in the ‘fake/’ and ‘index/’ directories to the Glacier Deep Archive storage class after a certain period to reduce storage costs.
However, I’m encountering an issue where Loki is unable to write new files to these directories once the existing files have been transitioned to Glacier Deep Archive. The specific error messages I’m getting are “failed to run compaction” and “failed to get s3 object: InvalidObjectState: The operation is not valid for the object’s storage class”.
From my understanding, Glacier Deep Archive requires objects to be restored before they can be accessed, which can take up to 12 hours. However, I’m unclear as to why this is impacting Loki’s ability to write new files to the S3 bucket.
Could you please provide insight into why this might be happening and how to best handle this situation? Would using a different S3 storage class for my Loki files resolve this problem?
Any advice or guidance would be greatly appreciated.
You were absolutely correct! The lifecycle policy was indeed moving both indexes and chunks from the same day, which was causing Loki’s compaction to fail since it was trying to access chunks that were no longer available. I’ve now updated the policy to ensure it only transitions files that are older than 10 days, and everything seems to be working well.
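For anyone hitting the same problem, here is a minimal sketch of the kind of lifecycle configuration I ended up with. The rule names and prefixes are placeholders for my setup, and the 10-day threshold should stay comfortably beyond whatever window Loki’s compactor still needs to touch:

```python
# Sketch of an S3 lifecycle configuration that transitions Loki objects
# older than 10 days to Glacier Deep Archive. Prefixes, rule IDs, and
# the threshold are placeholders; adjust them to your deployment.
def loki_archive_rule(prefix: str, days: int = 10) -> dict:
    """Build one lifecycle rule covering the given key prefix."""
    return {
        "ID": f"loki-deep-archive-{prefix.strip('/')}",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Transitions": [
            {"Days": days, "StorageClass": "DEEP_ARCHIVE"},
        ],
    }

# One rule per Loki directory that should be archived.
lifecycle_config = {
    "Rules": [loki_archive_rule("fake/"), loki_archive_rule("index/")],
}
```

If you manage the bucket from Python, a dict like this can be applied with boto3’s `put_bucket_lifecycle_configuration`; the same shape works as JSON for `aws s3api put-bucket-lifecycle-configuration`.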
I do appreciate your guidance on this matter. However, I’m now facing a new challenge. When I attempt to restore objects from the archive, I’m unable to identify which chunks correspond to a specific day. I need to know this to restore only the relevant objects.
Do you know of any best practices or standard procedures for this situation? I could certainly use some advice.
I’ve seen similar questions asked here before; unfortunately, I don’t think there was ever a straightforward, satisfying answer, and since I haven’t had this use case myself, I haven’t really researched a solution.
Perhaps someone with more experience can comment, but if I had to do this I’d probably try to retrieve the index from cold storage first, then use Loki’s code from GitHub to read and parse the index file; that should hopefully provide a list of chunks that need to be retrieved.
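One possible shortcut that avoids parsing the index at all: if the chunks were written with Loki’s usual external-key layout (`<tenant>/<fingerprint>/<from>:<through>:<checksum>`, with `from`/`through` as hexadecimal Unix-epoch milliseconds — an assumption worth spot-checking against a few keys in your bucket), the time range is recoverable from the object key alone. A rough sketch:

```python
from datetime import date, datetime, timedelta, timezone

def chunk_time_range(key: str) -> tuple:
    """Parse (from, through) out of a chunk key shaped like
    '<tenant>/<fingerprint>/<from>:<through>:<checksum>', where
    <from>/<through> are hex Unix-epoch milliseconds. This layout is
    an assumption; verify it against real keys in your bucket."""
    from_hex, through_hex, _checksum = key.rsplit("/", 1)[-1].split(":")

    def to_utc(h: str) -> datetime:
        return datetime.fromtimestamp(int(h, 16) / 1000, tz=timezone.utc)

    return to_utc(from_hex), to_utc(through_hex)

def overlaps_day(key: str, day: date) -> bool:
    """True if the chunk's time range touches the given UTC day."""
    day_start = datetime(day.year, day.month, day.day, tzinfo=timezone.utc)
    day_end = day_start + timedelta(days=1)
    frm, thru = chunk_time_range(key)
    return frm < day_end and thru >= day_start
```

Listing the chunk prefix (e.g. with `aws s3api list-objects-v2`) and filtering keys through `overlaps_day` would then give the set of objects to feed to `aws s3api restore-object` before querying that day.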