Loki split-brain: cluster node ended up with a local datastore

Hi, I'm reaching out to the community to figure out this issue.

We are running 3 monolithic Loki nodes shipping about 24 GB of logs per day to S3 storage. A few days ago one of the nodes had a problem: AWS detected a dead server, which it helpfully replaced with a new one and added to the load balancer pool. The only problem was that this new node apparently started up with the default config file instead of ours. As a result, it stored all received data on the local filesystem for a while, until we detected it and restarted it with the correct config file.
The problem now is that 30% of the logs for that period sit in a local directory (/tmp/loki/), unavailable for searching, and I would like to move them to S3.

I don’t think there is an easy answer for this.

I have never tried this before, but you might be able to simply upload both the index files and the chunk files to S3. There shouldn't be a conflict, but I can't guarantee that this will work, and messing with the storage directly seems risky.
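If you want to try the direct-upload route, something like the sketch below is the general shape. This is untested; the bucket name is a placeholder, and the assumption that your local `/tmp/loki/` layout (chunks and index subdirectories) maps one-to-one onto your S3 prefix layout may not hold for your schema/config. It builds the commands and prints them first so you can inspect them before running anything.

```shell
# Sketch only: bucket name and directory layout are assumptions,
# adjust both to match your actual storage_config.
SRC=/tmp/loki
BUCKET="s3://YOUR-BUCKET"   # hypothetical bucket name

# Build the commands first and inspect them before running anything.
CMD_CHUNKS="aws s3 sync $SRC/chunks $BUCKET/chunks"
CMD_INDEX="aws s3 sync $SRC/index $BUCKET/index"

echo "$CMD_CHUNKS"
echo "$CMD_INDEX"

# Once you are happy (and have a backup copy of /tmp/loki),
# run them with:
#   eval "$CMD_CHUNKS" && eval "$CMD_INDEX"
```

Take a backup of `/tmp/loki/` before touching anything, so you can fall back to the second approach if this doesn't work.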

Another option is to copy your local data (both index and chunks) to another server and stand up a local Loki instance there. Once that's done, you should be able to query that local Loki instance for the missing logs, write a script to extract everything, and push it to your actual Loki server. More work, but I think it's safer.
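The extract-and-replay step could look roughly like the sketch below: export the missing window from the throwaway instance with `logcli`, then replay it into production via the push API. The addresses, the time window, and the label selector are all placeholders, and converting the exported JSON lines into a push-API payload is left to a script of your own. The commands are only echoed here so you can review them first.

```shell
# Sketch, not verified end-to-end. All addresses and times are assumptions.
TMP_LOKI="http://localhost:3100"          # throwaway instance with the local data
PROD_LOKI="http://loki.example.com:3100"  # hypothetical production endpoint
SELECTOR='{job=~".+"}'                    # match-all selector; narrow it as needed

# 1) Export the missing window as JSON lines.
#    Placeholder times; raise --limit (or batch by time range) to cover everything.
EXPORT="logcli --addr=$TMP_LOKI query '$SELECTOR' --from='2024-05-01T00:00:00Z' --to='2024-05-02T00:00:00Z' --limit=100000 --output=jsonl > missing.jsonl"

# 2) Replay into production via the push API, after converting
#    missing.jsonl into push-payload.json with your own script.
REPLAY="curl -X POST -H 'Content-Type: application/json' --data-binary @push-payload.json $PROD_LOKI/loki/api/v1/push"

echo "$EXPORT"
echo "$REPLAY"
```

One thing to watch for: depending on your Loki version and whether out-of-order writes are accepted, pushing old timestamps into streams that have since received newer entries may be rejected, so test with a small slice first.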