In our use case, we use Loki to capture logs from the customer's system. When support is requested, we export the database content and import it into our internal Loki instance so we can provide support to the customer.
Our current solution uses the Loki API to export the data and then push it into our internal instance.
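Roughly, the export/push flow looks like this (a simplified sketch only; the hostnames, the label selector, and the time range are placeholders, and we rely on the standard query_range and push endpoints):

```python
import requests

SOURCE = "http://customer-loki:3100"   # placeholder: customer instance
TARGET = "http://internal-loki:3100"   # placeholder: our internal instance

# Export one slice of logs from the customer instance.
resp = requests.get(
    f"{SOURCE}/loki/api/v1/query_range",
    params={
        "query": '{job=~".+"}',          # placeholder selector
        "start": "1700000000000000000",  # nanosecond timestamps (placeholders)
        "end":   "1700003600000000000",
        "limit": 5000,
        "direction": "forward",
    },
)
resp.raise_for_status()
streams = resp.json()["data"]["result"]

# Re-push the exported streams into our internal instance.
push_body = {
    "streams": [
        {"stream": s["stream"], "values": s["values"]}
        for s in streams
    ]
}
requests.post(f"{TARGET}/loki/api/v1/push", json=push_body).raise_for_status()
```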
Is there a way to do this more efficiently? Can we copy the “Loki database” files directly and use them as the base for another Loki instance? The challenge here seems to be the state kept in memory. We are okay with missing, say, the last 10 minutes of logs and would call the “flush” operation just before copying the files. Is that something that would work?
I think the API is probably your best bet. You can copy the files directly, but you’d have to parse the index to find out which chunk files to copy, and you’d have to make sure your target Loki cluster has an identical schema configuration.
This is a rather unusual use case, though. You probably have a good reason for it, but I am curious why you aren’t using your client’s Loki cluster directly. You seem to have access already (since you can export).
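For example, the schema_config block would have to line up on both clusters (store type, schema version, index period, and so on); the values below are only an illustration, not a recommendation:

```yaml
schema_config:
  configs:
    - from: 2024-01-01        # illustrative date
      store: tsdb             # must match on both clusters
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
```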
We cannot export ourselves. The client runs the provided JAR file that contains the export logic (or triggers it from the UI). We do not have access to the client’s clusters, so support is an “offline” process: we only have access to what the client provides.
Is there a recommendation somewhere on the most efficient way to export in batches? The problem we are facing is the vast number of iterations: in big installations the logs exported for the last seven days exceed 40 GB, so being able to export only about 4 MB in a single call is limiting. With some customization we could probably do much better.
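For context, what we are experimenting with is slicing the time range into windows and paginating within each window; a rough sketch (the window size, limit, selector, and host are our own assumptions, not anything prescribed by Loki):

```python
import requests

SOURCE = "http://customer-loki:3100"    # placeholder
SELECTOR = '{job=~".+"}'                # placeholder selector
WINDOW_NS = 15 * 60 * 1_000_000_000     # 15-minute windows (assumption)
LIMIT = 5000                            # entries per call (assumption)

def export_window(start_ns: int, end_ns: int):
    """Yield result batches for [start_ns, end_ns) by paginating on timestamps."""
    cursor = start_ns
    while cursor < end_ns:
        resp = requests.get(
            f"{SOURCE}/loki/api/v1/query_range",
            params={
                "query": SELECTOR,
                "start": str(cursor),
                "end": str(end_ns),
                "limit": LIMIT,
                "direction": "forward",
            },
        )
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        if not result:
            break
        yield result
        entries = sum(len(s["values"]) for s in result)
        if entries < LIMIT:
            break
        # Advance the cursor past the newest entry seen so far.
        # (Simplification: entries sharing that exact nanosecond could be skipped.)
        newest = max(int(ts) for s in result for ts, _ in s["values"])
        cursor = newest + 1

def export_range(start_ns: int, end_ns: int):
    """Walk the full range window by window to keep each response small."""
    for w_start in range(start_ns, end_ns, WINDOW_NS):
        yield from export_window(w_start, min(w_start + WINDOW_NS, end_ns))
```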
@tonyswumac - could you please elaborate on “You can copy the files directly, but you’d have to parse the index to find out which chunk files to copy”? Can’t we just copy the entire directory? Is copying extra chunks problematic in some way? We are interested in a dump of the whole database.
Note: we have complete control over both Loki instances (the client’s and ours) - Loki is provided as part of our deployment, so the schema should not be a problem.
If that’s the case, you should be able to copy everything over, chunks and index files. You can just give it a try and see if it works, provided you don’t mind potentially messing up your internal Loki cluster. I’ve never tried this personally, so I could be missing something.
I was able to successfully download the Loki storage directory and read it from the local cluster.
The remaining question is: how much data is held in memory and not yet saved to the underlying storage? Is calling the “flush” API and waiting a minute enough to get all logs (up to that moment) written to the file system so we can copy them?
Is shutting down the Loki pod (scaling the deployment down) enough to get the in-memory data written to the file system? We copy the data from a sidecar container, so “stop the world” is fine - that is probably required for data consistency anyway.
You should be able to use the flush API to force the ingesters to flush to chunk storage, yes.
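Something along these lines is what I would try before copying the storage directory (the host is a placeholder, the request must reach the ingester component, and the fixed wait is just a guess; I have not measured how long a flush actually takes):

```python
import time
import requests

LOKI = "http://customer-loki:3100"   # placeholder; must reach the ingester

# Ask the ingesters to flush their in-memory chunks to the backing store.
requests.post(f"{LOKI}/flush").raise_for_status()

# Give the flush time to finish before copying the storage directory.
# (Assumption: a fixed wait is good enough here; adjust for your data volume.)
time.sleep(60)
```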
I am not quite sure why you are worried about consistency. If you have to make sure both clusters are the same, then I suppose the easiest solution is to just shut down your source cluster.