Loki config optimized for backfilling

Hi all,

I want to analyse logs gathered from an on-premises application. I got promtail to parse our log format fine so far and am now struggling to find a “sensible” configuration for Loki. Concretely, we’re talking about 15 instances with 3 GB of uncompressed logs each.

So far I have tried the docker-compose setup with minio (simple scalable deployment) as well as the single binary, on a 16-core / 32 GB RAM VM with the more or less “standard” config from the documentation. It seems like promtail feeds data in much faster than Loki can handle it. Note that I push the logs in time order and the instance is a label, so even though the logs are from the past they arrive in order - at least per label value.
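One thing I found that might help others with the same problem: Promtail has a `limits_config` section that can throttle how fast it reads lines, so Loki isn’t overwhelmed. A sketch of what I mean (the rate values are just illustrative guesses, not recommendations):

```yaml
# Promtail config fragment - throttle read speed so Loki can keep up.
limits_config:
  readline_rate_enabled: true
  readline_rate: 5000       # max lines per second (illustrative value)
  readline_burst: 10000     # allowed burst above the steady rate
  readline_rate_drop: false # false = apply backpressure instead of dropping lines
```

With `readline_rate_drop: false` Promtail should block rather than discard lines, which seems safer for a backfill where every line matters - though I haven’t verified this end to end.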

Is there a general guideline on what should be tweaked for such a backfill-only setup? Or do I need to limit the speed at which promtail ingests? I played a bit with the chunk_idle settings as well as some max limits, but after a few minutes RAM seems to be exhausted :slight_smile:
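For reference, the kind of settings I have been experimenting with on the Loki side looks roughly like this (the numbers are my guesses, not a known-good config; the one part I’m fairly sure matters for backfilling is allowing old timestamps at all):

```yaml
# Loki config fragment - settings I believe are relevant for backfilling.
limits_config:
  reject_old_samples: false     # must be off, or historical timestamps are refused
  ingestion_rate_mb: 16         # per-tenant ingest rate limit (illustrative)
  ingestion_burst_size_mb: 32

ingester:
  chunk_idle_period: 2h         # how long an idle stream stays in memory
  max_chunk_age: 2h             # force-flush chunks to cap ingester memory
  chunk_target_size: 1572864    # ~1.5 MB compressed target per chunk
```

I’d be happy to hear whether these are even the right knobs for this use case.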

As a newbie it’s a bit hard to understand all the detailed config settings, especially as this is not “the standard use case”.

Greetings
maybebuggy
