When running the “docker-compose” version of Loki, all queries run on a single CPU core, which is inefficient, especially when you have multiple cores available (and Go is really good at doing parallel tasks).
I found this article, but it’s for running Loki in k8s.
And I guess most bigger installations have moved to k8s already.
I tried adding some config options from the Slack channel, but that had no effect on parallelism AFAICT. From the post above, I gather that there also need to be more `frontend_worker` processes?
```yaml
query_range:
  align_queries_with_step: true
  max_retries: 10
  split_queries_by_interval: 15m
  split_queries_by_day: true
  parallelise_shardable_queries: true
  cache_results: true
  results_cache:
    cache:
      redis:
        endpoint: loki:6379
        expiration: 86400s
        db: 1
        timeout: 60s
```
If possible, how would a single-binary config with more parallel “search workers” look?
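For context, here is a rough sketch of what I imagine such a single-binary config might look like. This is only a guess: the key names are taken from Loki’s configuration reference (`querier`, `frontend_worker`, `query_range`), but the values and the assumption that the worker should dial the local gRPC port are mine, so please correct me if this is wrong.

```yaml
# Hedged sketch, not a verified config -- check key names and defaults
# against the configuration reference for your Loki version.
querier:
  # how many queries one querier process may execute concurrently
  max_concurrent: 8

frontend_worker:
  # assumption: in single-binary mode the worker connects to the
  # in-process query frontend over the local gRPC port (9095 by default)
  frontend_address: 127.0.0.1:9095
  # number of worker goroutines pulling split sub-queries off the queue;
  # match_max_concurrent would instead tie this to querier.max_concurrent
  parallelism: 8

query_range:
  # split long queries into 15m chunks so they can run in parallel
  split_queries_by_interval: 15m
  parallelise_shardable_queries: true
```

The idea, as I understand it, is that `split_queries_by_interval` only creates the sub-queries; without enough workers (`frontend_worker.parallelism` / `querier.max_concurrent`) they still execute more or less serially.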