How to tune Loki for better performance in a benchmark?

The setup defined at GitHub - SigNoz/logs-benchmark (comparing Elastic, Loki, and SigNoz) failed to run Loki successfully for almost all of the queries listed in the blog post "SigNoz - Logs Performance Benchmark". Can someone from the team help configure Loki correctly so that accurate results can be published?

Hello Ankit!

Thank you for reaching out, we appreciate you trying to give a fair review of Loki in your benchmarks.

I’m afraid we must decline to support this effort. We’ve done benchmarking of this style in the past ourselves, including building a k6 extension for testing Loki: GitHub - grafana/xk6-loki.

Our experience is that flog is not a good representation of actual business workloads. At best it represents one type of workload, something I might call “access logs”: highly repetitive, well structured, and consistent in volume over time.
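For illustration, the shape of that workload can be sketched like this. This is a toy generator, not flog's actual implementation; the field values and formats here are made up, but the point stands: every line shares one structure, only a few fields vary, and volume is steady.

```python
import random
import datetime

# Illustrative only: an "access log" style workload where every line has the
# same structure and only a handful of fields vary between lines.
METHODS = ["GET", "POST", "PUT", "DELETE"]
PATHS = ["/api/users", "/api/orders", "/health", "/login"]

def access_log_line(rng: random.Random) -> str:
    # Fixed timestamp for reproducibility of the sketch.
    ts = datetime.datetime(2023, 1, 1, 12, 0, 0).strftime("%d/%b/%Y:%H:%M:%S +0000")
    ip = f"10.0.{rng.randint(0, 255)}.{rng.randint(1, 254)}"
    method = rng.choice(METHODS)
    path = rng.choice(PATHS)
    status = rng.choice([200, 200, 200, 404, 500])  # mostly successes
    size = rng.randint(100, 5000)
    return f'{ip} - - [{ts}] "{method} {path} HTTP/1.1" {status} {size}'

rng = random.Random(42)
lines = [access_log_line(rng) for _ in range(5)]
for line in lines:
    print(line)
```

Real-world logging is rarely this uniform: multi-line stack traces, bursty volume, and free-form messages all break the pattern above, which is the crux of the objection to benchmarking with flog alone.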

Some databases will excel at this use case more than others; I suspect ClickHouse is one of them. With a properly defined schema that maps the structured data to columns, I would expect a columnar database to perform extremely well on this type of workload!
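To make the intuition concrete, here is a toy sketch (plain Python, not ClickHouse) of why a columnar layout suits well-structured access logs: a query that filters on one field only needs to scan that field's column, never the rest of each record.

```python
# Row-oriented: each record is a complete dict; a query walks every full record.
rows = [
    {"method": "GET", "path": "/api/users", "status": 200, "bytes": 512},
    {"method": "POST", "path": "/api/orders", "status": 500, "bytes": 128},
    {"method": "GET", "path": "/health", "status": 200, "bytes": 64},
]

# Column-oriented: the same data, one contiguous list per field.
columns = {
    "method": [r["method"] for r in rows],
    "status": [r["status"] for r in rows],
    "bytes": [r["bytes"] for r in rows],
}

# The equivalent of "SELECT count(*) WHERE status = 200" touches only the
# status column; method, path, and bytes are never read.
ok_count = sum(1 for s in columns["status"] if s == 200)
print(ok_count)  # 2
```

This only works when the data is structured enough to map cleanly onto a fixed schema, which is exactly the property the flog-style workload has and many real logging workloads lack.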

But this is really only one type of workload, and most people have many others. Logging workloads are incredibly varied and dynamic; as such, we feel investing time in this kind of benchmark has limited value and poorly represents real-world logging use cases.

The only way to know whether a database will meet your needs is to test your own workloads and use cases directly on the databases you are considering. That is the only way to understand the trade-offs you are making; speed is only one variable in determining what’s best for your data and your users.

Thank you for giving us the opportunity to participate, and I hope you can understand why we don’t see the same value in putting our time towards this.

Got it. I understand. Thanks for replying though.