Grafana Explore UI very slow for Loki queries that fetch more than 40,000 log lines

I am experiencing slowness in the Grafana Explore section when searching for Loki logs. Grafana and Loki are running in a Kubernetes environment, deployed with the kube-prometheus-stack and loki-distributed Helm charts.

Queries over a long time range do return the logs, but if more than 30,000-40,000 log lines come back, the Grafana UI becomes very slow and takes a while to load. Is there any setting I can change to improve this? The Grafana UI is exposed via an ingress from an nginx ingress controller.

Hi @saikiran1208, just to clarify: are you actually showing 30k-40k lines in Explore (i.e. you have set Maximum lines in the Loki datasource to 30k-40k), or do you have Maximum lines at the default 1k and Loki reports that the total number of log lines found is 30k-40k?

If the first one, why do you want to show so many log lines?

If the second, fetching 40k log lines from Loki should be quite quick. How long is "very slow and takes a while to load" in practice?

Hi @b0b, the Maximum lines setting in the Loki datasource is set to 40k. We use Loki as the main logging solution for our microservices running in Kubernetes, and we wanted the logs to diagnose a specific issue that happened in a specific time frame.

I was querying with the default label filters on namespace and pod, plus a filter for specific text in the logs, to troubleshoot the issue. I am very new to Loki, and our use case is being able to fetch logs for troubleshooting purposes.
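
For reference, the query was roughly of this shape (the label values and search text here are placeholders, not the real ones):

```
{namespace="payments", pod=~"checkout-.*"} |= "connection timeout"
```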

Loki itself seems to load the log lines fine (see the stats below). However, once the logs are rendered in the Grafana Explore UI, the UI becomes very slow: moving around the page, using the download button, or resetting the filters all take a long time.

Stats
Total request time: 3.15 s
Data processing time: 0.300 ms
Number of queries: 1
Total number rows: 40000

Data source stats
Summary: bytes processed per second: 58.0 MB/s
Summary: lines processed per second: 191181
Summary: total bytes processed: 64.9 MB
Summary: total lines processed: 213987
Summary: exec time: 1.12 s
Ingester: total reached: 3
Ingester: total chunks matched: 4
Ingester: total batches: 582
Ingester: total lines sent: 74213
Ingester: head chunk bytes: 0 B
Ingester: head chunk lines: 0
Ingester: decompressed bytes: 0 B
Ingester: decompressed lines: 0
Ingester: compressed bytes: 0 B
Ingester: total duplicates: 0

Am I doing anything wrong here, or is there a better way to fetch the logs from Loki via Grafana Explore without having to render them all in the UI? I am trying to make this as user friendly as possible so that users can self-serve.

Thanks for the additional info :slight_smile: Those stats look quite good as far as I can tell.

I completely understand what you are attempting. I have similar discussions with our developers constantly.

I would challenge anyone to give a valid reason for loading more than 1000 log lines in a web UI when troubleshooting anything. At least for me, it takes a long time to read even 100 log lines. If you load 40000 log lines into the UI, will anyone ever actually read all of them?

For me, with Loki (and the same goes for Elastic Stack/Kibana), it really comes down to the quality of the log query. It takes some time to learn how to query Loki effectively, but once you know how to query for exactly the logs you are interested in, 1000 log lines should be plenty.
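
For example, rather than pulling everything from a namespace and scrolling, you can usually get the result set well under 1000 lines with line filters and parsers. The labels, fields, and values below are made up purely for illustration:

```
{namespace="payments", container="checkout"}
  |= "order-12345"
  | json
  | level="error"
```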

Another thing I have noticed with devs who come from the Elastic Stack: they expect to be able to see log volume over time, across long time ranges, with normal log queries. That is not something that works very well in Loki; you should use metric queries for that.
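
Something along these lines (the selector is again just a placeholder) gives you volume per pod bucketed over time instead of raw lines:

```
sum by (pod) (count_over_time({namespace="payments"} |= "timeout" [5m]))
```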

If you really need to fetch a lot of logs from Loki, for example to export them somewhere, then LogCLI is a better tool for that.
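
A minimal sketch of that, assuming logcli can reach your Loki gateway (the address, query, time range, and output file are all placeholders):

```
export LOKI_ADDR=http://loki-gateway.loki.svc.cluster.local:3100

logcli query '{namespace="payments", pod=~"checkout-.*"}' \
  --from="2024-01-01T00:00:00Z" \
  --to="2024-01-01T06:00:00Z" \
  --limit=40000 \
  --output=raw > checkout-logs.txt
```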

These are just my own opinions. I know there are many who think differently.


Thanks @b0b, I tried logcli and it seems to work fine for our needs. My 2 cents: ideally the Grafana Explore UI would be able to handle this without slowing down, since a single interface for devs would be better than using a separate CLI tool to fetch long-term logs. I guess it's a trade-off we can live with for now. Appreciate your help and input.
