Grafana UI not loading for 50,000 Loki log lines

We are currently running Grafana/Loki/Promtail for the pod logs in our cluster. Our application pods generate about 100,000 log lines per minute. The default query limit in Loki is 1,000 log lines, so in the Loki configuration we increased max_entries_limit_per_query to 2000000000000 and the maximum gRPC receive size to 10GB, but when we query the logs from the Grafana UI it fails to respond.

auth_enabled: false
chunk_store_config:
  max_look_back_period: 0s
compactor:
  compaction_interval: 10m
  shared_store: filesystem
  working_directory: /data/loki/boltdb-shipper-compactor
ingester:
  chunk_block_size: 262144
  chunk_idle_period: 3m
  chunk_retain_period: 1m
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  max_transfer_retries: 0
  wal:
    dir: /data/loki/wal
limits_config:
  cardinality_limit: 200000
  enforce_metric_name: false
  max_chunks_per_query: 200000
  max_entries_limit_per_query: 2000000000000
  max_label_value_length: 4096
  max_query_length: 721h
  max_streams_matchers_per_query: 2000000000000
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: 744h
  split_queries_by_interval: 24h
memberlist:
  join_members:
  - 'loki-memberlist'
schema_config:
  configs:
  - from: "2020-10-24"
    index:
      period: 24h
      prefix: index_
    object_store: filesystem
    schema: v11
    store: boltdb-shipper
server:
  grpc_listen_port: 9095
  http_listen_port: 3100
  grpc_server_max_recv_msg_size: 10737418240
  grpc_server_max_send_msg_size: 10737418240
  http_server_read_timeout: 600s
  http_server_write_timeout: 600s
  http_server_idle_timeout: 10m
storage_config:
  boltdb_shipper:
    active_index_directory: /data/loki/boltdb-shipper-active
    cache_location: /data/loki/boltdb-shipper-cache
    cache_ttl: 24h
    shared_store: filesystem
  filesystem:
    directory: /data/loki/chunks
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s


We have tried increasing the resources of the Grafana pod to 4.5 CPU and 16GB RAM, but the pod's actual resource consumption stays at around 1% of what is allocated.
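
For reference, the resource bump on the Grafana pod looked roughly like this excerpt from the Deployment's container spec (the limits match what we set; the request values below are only illustrative):

# Illustrative excerpt from the Grafana Deployment container spec.
# Limits reflect the 4.5 CPU / 16GB we configured; requests are assumed values.
resources:
  requests:
    cpu: "1"
    memory: 4Gi
  limits:
    cpu: "4500m"   # 4.5 CPU
    memory: 16Gi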

We also increased the Grafana timeout in the data source, but the issue still exists.
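
The timeout was raised through the Loki data source settings. As a rough sketch, assuming the data source were provisioned from YAML instead of edited in the UI (the URL and timeout value here are illustrative), it corresponds to the jsonData.timeout field, in seconds:

# Hypothetical Grafana data source provisioning file (sketch only).
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    jsonData:
      timeout: 600   # HTTP request timeout in seconds for this data source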

The UI is not designed to handle this kind of log load. The default is 1k lines (see Configure the Loki data source | Grafana documentation), and the doc mentions explicitly:

Decrease the limit if your browser is sluggish when displaying log results.

Your only option to improve things (and it still may not solve the problem) without excluding some logs is the PC where the browser loads the Grafana UI: use more powerful hardware. But then you will want to load 100k log lines, so this approach doesn't scale.
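
The limit in question is the Loki data source "Maximum lines" setting (maxLines, default 1000). If the data source is provisioned from YAML, a minimal sketch of lowering it looks like this (URL and value are just examples):

# Sketch only: lowering the Loki data source line limit via provisioning.
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    jsonData:
      maxLines: 500   # decrease if the browser is sluggish when displaying results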


@jangaraj Thanks for your quick response.
As you mentioned, we agree with the first point. As for the second point, it will be tough for us to increase the resources of the DEV team members' PCs.

Note: Just for information, we are able to retrieve 100,000 (1 lakh) log lines through the API call below, but it would have been better to have this in Grafana itself. Is it possible to modify the Grafana UI page size in the configuration file?

curl -v -H 'X-Scope-OrgId: POC' --max-time 3600 \
  'http://loki:3100/loki/api/v1/query_range?direction=backward&end=1728889440000000000&limit=100000&query=%7Bpod%3D%22<pod_name>%22%7D+%7C%3D+%60%60&start=1728869400000000000&step=60000ms' \
  | jq -r '.data.result[].values[][1]'