Hi community,
I have set up multiple Promtail services on our servers to push logs to a centralised Loki service hosted in Docker. The problem arises when the log volume exceeds 100,000 lines per second: the Grafana dashboard shows an error when querying data from Loki. The errors from the Loki logs are:
The dashboard errors are shown in the attachment:

The current Loki configuration is as follows:
server:
  http_listen_port: 3100
  grpc_server_max_recv_msg_size: 16777216
  grpc_server_max_send_msg_size: 16777216

common:
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

query_scheduler:
  max_outstanding_requests_per_tenant: 8192

frontend:
  max_outstanding_per_tenant: 8192
  log_queries_longer_than: 10s
  compress_responses: true

query_range:
  parallelise_shardable_queries: true
  align_queries_with_step: true
  cache_results: true

limits_config:
  split_queries_by_interval: 15m
  max_query_length: 0h
  max_query_parallelism: 32
  ingestion_rate_strategy: local
  ingestion_rate_mb: 32
  ingestion_burst_size_mb: 64
  max_streams_per_user: 0
  max_entries_limit_per_query: 5000000
  max_global_streams_per_user: 0
  cardinality_limit: 200000
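In case it is relevant: the Promtail side is essentially left at its defaults. A sketch of the client section as I understand it is below — the values shown are the Promtail defaults (per its documentation), not settings I have tuned, and the URL is a placeholder for my actual endpoint:

clients:
  - url: http://loki:3100/loki/api/v1/push   # placeholder, not my real endpoint
    batchwait: 1s          # max time to wait before sending a batch
    batchsize: 1048576     # max batch size in bytes before a push is forced
    backoff_config:        # retry behaviour when Loki returns 429/5xx
      min_period: 500ms
      max_period: 5m
      max_retries: 10

If any of these should be raised for a 100k lines/sec workload (e.g. larger batches, longer backoff), pointers would be appreciated.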
The server spec is relatively high for this use case: 16 cores and 64 GB of RAM.
Is there any configuration missing or wrong that could result in such an error?
Thanks