We are currently running Grafana, Loki, and Promtail to collect logs from the pods in our cluster. Our application pods generate around 100,000 log lines per minute, and Loki's default limit of 1000 log lines per query was too low for us, so in the Loki configuration we raised the per-query entry limit to 2000000000000 and the maximum received message size to 10 GB. However, when we try to fetch logs from the Grafana UI, it is not able to respond. The full Loki configuration is below:
auth_enabled: false
chunk_store_config:
  max_look_back_period: 0s
compactor:
  compaction_interval: 10m
  shared_store: filesystem
  working_directory: /data/loki/boltdb-shipper-compactor
ingester:
  chunk_block_size: 262144
  chunk_idle_period: 3m
  chunk_retain_period: 1m
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  max_transfer_retries: 0
  wal:
    dir: /data/loki/wal
limits_config:
  cardinality_limit: 200000
  enforce_metric_name: false
  max_chunks_per_query: 200000
  max_entries_limit_per_query: 2000000000000
  max_label_value_length: 4096
  max_query_length: 721h
  max_streams_matchers_per_query: 2000000000000
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  retention_period: 744h
  split_queries_by_interval: 24h
memberlist:
  join_members:
    - 'loki-memberlist'
schema_config:
  configs:
    - from: "2020-10-24"
      index:
        period: 24h
        prefix: index_
      object_store: filesystem
      schema: v11
      store: boltdb-shipper
server:
  grpc_listen_port: 9095
  http_listen_port: 3100
  grpc_server_max_recv_msg_size: 10737418240
  grpc_server_max_send_msg_size: 10737418240
  http_server_read_timeout: 600s
  http_server_write_timeout: 600s
  http_server_idle_timeout: 10m
storage_config:
  boltdb_shipper:
    active_index_directory: /data/loki/boltdb-shipper-active
    cache_location: /data/loki/boltdb-shipper-cache
    cache_ttl: 24h
    shared_store: filesystem
  filesystem:
    directory: /data/loki/chunks
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
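For reference, these are the specific values we raised relative to the defaults, with my understanding of what each one controls (please correct me if I have misread the docs):

limits_config:
  # Maximum number of log entries a single query is allowed to return.
  max_entries_limit_per_query: 2000000000000
server:
  # Maximum gRPC message sizes in bytes (10737418240 bytes = 10 GiB),
  # so that large query responses are not rejected between components.
  grpc_server_max_recv_msg_size: 10737418240
  grpc_server_max_send_msg_size: 10737418240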
We have tried increasing the resources for the Grafana pod to 4.5 CPU and 16 GB RAM, but the pod's actual resource consumption stays at around 1% of those limits. We also increased the query timeout on the Grafana data source, but the issue still persists.
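For completeness, this is roughly how the Loki data source is provisioned in Grafana with the increased timeout. The URL and the exact numbers below are placeholders for our real setup; my understanding is that jsonData.timeout overrides the data proxy timeout (in seconds) and jsonData.maxLines caps the number of log lines Grafana requests per query:

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100   # placeholder for the in-cluster Loki service URL
    jsonData:
      maxLines: 5000        # upper bound on log lines Grafana requests per query
      timeout: 600          # HTTP request timeout in seconds (raised from the default)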