Hi everyone, I have Prometheus/Grafana/Loki installed from Helm on my Kubernetes cluster, and I'm noticing that Loki, specifically the ingester pods, is consuming a lot of memory.
Is this expected, or can it be fixed somehow?
The ingester has 3 replicas, and all 3 are consuming a lot of memory, one of them more than the others.
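In case it's useful, this is roughly how I'm checking the usage (assuming metrics-server is installed; the namespace and label selector below are just what my install uses, so they may differ for you):

# Per-pod memory of the ingesters via metrics-server
kubectl top pods -n monitoring -l app.kubernetes.io/component=ingester

# Roughly equivalent PromQL in Grafana (cAdvisor metric)
sum by (pod) (container_memory_working_set_bytes{namespace="monitoring", pod=~"loki-ingester-.*"})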
Here is my values YAML, which is more or less basic:
backend:
  replicas: 0
bloomCompactor:
  replicas: 0
bloomGateway:
  replicas: 0
compactor:
  compaction_interval: 10m
  delete_request_store: s3
  extraVolumeMounts:
    - mountPath: /data/loki-retention
      name: loki-retention
  extraVolumes:
    - name: loki-retention
      persistentVolumeClaim:
        claimName: loki-retention-prod
  replicas: 1
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  retention_enabled: true
  working_directory: /data/loki-retention
deploymentMode: Distributed
distributor:
  maxUnavailable: 2
  replicas: 3
indexGateway:
  maxUnavailable: 1
  replicas: 2
ingester:
  replicas: 3
  resources:
    requests:
      memory: "2Gi"
      cpu: "500m"
    limits:
      memory: "6Gi"
      cpu: "1000m"
loki:
  auth_enabled: false
  ingester:
    chunk_encoding: snappy
  querier:
    max_concurrent: 4
  schemaConfig:
    configs:
      - from: "2024-04-01"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v13
        store: tsdb
  storage_config:
    aws:
      bucketnames: loki-storage
      region: us-east-1
      s3: s3://ACCESS_KEY:SECRET_KEY@S3_BUCKET_NAME
      s3forcepathstyle: true
    boltdb_shipper:
      active_index_directory: /data/loki-retention/index
      cache_location: /data/loki-retention/boltdb-cache
    filesystem:
      directory: /data/loki-retention/chunks
  tracing:
    enabled: true
minio:
  enabled: true
querier:
  maxUnavailable: 2
  replicas: 3
queryFrontend:
  maxUnavailable: 1
  replicas: 2
queryScheduler:
  replicas: 2
read:
  replicas: 0
singleBinary:
  replicas: 0
write:
  replicas: 0
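From what I've read, ingester memory mostly scales with the number of active streams and with the chunks held in memory before they're flushed, so these are the knobs under loki.ingester / loki.limits_config I was thinking of tuning (the values below are just illustrative, not something I've validated):

loki:
  ingester:
    chunk_encoding: snappy
    # Flush streams that have stopped receiving logs sooner (default 30m)
    chunk_idle_period: 15m
    # Force-flush chunks older than this even if the stream is still active (default 2h)
    max_chunk_age: 1h
    # Target compressed chunk size in bytes before a new chunk is cut (default ~1.5 MB)
    chunk_target_size: 1572864
  limits_config:
    # Cap on active streams (label combinations) per tenant (default 5000)
    max_global_streams_per_user: 5000

Does tuning in that direction make sense, or is the memory usage I'm seeing just normal for this setup?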
Thanks!