Hello Loki Team,
I am deploying Loki in monolithic mode and sending logs to it with Fluent Bit, and the memory consumption is catching my attention. Just by deploying Loki, memory usage gradually increases until it stabilizes at around 1.5 GB. Is this level of consumption normal?
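For context, the Fluent Bit side is essentially a plain Loki output along these lines (the match pattern, host, and labels below are simplified placeholders, not my exact values):

    pipeline:
      outputs:
        - name: loki                  # Fluent Bit's built-in Loki output plugin
          match: 'kube.*'
          host: loki.loki.svc.cluster.local
          port: 3100
          labels: namespace=$kubernetes['namespace_name']
          line_format: json           # log lines are shipped as JSON, hence the | json stage in my queries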
Furthermore, when I run a query like {namespace="kong"} | json | line_format "{{.message}}", memory usage can spike to 4 GB or more. I understand this is not an efficient query, but I'm wondering whether a different deployment mode would reduce this consumption.
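I assume a narrower variant along these lines, with a line filter before the json stage so fewer lines have to be parsed (the filter string is only an example), would behave better, but my main question is about the baseline memory usage and the deployment mode:

    {namespace="kong"} |= "error" | json | line_format "{{.message}}"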
Below is my values.yaml:
loki:
  auth_enabled: false
  commonConfig:
    replication_factor: 1
  storage:
    type: 'filesystem'
  limits_config:
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    max_cache_freshness_per_query: 10m
    split_queries_by_interval: 24h
    max_query_parallelism: 100
    retention_period: 20m
  query_scheduler:
    max_outstanding_requests_per_tenant: 100
  frontend:
    max_outstanding_per_tenant: 2048
  compactor:
    retention_enabled: true
    retention_delete_delay: 10m

singleBinary:
  replicas: 1
  resources:
    limits:
      memory: 4Gi
    requests:
      memory: 100Mi
  persistence:
    size: 10Gi
    storageClass: gp2

monitoring:
  # Self monitoring determines whether Loki should scrape its own logs.
  # This feature currently relies on the Grafana Agent Operator being installed,
  # which is installed by default using the grafana-agent-operator sub-chart.
  # It will create custom resources for GrafanaAgent, LogsInstance, and PodLogs to configure
  # scrape configs to scrape its own logs with the labels expected by the included dashboards.
  selfMonitoring:
    enabled: false
    grafanaAgent:
      installOperator: false
  # The Loki canary pushes logs to and queries from this loki installation to test
  # that it's working correctly
  lokiCanary:
    enabled: false

gateway:
  enabled: false

# -- Section for configuring optional Helm test
test:
  enabled: false