`grafana-agent-integrations-deploy` is taking a lot of memory

We have deployed grafana-agent using the static mode Kubernetes Operator.
In our setup we currently have 4 integrations:

  1. statsd-exporter
  2. postgres-metrics
  3. redis-metrics
  4. a custom one

The total memory used by `grafana-agent-integrations-deploy` is very high across all our clusters, exceeding 5 GiB in some of them.

PromQL query used:

```
sum(container_memory_working_set_bytes{cluster!="", namespace!="", pod=~".+integrations-deploy.+", container!="", image!=""}) by (cluster, namespace, pod, container) / (1024 * 1024 * 1024)
```

I want to find out the reason for the high memory usage (is this the expected behaviour?) and would appreciate any suggestions for reducing it.

Can someone also point out which factors the memory usage of `grafana-agent-integrations-deploy` depends on?


Your query contains the regex matcher `=~`, which is probably a resource hog for the query itself. For testing purposes, use an exact match instead, e.g. `pod="an_actual_pod_name"`.

Also for testing purposes, remove the `/ (1024 * 1024 * 1024)` at the end.
(What are you trying to convert bytes to?)
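If you want to script that spot check, the narrowed query can be built and sent to the Prometheus HTTP API (`/api/v1/query`). A minimal sketch; the helper name and the Prometheus address `localhost:9090` are assumptions, not part of the original setup:

```python
from urllib.parse import urlencode

def pod_memory_query_url(base_url: str, pod: str) -> str:
    """Build an instant-query URL for one pod's working-set memory in bytes."""
    # Exact pod match instead of the =~ regex; division removed for testing,
    # so the result stays in raw bytes.
    promql = (
        'sum(container_memory_working_set_bytes{'
        f'pod="{pod}", container!="", image!=""'
        '}) by (pod)'
    )
    return f"{base_url}/api/v1/query?{urlencode({'query': promql})}"

url = pod_memory_query_url(
    "http://localhost:9090",  # assumed Prometheus address
    "application-grafana-agent-scout-integrations-deploy-7c88848lf78s",
)
print(url)
```

The resulting URL can then be fetched with `curl` or `urllib.request` to get the raw byte value for that single pod.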

I tried passing the exact pod name, and the memory is indeed being used by the integrations-deploy pod only. Since this is happening in almost all of our clusters, I used `pod=~".+integrations-deploy.+"` to cover them all.

PromQL query used:

```
sum(container_memory_working_set_bytes{cluster="cluster", container="grafana-agent", pod="application-grafana-agent-scout-integrations-deploy-7c88848lf78s"}) by (cluster, pod)
```

I also verified with the old query that there were no other pods besides integrations-deploy.
I used `/ (1024 * 1024 * 1024)` to convert bytes to GiB.
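For reference, that conversion is just a division by 1024³; a quick sanity check:

```python
BYTES_PER_GIB = 1024 ** 3  # 1 GiB = 1,073,741,824 bytes

def bytes_to_gib(n_bytes: float) -> float:
    """Convert a raw byte count (e.g. container_memory_working_set_bytes) to GiB."""
    return n_bytes / BYTES_PER_GIB

# A working set of 5368709120 bytes is exactly 5 GiB.
print(bytes_to_gib(5368709120))  # 5.0
```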