I currently have the problem that labels for e.g. per-pod CPU usage metrics in Prometheus seem to get lost. The metric itself is there, but additional labels seem to be missing.
My River config for the components looks as follows. I am only allowing one metric to pass through at the moment, so that I can gradually add additional metrics.
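For reference, a minimal pipeline along these lines might look like the sketch below. The component labels and the remote-write URL are placeholders, not the poster's actual values; `store_container_labels` and `docker_only` are documented arguments of `prometheus.exporter.cadvisor`:

```river
// Embedded cAdvisor exporter; store_container_labels should
// attach container labels to the exported series.
prometheus.exporter.cadvisor "docker" {
  docker_only            = true
  store_container_labels = true
}

// Scrape the embedded exporter and forward to remote_write.
prometheus.scrape "cadvisor" {
  targets    = prometheus.exporter.cadvisor.docker.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    // Placeholder endpoint; replace with your Prometheus/Mimir URL.
    url = "http://prometheus:9090/api/v1/write"
  }
}
```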
From other resources I saw that the cAdvisor metric container_cpu_usage_seconds_total can carry additional labels for containers or pods. But those seem to be missing, and I am only getting information per worker node of the cluster. For Grafana Agent I thought that the config option store_container_labels would add those labels.
Am I missing something, or is my interpretation of the metric wrong? Is there an alternative metric I could use?
I have exactly the same behavior and I was wondering if you found any solution. I find it quite weird that my only series are from the worker nodes, and nothing like what I usually get with the kube-prom-stack, where pod series carry several labels (image, namespace, etc.).
Hello,
I also spent quite some time on this issue. In my case, I had forgotten to add some folder mounts to my Alloy Docker container that cAdvisor needs in order to get information about container metadata and labels.
These mounts are documented in the cAdvisor documentation.
In the end, the following compose configuration for my Alloy container worked and labels were shown:
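Along those lines, here is a sketch of what such a compose service can look like, using the host mounts listed in the cAdvisor documentation. The image tag, config path, and service name are illustrative, not the poster's exact file:

```yaml
services:
  alloy:
    image: grafana/alloy:latest
    volumes:
      - ./config.alloy:/etc/alloy/config.alloy
      # Host mounts cAdvisor needs to read container metadata and labels
      # (see the cAdvisor "running in Docker" documentation):
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    privileged: true
```

Without the `/var/lib/docker` and `/rootfs` mounts in particular, the embedded cAdvisor cannot resolve container filesystem layers, which matches the "no such file or directory" errors mentioned below.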
What I notice quite a lot in the logs of Alloy is this:
ts=2025-10-02T21:01:07.137285312Z level=error msg="Failed to create existing container: /docker/a65bd67e236649d7a3d3e0fba3453184790a00a6b361d1f9f6edb8c1945162e2: failed to identify the read-write layer ID for container \"a65bd67e236649d7a3d3e0fba3453184790a00a6b361d1f9f6edb8c1945162e2\". - open /rootfs/var/lib/docker/image/overlayfs/layerdb/mounts/a65bd67e236649d7a3d3e0fba3453184790a00a6b361d1f9f6edb8c1945162e2/mount-id: no such file or directory" component_path=/ component_id=prometheus.exporter.cadvisor.docker