I have Grafana Alloy running in a Podman container with host mounts passed through. On the host, I have a rather large (16 TB) NFS mount set up through systemd mount/automount units.
I keep receiving these log messages constantly, hundreds and hundreds of lines of them:
Mar 25 11:20:52 boh systemd[1]: var-omv-data.automount: Got automount request for /var/omv/data, triggered by 51426 (alloy)
Mar 25 11:20:52 boh systemd[1]: var-omv-data.automount: Automount point already active?
Mar 25 11:20:52 boh systemd[1]: var-omv-data.automount: Got automount request for /var/omv/data, triggered by 51426 (alloy)
On my other hosts, which also have NFS mounts but hold far less data (~32 GB), I do not receive any log messages like this.
Here is a snippet of my alloy config:
prometheus.exporter.unix "node" {
  rootfs_path    = "/host"
  procfs_path    = "/host_proc"
  sysfs_path     = "/host_sys"
  udev_data_path = "/host/run/udev/data"

  enable_collectors = [
    "systemd",
    "logind",
  ]

  systemd {
    enable_restarts = true
    start_time      = true
  }
}
and my alloy.container quadlet:
[Unit]
Description=Alloy - https://github.com/grafana/alloy
[Container]
Image=docker.io/grafana/alloy:latest
ContainerName=alloy
SecurityLabelDisable=true
Exec=run --disable-reporting --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
Network=host
PublishPort=12345:12345
Volume=/etc/alloy/config.alloy:/etc/alloy/config.alloy:Z,ro
# Loki Journal Volumes
Volume=/var/log/journal:/var/log/journal:ro
Volume=/run/log/journal:/run/log/journal:ro
Volume=/etc/machine-id:/etc/machine-id:ro
# Node Exporter Volumes
Volume=/proc:/host_proc:ro
Volume=/sys:/host_sys:ro
Volume=/:/host:ro
Volume=/run/systemd/private:/run/systemd/private:ro
Volume=/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro
Is there a way to stop these requests, or are they a symptom of some misconfiguration?
I am still dealing with this issue, but I may have mitigated it by disabling the NFS collectors.
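For reference, this is roughly what that change looks like, a minimal sketch assuming `disable_collectors` accepts the node_exporter collector names `nfs` and `nfsd` (untested beyond my own host):

```
prometheus.exporter.unix "node" {
  rootfs_path    = "/host"
  procfs_path    = "/host_proc"
  sysfs_path     = "/host_sys"
  udev_data_path = "/host/run/udev/data"

  enable_collectors = [
    "systemd",
    "logind",
  ]

  // nfs and nfsd are enabled by default in node_exporter,
  // so they have to be switched off explicitly.
  disable_collectors = [
    "nfs",
    "nfsd",
  ]

  systemd {
    enable_restarts = true
    start_time      = true
  }
}
```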
There is a closed node_exporter bug report with the same symptoms; the maintainers say it is a service unit issue rather than a node_exporter one, but there is no resolution explaining what was actually wrong with the automount unit.
(The linked issue was opened at 05:15 PM and closed at 05:17 PM on 05 Nov 2021.)
### Host operating system:
`Linux pool-ba164-nivab 5.10.0-9-cloud-amd64 #1 SMP … Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux`
### node_exporter version:
```
node_exporter, version 1.2.2 (branch: HEAD, revision: 26645363b486e12be40af7ce4fc91e731a33104e)
build user: root@b9cb4aa2eb17
build date: 20210806-13:44:18
go version: go1.16.7
platform: linux/amd64
```
### node_exporter command line flags
See below.
### Are you running node_exporter in Docker?
Yes, I'm deploying using Docker Swarm, and this is my stack file:
```
version: "3.8"
services:
my-node-exporter-service:
image: prom/node-exporter
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--path.rootfs=/host'
- '--collector.filesystem.ignored-mount-points="^(/rootfs|/host|)/(sys|proc|dev|host|etc)($$|/)"'
- '--collector.filesystem.ignored-fs-types="^(sys|proc|auto|cgroup|devpts|ns|au|fuse\.lxc|mqueue)(fs|)$$"'
networks:
- back
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
networks:
back:
```
### What did you do that produced an error?
Simply deploy and leave it running.
### What did you expect to see?
A normal volume of logs.
### What did you see instead?
I have several GB of log files in `/var/log` full of lines like these:
```
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 2885345 (node_exporter)
Oct 29 12:39:39 pool-ba164-nivab systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
```
These are collected in files `daemon.log` and `syslog`.
Here are my mount and automount units as well:
var-omv-data.mount:
[Unit]
Description=mount omv vm nfs data
Requires=network-online.target
After=network-online.target
[Mount]
What=10.10.10.5:/medias
Where=/var/omv/data
Type=nfs4
[Install]
WantedBy=multi-user.target
var-omv-data.automount:
[Unit]
Description=automount omv vm nfs data
[Automount]
Where=/var/omv/data
[Install]
WantedBy=multi-user.target
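If disabling the NFS collectors does not fully stop the spam, my next idea is to exclude the automounted path from the filesystem collector, the same approach as the `--collector.filesystem.ignored-mount-points` flag in the node_exporter stack file quoted above. This is only a sketch of the extra `filesystem` block I would add inside the same `prometheus.exporter.unix "node"` component; the exact regexes are my own guess and untested:

```
prometheus.exporter.unix "node" {
  // rootfs_path, procfs_path, sysfs_path, udev_data_path,
  // enable_collectors and the systemd block stay as in the config above.

  filesystem {
    // Skip the NFS automount point so the collector never stats it
    // through /host, and skip autofs/nfs filesystems entirely.
    mount_points_exclude = "^/var/omv/data($|/)"
    fs_types_exclude     = "^(autofs|nfs4?)$"
    // Cap how long a stat on a hung NFS mount can block a scrape.
    mount_timeout        = "5s"
  }
}
```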