Alloy constantly triggering automount unit

I have Grafana Alloy running in a Podman container with host mounts passed through. On the host, I have a rather large (16 TB) NFS mount set up through systemd mount/automount units.

My journal is constantly spammed with these messages, hundreds and hundreds of lines:

Mar 25 11:20:52 boh systemd[1]: var-omv-data.automount: Got automount request for /var/omv/data, triggered by 51426 (alloy)
Mar 25 11:20:52 boh systemd[1]: var-omv-data.automount: Automount point already active?
Mar 25 11:20:52 boh systemd[1]: var-omv-data.automount: Got automount request for /var/omv/data, triggered by 51426 (alloy)

On my other hosts, which also have NFS mounts but with far less data (~32 GB), I do not see any log messages like this.

Here is a snippet of my alloy config:

prometheus.exporter.unix "node" {
    rootfs_path = "/host"
    procfs_path = "/host_proc"
    sysfs_path = "/host_sys"
    udev_data_path = "/host/run/udev/data"
    enable_collectors = [
      "systemd",
      "logind",
    ]
    systemd {
      enable_restarts = true
      start_time = true
    }
}
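
One mitigation I'm considering (untested, so treat it as a sketch) is excluding the automount path from the exporter's filesystem collector, so nothing stat()s the autofs trigger point. `mount_points_exclude` is a stock `prometheus.exporter.unix` option; the regex below is my guess at the usual virtual-filesystem excludes with my mount path appended:

```alloy
prometheus.exporter.unix "node" {
    // ... same settings as above ...
    filesystem {
        // Sketch: keep the filesystem collector away from the
        // automounted NFS path (plus the usual virtual-fs excludes).
        mount_points_exclude = "^/(dev|proc|run|sys|var/lib/docker/.+|var/omv/data)($|/)"
    }
}
```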

and my alloy.container quadlet:

[Unit]
Description=Alloy - https://github.com/grafana/alloy

[Container]
Image=docker.io/grafana/alloy:latest
ContainerName=alloy
SecurityLabelDisable=true
Exec=run --disable-reporting --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy
Network=host
PublishPort=12345:12345
Volume=/etc/alloy/config.alloy:/etc/alloy/config.alloy:Z,ro
# Loki Journal Volumes
Volume=/var/log/journal:/var/log/journal:ro
Volume=/run/log/journal:/run/log/journal:ro
Volume=/etc/machine-id:/etc/machine-id:ro
# Node Exporter Volumes
Volume=/proc:/host_proc:ro
Volume=/sys:/host_sys:ro
Volume=/:/host:ro
Volume=/run/systemd/private:/run/systemd/private:ro
Volume=/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:ro

Is there a way to stop these requests, or are they a symptom of some misconfiguration?

I am still dealing with this issue, but I may have mitigated it by disabling the NFS collectors.
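
Concretely, that meant adding the NFS collectors to `disable_collectors` in the exporter block (`nfs` is the client-side collector and `nfsd` the server-side one — both stock node_exporter collector names; whether disabling them is actually what stopped the triggering, I can't say for sure yet):

```alloy
prometheus.exporter.unix "node" {
    // ... rest of the block unchanged ...
    disable_collectors = [
      "nfs",   // client stats from /proc/net/rpc/nfs
      "nfsd",  // server stats from /proc/net/rpc/nfsd
    ]
}
```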

There is a closed node_exporter bug report with the same symptoms, but the maintainers say it's a service unit issue, not a node_exporter one. However, there's no resolution as to what was wrong with the automount unit.

Here are my mount and automount units as well:

var-omv-data.mount:

[Unit]
Description=mount omv vm nfs data
Requires=network-online.target
After=network-online.target

[Mount]
What=10.10.10.5:/medias
Where=/var/omv/data
Type=nfs4

[Install]
WantedBy=multi-user.target

var-omv-data.automount:

[Unit]
Description=automount omv vm nfs data

[Automount]
Where=/var/omv/data

[Install]
WantedBy=multi-user.target
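
One thing I notice is that the automount unit has no `TimeoutIdleSec=`, so after the first access the share stays mounted indefinitely and the automount layer is pure overhead. If on-demand behaviour is actually wanted, a sketch with an idle timeout would look like this (`TimeoutIdleSec=` is a stock systemd.automount option; 600 s is an arbitrary example, and I haven't tested whether this changes the log spam at all):

```ini
[Unit]
Description=automount omv vm nfs data

[Automount]
Where=/var/omv/data
# Unmount after 10 minutes idle; without this the mount never
# returns to the automount (trigger) state once activated.
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target
```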