Don't collect data on disk space usage for remote (NFS) filesystems

I’ve just set up an account on Grafana Cloud and followed the setup guide to get some metrics for a host. That works nicely, but I have one major issue: the agent also collects (and submits) data for the filesystems I have mounted over NFS. Those should be monitored on the NFS server, not on some random client. How do I (quickly) stop the agent from doing that?

Hey there,

You can configure the agent not to send NFS metrics to Grafana Cloud.

You need to use write_relabel_configs in the remote_write section of the agent config. For example:

      writeRelabelConfigs:
      - sourceLabels:
        - "__name__"
        regex: "node_nfs_.*"
        action: "drop"

The above config stops the agent from sending any metric whose name starts with the node_nfs_ prefix. Adjust the metric name pattern to suit your requirements.
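If the goal is specifically the disk-space series for NFS mounts, note that those are exposed as node_filesystem_* with an fstype label, while node_nfs_* holds NFS client operation counters. A sketch in the same style, assuming node_exporter’s usual fstype values of nfs or nfs4 and the default ‘;’ separator Prometheus uses to join multiple source labels:

      writeRelabelConfigs:
      - sourceLabels:
        - "__name__"
        - "fstype"
        # Joined value looks like "node_filesystem_avail_bytes;nfs4"
        regex: "node_filesystem_.*;nfs4?"
        action: "drop"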

But where is the documentation describing that?

Here you go.

Why is that hiding on a page about Kubernetes? I set this up on a simple Linux host, no Kubernetes involved. It’s also not clear where to put that snippet you suggest in /etc/grafana-agent.yaml (I’m guessing that’s the place; it’s more-or-less the only configuration file for grafana-agent in that setup). And that so-called documentation doesn’t really help: it shows “writeRelabelConfigs” (stupid capitalisation, BTW) as something that goes under “prometheus”/“prometheusSpec”, but there is no “prometheus” or “prometheusSpec” section in /etc/grafana-agent.yaml.

Why is that hiding on a page about Kubernetes? I set this up on a simple Linux host, no Kubernetes involved.

Metric relabeling guides aren’t hiding in just that one document about Kubernetes Helm deployments; that’s simply one example of a document that mentions the technique for that use case. When, how, and why to use write_relabel_configs and other metric relabeling and reduction methods are described in several Grafana Cloud docs on reducing metrics usage:

Guides for metrics reduction are also linked from the Cardinality Management dashboards included in Grafana Cloud Pro, which help you analyze your highest-cardinality metrics and labels.

It’s also not clear where to put that snippet you suggest in /etc/grafana-agent.yaml (I’m guessing that’s the place; it’s more-or-less the only configuration file for grafana-agent in that setup)

The agent has only one configuration file, so there’s nowhere to add it except the agent’s YAML. The logic around relabel configs still applies: it goes in the remote_write section, as the article points out.
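A minimal sketch of where that lands in a static-mode /etc/grafana-agent.yaml, assuming a recent agent where the top-level metrics section is named metrics (older releases called it prometheus), and with placeholder endpoint and credentials:

      metrics:
        global:
          remote_write:
            - url: https://prometheus-us-central1.grafana.net/api/prom/push  # placeholder; use your stack's push endpoint
              basic_auth:
                username: <instance id>
                password: <api key>
              # Drop NFS client metrics before they leave the host.
              write_relabel_configs:
                - source_labels: ["__name__"]
                  regex: "node_nfs_.*"
                  action: "drop"

If your generated config instead lists the Cloud endpoint under integrations > prometheus_remote_write, the same write_relabel_configs list attaches to that entry; it’s a standard Prometheus remote_write field either way.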

It shows “writeRelabelConfigs” (stupid capitalisation, BTW) as something that goes under “prometheus”/“prometheusSpec”, but there is no “prometheus” or “prometheusSpec” section in /etc/grafana-agent.yaml

That’s because that article assumes a specific Kubernetes Helm deployment, which includes the prometheusSpec section, and it isn’t relevant if you didn’t deploy that way. Check out the other guides linked above, and hopefully that’ll be less confusing.
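For contrast, a sketch of where the camelCase form lives when you do deploy via that Helm chart (assuming kube-prometheus-stack; the field names come from the Prometheus Operator’s remote-write spec, which is why they’re camelCase):

      prometheus:
        prometheusSpec:
          remoteWrite:
            - url: https://prometheus-us-central1.grafana.net/api/prom/push  # placeholder endpoint
              writeRelabelConfigs:
                - sourceLabels: ["__name__"]
                  regex: "node_nfs_.*"
                  action: "drop"

In the agent’s flat YAML there is no prometheusSpec wrapper, which is why the snake_case write_relabel_configs under remote_write (shown earlier) is the form that applies to your setup.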