I’m experimenting with a new Grafana setup after retiring my old Telegraf-based one. I’m doing OK but can’t figure out a couple of configuration tweaks.
I’ve set up the Grafana server with Prometheus. I installed node-exporter on the server along with the Node Exporter Full dashboard, and that’s looking great — the server is reporting on itself in great detail.
I have some other servers I want to add to this setup with the same level of data. However, these servers are behind firewalls, so instead of the Grafana server ‘pulling’ data, they must push their data to the collecting server. I’ve read that the way to do this is with Grafana Alloy in remote write mode. I have it partially working after a few hiccups with firewalls (as always), but there are a few niggles.
The first is a small one. The remote server adds itself to the dashboard with the job name “integrations/unix”. I’d like to set this job name to the server name, or some other identifier. Where do I do this? I figure if I add more servers this way, they’ll all end up with the job name integrations/unix, and I want to be able to differentiate them.
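I’m guessing something like a prometheus.relabel stage sitting between the scrape and the remote write might do it — here’s an untested sketch, where "myserver01" is a placeholder for whatever identifier I’d use, and the scrape’s forward_to would point at this component’s receiver instead of the remote write directly:

```
prometheus.relabel "set_job" {
  forward_to = [prometheus.remote_write.metrics_service.receiver]

  // Overwrite the job label on every sample passing through.
  rule {
    action       = "replace"
    target_label = "job"
    replacement  = "myserver01" // placeholder for the real server name
  }
}
```

But I don’t know if that’s the intended way, or whether there’s a simpler argument somewhere that sets the job label directly.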
The second problem is more concerning. A lot of the data that the locally running node-exporter service supplied is not supplied by the Grafana Alloy agent in remote write mode, e.g.:
CPU Basic
Network Traffic Basic
CPU
Network Traffic
Disk IOPS
Memory VMstat
Some of the Process data
Systemd data
7 of the 8 panels under Storage Disk (although Storage Filesystem is reported fine)
… etc
This seems like quite a lot of data that isn’t getting reported. Some of it I can do without, but Disk I/O in particular is important to me.
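On the off chance it’s relevant: I’ve read that some node_exporter collectors (systemd and processes among them) are disabled by default, so I was wondering whether I need to enable them explicitly — something like this untested guess:

```
prometheus.exporter.unix "default" {
  include_exporter_metrics = true
  disable_collectors       = ["mdadm", "xfs", "zfs"]

  // systemd and processes aren't in the default collector set,
  // so (I assume) they have to be switched on explicitly.
  enable_collectors = ["systemd", "processes"]
}
```

That might explain the missing systemd and process panels, but it wouldn’t explain CPU, network, or disk IOPS, which I understand are collected by default.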
Here is my Alloy config:
logging {
  level = "warn"
}

prometheus.exporter.unix "default" {
  include_exporter_metrics = true
  disable_collectors       = ["mdadm", "xfs", "zfs"]

  filesystem {
    fs_types_exclude     = "^(devtmpfs|tmpfs|overlay|proc|squashfs|pstore|securityfs|sysfs|tracefs|udev)$"
    mount_points_exclude = "^/(proc|run|sys|dev/pts|dev/hugepages)($|/)"
  }
}

prometheus.scrape "default" {
  targets = concat(
    prometheus.exporter.unix.default.targets,
    [{
      // Self-collect metrics
      job         = "alloy",
      __address__ = "127.0.0.1:12345",
    }],
  )
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

prometheus.remote_write "metrics_service" {
  endpoint {
    url  = "http://myhost.com:9990/api/v1/write"
    name = "remotehost.com"

    basic_auth {
      username = "xxxxxx"
      password = "yyyyyyyyyy"
    }
  }
}
I was experimenting with the filesystem exclusions, but the config returns the same (incomplete) data even with that stanza removed entirely.