I’ve deployed two servers that I’d like to cluster using Alloy’s clustering functionality for HA. My plan is to have about 20 SNMP devices polled by this cluster and their data sent to Grafana Cloud, so I can finally decommission Zabbix. I’ve gotten to the point where, if I stop Alloy on one server, metrics do keep flowing in via the other node; however, the instance label gets set to localhost for some reason. Both nodes use the exact same config.alloy. I’d actually like to override instance entirely, probably with something like “alloycluster01”, since the dashboard makes you filter by the instance that collected the metrics, and which node did the collecting doesn’t really matter when it’s the end device’s metrics I’m after here. Sorry, I’m still trying to wrap my head around how Alloy passes around and transforms metrics and labels, so I’m sure this is a very easy fix, but I’d appreciate any help I can get!
prometheus.exporter.snmp "integrations_snmp" {
  target "tmpeswitch01" {
    address = "<ip>"
    module  = "if_mib"
    auth    = "public_v2"
  }

  target "tmpeswitch02" {
    address = "<ip>"
    module  = "if_mib"
    auth    = "public_v2"
  }
}

discovery.relabel "integrations_snmp" {
  targets = prometheus.exporter.snmp.integrations_snmp.targets

  rule {
    source_labels = ["job"]
    regex         = "(^.*snmp)\\/(.*)"
    target_label  = "job_snmp"
  }

  rule {
    source_labels = ["job"]
    regex         = "(^.*snmp)\\/(.*)"
    target_label  = "snmp_target"
    replacement   = "$2"
  }
}

prometheus.scrape "integrations_snmp" {
  clustering {
    enabled = true
  }

  targets    = discovery.relabel.integrations_snmp.output
  forward_to = [prometheus.remote_write.metrics_service.receiver]
  job_name   = "integrations/snmp"
}