I have two servers: server A for my application and server B for monitoring.
Server A includes: my application, and node_exporter (running in Docker, started roughly as shown below) to expose system metrics
Server B includes:
- grafana: visualizes the metrics
- prometheus: stores the metrics data
- grafana alloy: scrapes data from node_exporter on server A
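
For context, node_exporter on server A is started roughly like this (a simplified sketch based on the standard Docker invocation; my actual image tag and flags may differ):

docker run -d \
  --name node_exporter \
  --net host \
  --pid host \
  -v "/:/host:ro,rslave" \
  quay.io/prometheus/node-exporter:latest \
  --path.rootfs=/host
# It listens on port 9100 and serves metrics at /metrics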
This is my Alloy config file (running on server B):
discovery.http "discovery" {
  url = sys.env("REMOTE_NODE_EXPORTER") + "/metrics"
}

prometheus.scrape "scraper" {
  targets    = discovery.http.discovery.targets
  forward_to = [prometheus.relabel.relabel.receiver]
  job_name   = "scraper.node_exporter"
}

prometheus.relabel "relabel" {
  forward_to = [prometheus.remote_write.remote_write.receiver]

  # Add some labels...
}

prometheus.remote_write "remote_write" {
  endpoint {
    url = sys.env("PROMETHEUS_REMOTE_WRITE") + "/api/v1/write"
  }
}
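
The two environment variables above are set on server B; example values only (the server A address is a placeholder, not my real config):

# Base URLs only; the config itself appends /metrics and /api/v1/write
export REMOTE_NODE_EXPORTER="http://<server-A-address>:9100"
export PROMETHEUS_REMOTE_WRITE="http://localhost:9090"   # Prometheus on server B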
There is a problem with Grafana Alloy's discovery.http: it expects the configured URL to return a target list in JSON (Content-Type: application/json, per the Prometheus HTTP service discovery format), while node_exporter's /metrics endpoint returns plain text (text/plain).
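
To make the mismatch concrete, here is roughly what the two sides look like (the server address and label values below are placeholders):

# What discovery.http expects the URL to return: Prometheus HTTP SD output,
# served with Content-Type: application/json, e.g.
#   [
#     { "targets": ["<server-A-address>:9100"], "labels": { "env": "prod" } }
#   ]

# What node_exporter's /metrics actually returns: the Prometheus text exposition format
curl -s http://<server-A-address>:9100/metrics | head -n 3
#   # HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
#   # TYPE node_cpu_seconds_total counter
#   node_cpu_seconds_total{cpu="0",mode="idle"} 123456.78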
While searching for a solution, I found a suggestion to use prom2json to convert the metrics data to JSON. However, I am unsure how to configure Alloy (or node_exporter) to integrate with prom2json.
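
For reference, running prom2json by hand against node_exporter is simple (the address is a placeholder); what I can't figure out is how to put it between node_exporter and Alloy:

# Fetches the /metrics endpoint and prints the metric families as JSON on stdout
prom2json http://<server-A-address>:9100/metrics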
Can someone help?
NOTE:
- I’ve tried deploying an Alloy agent directly on server A, but it consumes too much memory on that server (Alloy uses about 100 MB of memory while node_exporter uses < 10 MB)