Does the Grafana static agent support OS metric data collection?

Hello All,

I tried installing the Grafana Agent in static mode on an EC2 instance to collect OS metrics, following the RHEL documentation at the link below:

After the installation, I do not see any OS metrics from the instance.

Most of the metrics I have noticed so far are for “prometheus”. Can someone point me to the documentation that covers the Linux binaries of the latest Grafana Agent for OS monitoring only?

You could try this: Get started with monitoring using an integration | Grafana Cloud documentation. This tutorial shows one way (of many) to monitor your local Linux machine.

Hello @claytoncornell

Thank you for the response. Is it possible to collect OS metrics using only the Grafana Agent, without any integrations like Grafana Cloud, node_exporter, etc.? I ask because I do not see metrics for disk/network/IOPS; there is only a single metric for total CPU and a few metrics for memory.

Has something changed with the Grafana static agent? I do not see the basic OS metrics required for monitoring.

The metrics collected by the Grafana Agent so far seem to be for a container environment.

Hello! The node exporter is built into the Grafana Agent:

There is no need to run a separate node exporter.

A full config example can be found here:
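Roughly, a minimal static-mode config along those lines might look like the sketch below (the remote_write URL, wal_directory, and scrape interval are placeholders to adapt to your setup):

server:
  log_level: info

metrics:
  wal_directory: /var/lib/grafana-agent   # scratch directory for the metrics WAL (placeholder)
  global:
    scrape_interval: 60s
    remote_write:
      # Adjust the URL for your backend: Grafana Cloud/Mimir typically use /api/prom/push,
      # plain Prometheus (with its remote-write receiver enabled) uses /api/v1/write.
      - url: http://<remote_prometheus_server_ip>:9090/api/v1/write
  configs:
    - name: default

integrations:
  # The built-in node_exporter integration replaces a standalone node exporter.
  node_exporter:
    enabled: true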

Hi @paulintodev,

Thank you for the response. I updated the “grafana-agent.yaml” file with that configuration (the full config example) and restarted the grafana-agent service. However, I am still seeing the same metrics (which look like they are meant for a container environment) when I check http://localhost:12345/metrics. Could you please let me know how I can see the metrics collected by the built-in node exporter of the Grafana static agent (e.g. the specific port and detailed steps)?

The metrics from the node_exporter integration can be seen by querying the database which the Agent is configured to “remote_write” to in the “metrics” section of its config.

The “http://localhost:12345/metrics” endpoint is only for the Agent’s own metrics, and not for the ones it is scraping.

I have set the remote_write field in the grafana-agent.yaml file to a remote Prometheus server address (e.g. http://<remote_prometheus_server_ip>:9090/api/prom/push) and enabled node_exporter under integrations as well.

How do I query the remote Prometheus server for the built-in node_exporter metrics collected by the Grafana Agent?
Is there something to query on the Prometheus server, like
curl http://<server_installed_with_Grafana_agent_ip>:9100/metrics? I ask because I do not see anything listening on port 9100 (the default node exporter port) on the grafana-agent server.

At this stage, when I run
curl http://<server_installed_with_Grafana_agent_ip>:12345/metrics from the Prometheus server, it shows the same metrics I see when querying locally on the server where the Grafana Agent is installed.

You can query Prometheus using PromQL and the Prometheus UI:
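For example, assuming the Agent's node_exporter metrics are reaching that Prometheus, typical node_exporter expressions like these should return data in the UI's expression box:

# total CPU time, broken down by mode
node_cpu_seconds_total
# per-interface network receive throughput over the last 5 minutes
rate(node_network_receive_bytes_total[5m])
# free space per filesystem
node_filesystem_avail_bytes

Or from the command line via the Prometheus HTTP API:

curl -G 'http://<remote_prometheus_server_ip>:9090/api/v1/query' --data-urlencode 'query=node_cpu_seconds_total'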

Finally figured out the solution. The Grafana Agent serves its own metrics at “http://<server_ip_with_grafana_agent_installed>:12345/metrics”. Since node_exporter ships as a built-in package with the Grafana Agent to collect OS metrics, we just need to enable the integration in the “grafana-agent.yaml” configuration file:

integrations:
  node_exporter:
    enabled: true
    include_exporter_metrics: true

and then access the node_exporter metrics through the Grafana Agent at:
“http://<server_ip_with_grafana_agent_installed>:12345/integrations/node_exporter/metrics”
so you can see the node_exporter metrics directly on the server where the Grafana Agent is installed.
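A quick way to confirm the integration is exposing data (assuming the Agent's default HTTP port of 12345, as above) is to curl that endpoint and filter for the node_ prefix:

curl -s http://<server_ip_with_grafana_agent_installed>:12345/integrations/node_exporter/metrics | grep '^node_' | head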