Grafana Cloud Node Exporter metrics gone after helm upgrade (kubernetes / k3s)

Hi there, I am kind of new to this so please be patient :slight_smile:

I have a k3s cluster on my Raspberry Pi and recently upgraded the Grafana monitoring stack via Helm:

helm upgrade grafana-k8s-monitoring grafana/k8s-monitoring -n "monitoring"

Interestingly, the node_uname_info metric, which the Grafana dashboard variables rely on, no longer seems to be sent to Grafana Cloud:
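For context, the variables I mean are populated from node_uname_info with a query roughly like this (the exact variable query in the dashboard may differ):

label_values(node_uname_info, nodename)

So when the metric is missing, the variable dropdowns stay empty.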

When grepping the metrics locally on my Raspberry Pi, the metric still exists, even though labels like "instance" are not there (I guess that is a Kubernetes config setting).

curl http://localhost:9100/metrics | grep uname
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  133k    0  133k    0     0   814k      0 --:--:-- --:--:-- --:--:--  814k
node_scrape_collector_duration_seconds{collector="uname"} 6.174e-05
node_scrape_collector_success{collector="uname"} 1
# HELP node_uname_info Labeled system information as provided by the uname system call.
# TYPE node_uname_info gauge
node_uname_info{domainname="(none)",machine="aarch64",nodename="raspberrypi",release="5.10.103-v8+",sysname="Linux",version="#1529 SMP PREEMPT Tue Mar 8 12:26:46 GMT 2022"} 1

At first I thought the connection to Grafana Cloud was misconfigured after the upgrade, but the node_exporter_build_info metric still shows up:
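To compare, simple count queries in Explore against the Grafana Cloud Prometheus data source show which of the two series arrive, for example:

count(node_exporter_build_info)
count(node_uname_info)

In my case the first returns a value while the second comes back empty.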

Am I missing something essential here?

Thanks for any help!

Update: In the meantime I deleted all services, configmaps, statefulsets, daemonsets and deployments, removed unnecessary Helm repos, and started again the official way:

Some of the metrics now do seem to be imported (see the node fleet overview dashboard):

However, there is still nothing for node_uname_info…

Hello! Thanks for your post.

I am one of the authors of the k8s-monitoring Helm chart.

The chart has a default allow list of metrics that are kept and sent for storage. By default, it only keeps a subset of the Node Exporter metrics (k8s-monitoring-helm/charts/k8s-monitoring/default_allow_lists/node_exporter.yaml at main · grafana/k8s-monitoring-helm · GitHub). However, there is an easy switch to include a larger set of metrics that works with the Node Exporter integration and dashboards. Here is the larger set: k8s-monitoring-helm/charts/k8s-monitoring/default_allow_lists/node_exporter_integration.yaml at main · grafana/k8s-monitoring-helm · GitHub. Note that node_uname_info is included in that list.

To enable that set, add this to your values file:

metrics:
  node-exporter:
    metricsTuning:
      useIntegrationAllowList: true

That should have the Grafana Agent send the larger set of metrics, and node_uname_info should then show up inside Grafana!
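If you would rather keep the smaller default allow list and only add this one metric, the same metricsTuning block also accepts an explicit include list (assuming your chart version exposes includeMetrics; recent versions do):

metrics:
  node-exporter:
    metricsTuning:
      includeMetrics:
        - node_uname_info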


Thanks so much!
This solved the issue:

For anyone else stumbling across this: to apply the updated values.yaml to the existing release I used:

helm upgrade --reuse-values -f git/grafana/helm/values.yaml grafana-k8s-monitoring grafana/k8s-monitoring --version 0.10.2 --namespace monitoring

This lets you keep the chart version pinned. The available versions can be found via helm search repo grafana/k8s-monitoring
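To double-check which values actually ended up on the release after the upgrade, helm get values prints the merged user-supplied values:

helm get values grafana-k8s-monitoring -n monitoring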
