Update Grafana via helm without losing all dashboards and data?


I’m aware this isn’t directly related to this project and is more about helm chart configuration, but this seems like a proper place to ask.

Every time I make any modification to the chart’s values.yaml (for example changing grafana.ini, updating ingress rules, etc.), the pod gets redeployed and all data (users/dashboards/datasources/etc.) is deleted and lost, even if the app version doesn’t change.

I’m using kube-prometheus-stack to install prometheus/grafana/alertmanager, but I assume it’s the same with other charts.

I see this ticket from 2018, but the same problem still exists in 2023. They changed the type to a StatefulSet, but that didn’t resolve the issue.

Looking at my cluster PVC, I see one for prometheus and one for alertmanager, and they remain. But there’s none for Grafana.
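To reproduce that observation on your own cluster, something like this works (the `monitoring` namespace and the standard grafana chart labels are assumptions; adjust to your release):

```shell
# List persistent volume claims in the stack's namespace
# ("monitoring" is an assumption -- use the namespace of your release).
kubectl get pvc -n monitoring

# Inspect which volumes the Grafana pod actually mounts:
kubectl get pod -n monitoring -l app.kubernetes.io/name=grafana \
  -o jsonpath='{.items[0].spec.volumes}'
```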

Looking at its configuration, it seems to mount volumes called “sc-dashboard-volume” and “sc-datasources-volume”, but they are mapped to “emptyDir”, so I guess it’s normal that they get deleted when the pod gets redeployed.

What’s the proper way to make them permanent?

These volumes aren’t it: “sc” stands for “sidecar” and is used by a mechanism that allows dynamic importing of dashboards via labeled ConfigMaps. That’s great, but not what I’m looking for.
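For completeness, the sidecar watches for ConfigMaps carrying a specific label (`grafana_dashboard` is the chart’s default label key; the ConfigMap name and dashboard JSON below are made up for illustration):

```yaml
# Hypothetical example of a ConfigMap the dashboard sidecar would pick up.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard          # made-up name
  labels:
    grafana_dashboard: "1"    # default value of sidecar.dashboards.label
data:
  my-dashboard.json: |
    { "title": "My dashboard", "panels": [] }
```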

I just want users to be able to manually modify things via the web interface, and that they remain if the pod is restarted. Kinda weird it’s not the default behavior.

Ok, I think I got it.

I had to download the chart locally and browse the grafana subchart templates, down to the _pod.tpl file, to find this section:

    {{- if and .Values.persistence.enabled (eq .Values.persistence.type "pvc") }}
    - name: storage
      persistentVolumeClaim:
        claimName: {{ tpl (.Values.persistence.existingClaim | default (include "grafana.fullname" .)) . }}
    {{- end }}

That `persistence` block is defined in the grafana subchart’s own values.yaml (it’s actually the standard helm structure for persistence).
And it can be set from the kube-prometheus-stack chart by passing this snippet in its values.yaml:

    grafana:
      persistence:
        enabled: true
        size: 500Mi
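For reference, a sketch of applying the change (the release name `kps`, the `monitoring` namespace, and the prometheus-community repo alias are all assumptions; substitute your own):

```shell
# Re-render and apply the chart with the updated values.
helm upgrade kps prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  -f values.yaml

# A PVC for Grafana should now exist alongside the prometheus/alertmanager ones.
kubectl get pvc -n monitoring
```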

The part that confused me is that, browsing the kube-prometheus-stack default values on the Artifact Hub website, this section does not appear in the list.

Unlike Prometheus and Alertmanager, which do appear, but under a differently named key: prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec

I guess I still have some learning to do to understand where to find the full list of values that can be set on a chart.
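One way to dig these out, assuming the prometheus-community and grafana repo aliases are already added with `helm repo add` (names here are the usual conventions, not guaranteed to match your setup):

```shell
# Dump the parent chart's default values. This is roughly what Artifact Hub
# shows -- it does NOT expand the defaults of bundled subcharts.
helm show values prometheus-community/kube-prometheus-stack > kps-values.yaml

# Subchart defaults, like grafana's persistence block, live in the subchart:
helm show values grafana/grafana | grep -A 5 '^persistence:'
```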