Provision dashboard JSON files to Grafana from an AWS S3 bucket

  • What Grafana version and what operating system are you using?
    Grafana 11.3.0, from the kube-prometheus-stack Helm chart, installed on an EKS cluster running Kubernetes 1.28

  • What are you trying to achieve?
    I want to provision dashboards that are stored in an AWS S3 bucket.

  • How are you trying to achieve it?
    I'm trying to mount an S3-backed volume into the Grafana pod.

Adding some values to review:

grafana:
  persistence:
    enabled: true

  extraVolumeMounts:
    - name: s3-dashboards
      mountPath: /var/lib/grafana/dashboards/s3-pvc  # Path where Grafana expects the dashboards
      readOnly: true

  extraVolumes:
    - name: s3-dashboards
      persistentVolumeClaim:
        claimName: ${dashboards_pvc_claim_name}

  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          type: file
          disableDeletion: false
          options:
            path: /var/lib/grafana/dashboards 
            foldersFromFilesStructure: true
        - name: 's3'
          orgId: 2
          type: file
          disableDeletion: false
          options:
            path: /var/lib/grafana/dashboards/s3-pvc
            foldersFromFilesStructure: true

  • What happened?
    getting an error when the pod starts:

```
Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/1b95c703-b80b-4936-8d9a-205c39b34ab5/volumes/kubernetes.io~empty-dir/s3-dashboards" to rootfs at "/var/lib/grafana/dashboards/s3-pvc": mkdir /run/containerd/io.containerd.runtime.v2.task/k8s.io/grafana/rootfs/var/lib/grafana/dashboards/s3-pvc: read-only file system: unknown
```

I don't want Grafana to be able to write to or change the provisioned dashboards inside the S3 bucket.

Maybe my logic for this operation isn't right.
Please let me know if you know a better way to achieve my goals :slight_smile:

That’s an opinionated question.

My “better” way: use Terraform to read these objects from S3 and then provision a dashboard for each object.

You mean, create a ConfigMap for each dashboard?

No. Terraform, which manages Grafana resources:

https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/dashboard

Using Terraform is indeed a good option, but what I did to get this working was to install the AWS S3 CSI driver and use it to create an extra volume that is mounted into the Grafana pod. Besides this, I have a cronjob running that looks into a GitHub repository where the dashboards are stored as JSON files and pushes them into an S3 bucket.
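For anyone trying this, a static PersistentVolume for such a mount could look roughly like the sketch below. This assumes the Mountpoint for Amazon S3 CSI driver (`s3.csi.aws.com`) is installed on the cluster; the bucket and resource names are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-dashboards-pv
spec:
  capacity:
    storage: 1Gi            # ignored by the driver, but required by the API
  accessModes:
    - ReadOnlyMany
  mountOptions:
    - allow-other            # let the grafana user read the mount
    - read-only              # Grafana cannot modify the bucket contents
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-dashboards-volume   # any unique string
    volumeAttributes:
      bucketName: my-dashboards-bucket   # placeholder bucket name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-dashboards-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""       # static provisioning: bind to the PV above
  volumeName: s3-dashboards-pv
  resources:
    requests:
      storage: 1Gi
```

The claim name would then be referenced from the chart's `extraVolumes` entry, as in the values shown earlier in the thread.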

In this way, I have a setup where the dashboards are always in code, and the periodic file scan done by Grafana makes sure the most recent changes always end up in Grafana.
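The GitHub-to-S3 sync job mentioned above could be sketched as a Kubernetes CronJob along these lines. The image, schedule, repository, and bucket are placeholders, and it assumes the pod has IAM permissions (e.g. via IRSA) to write to the bucket:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: dashboards-s3-sync
spec:
  schedule: "*/15 * * * *"          # sync every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: sync
              image: amazon/aws-cli:2.15.0   # placeholder image providing the AWS CLI
              command:
                - /bin/sh
                - -c
                - |
                  # clone the dashboard repo and mirror its JSON files into S3
                  yum install -y git
                  git clone --depth 1 https://github.com/example-org/grafana-dashboards.git /tmp/dashboards
                  aws s3 sync /tmp/dashboards/ s3://my-dashboards-bucket/ \
                    --delete --exclude "*" --include "*.json"
```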

Creating a ConfigMap for each dashboard is, in my opinion, a no-go. I did this before, but if you have more than 1,000 dashboards, then every time a dashboard changes, Grafana is going to restart, so it's temporarily unavailable.

Thank you for your time responding to me. In the end I used the Terraform Grafana provider, and it worked wonderfully!
For example:

resource "grafana_dashboard" "k8s" {
  provider = grafana.cloud

  for_each    = fileset("${path.module}/grafana-dashboards/k8s", "*.json")
  config_json = file("${path.module}/grafana-dashboards/k8s/${each.key}")
  folder      = grafana_folder.k8s.id
}

In this way Terraform also detects changes made to the dashboard JSON files and updates them accordingly,
so there is no need for an extra job or GitHub Actions.
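For anyone adapting this, the folder and provider referenced in the snippet above could be defined roughly like this (the URL and token are placeholders; `grafana_folder` and the provider's `url`/`auth` arguments are from the Grafana Terraform provider):

```hcl
provider "grafana" {
  alias = "cloud"
  url   = "https://example.grafana.net/"        # placeholder Grafana instance URL
  auth  = var.grafana_service_account_token     # service account token, kept in a variable
}

resource "grafana_folder" "k8s" {
  provider = grafana.cloud
  title    = "Kubernetes"   # folder that the dashboards above are placed in
}
```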