How do I persist the API Key when deploying Grafana on Kubernetes

I have deployed Grafana on Kubernetes to create dashboards for a Kafka cluster. Since the pod is ephemeral, if the team that manages our cluster redeploys the apps on it, I will lose the API key I generated. What would be the best way to persist the key when the pod is redeployed?

Does anyone know where the data for the API key is stored if Grafana is not mounted to a PVC? I looked in the sqlite3 db and there were no tables, so I assume it can't be there. Any suggestions would be helpful.

I’m kind of confused that you’re saying the sqlite db has no tables in it - on a MySQL-backed instance there’s an api_key table that fits the bill for storage of that item.

I can’t really help a fellow Kafka dashboarder with the second bit (we use a MySQL cluster outside of K8s as the backing store for the Grafana pods) - but couldn’t you mount the file sqlite uses for storage as a ConfigMap with r/w permissions? If my reading is right, the Grafana pod should be able to use that “virtual” file to persist its data between pod restart cycles. The downside is that you can’t scale that to more than one pod.

This might help: https://serverfault.com/questions/909052/kubernetes-configmap-only-writable-by-root
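If it helps, here’s a rough sketch of what that mount could look like in the Deployment spec - all names are placeholders, and two caveats apply: ConfigMaps are capped at roughly 1 MiB, and on recent Kubernetes versions ConfigMap volumes are mounted read-only (which is what that linked thread runs into):

```yaml
# Hypothetical names throughout; assumes the ConfigMap was created with:
#   kubectl create configmap grafana-db --from-file=grafana.db
volumes:
  - name: grafana-db
    configMap:
      name: grafana-db
containers:
  - name: grafana
    image: grafana/grafana
    volumeMounts:
      # Mount just the db file via subPath so the rest of
      # /var/lib/grafana stays writable.
      - name: grafana-db
        mountPath: /var/lib/grafana/grafana.db
        subPath: grafana.db
```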

I’m a big fan of the Kubernetes + real-DB setup - Grafana’s requirements are low enough that even a pretty small PostgreSQL or MySQL instance can support it happily, and then you can do multi-pod deployments, blue/green, etc.
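As a sketch, pointing Grafana at an external DB is just a handful of env vars on the Deployment (GF_DATABASE_* overrides the [database] section of grafana.ini; the host, names, and Secret here are placeholders):

```yaml
containers:
  - name: grafana
    image: grafana/grafana
    env:
      - name: GF_DATABASE_TYPE
        value: mysql                    # or postgres
      - name: GF_DATABASE_HOST
        value: mysql.example.svc:3306   # placeholder host:port
      - name: GF_DATABASE_NAME
        value: grafana
      - name: GF_DATABASE_USER
        value: grafana
      - name: GF_DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: grafana-db-credentials   # hypothetical Secret
            key: password
```

With that in place the API keys (and dashboards, users, etc.) live in the external DB, so pod restarts don’t touch them.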

Sorry, I realise that’s probably not that helpful

@skybob Unfortunately, using a containerized DB is not an option and since we are pretty limited on resources I’m trying to stay away from persistent volume claims.

When I queried the sqlite db, there were no tables in it. What I was trying to figure out was where the API key is stored, so that I could configure Grafana deployments on K8s to end up with an API key once a deployment completes - without manual intervention, and without something to persist the key when the pod is brought down and spun back up (normal behavior for an ephemeral environment). I’m starting to think I might have asked the wrong question, or be trying to solve the wrong problem in the first place.
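For the no-manual-intervention part, one sketch would be a one-shot Job that calls Grafana’s POST /api/auth/keys endpoint after a deploy - the service name, Secret, and key name below are all placeholders, and note the key is only returned once in the response, so the Job would still need to stash it somewhere (e.g. in a Secret):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-provision-key   # hypothetical
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-key
          image: curlimages/curl
          command:
            - sh
            - -c
            - >
              curl -s -X POST
              -H "Content-Type: application/json"
              -d '{"name":"provisioned-key","role":"Viewer"}'
              "http://admin:${GRAFANA_ADMIN_PASSWORD}@grafana:3000/api/auth/keys"
          env:
            - name: GRAFANA_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-admin   # hypothetical Secret
                  key: password
```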

Thank you for the suggestions though.

Hey team!

Any update on this? I’m facing the same issue. The keys get deleted as soon as I redeploy the pods…

Thanks,
Luís