Override default configuration using k8s

We’re new to Grafana as a company, and we have Grafana deployed as a container on Kubernetes.
The goal is to enable GitHub authentication. The documentation is reasonably clear on how to do that; however, when I log in to the Grafana pod, I don’t see a custom.ini file, only the default one (grafana.ini).

I’m very new to containers. I was able to deploy Grafana as a pod in k8s and it’s working great. To my understanding, I would need to override the value in my deployment file using GF_AUTH_GITHUB_ENABLED, but that seems to have no effect. Since I don’t see a custom.ini in /etc/grafana, I think that’s the problem.
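For reference, here’s roughly what I added to the deployment spec (this is from memory, so the names and image tag are approximate):

containers:
  - name: grafana
    image: grafana/grafana
    env:
      - name: GF_AUTH_GITHUB_ENABLED
        value: "true"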

My question is: how would I go about solving this? Should I use a different image? Any suggestions would be most helpful.

The key GF_AUTH_GITHUB_ENABLED for the environment variable override looks correct.

There is a hierarchy to the different ways of configuring Grafana. Setting environment variables with the GF_ syntax overrides the config file values. Overriding is not the same as overwriting: it does not change the config file. It sets environment variables, and Grafana uses the value from an environment variable if it exists.

For the Docker image, no custom.ini is needed, as the convention is to use environment variables instead.
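The naming pattern is GF_&lt;SectionName&gt;_&lt;KeyName&gt;, uppercased, with the dot in the section name becoming an underscore. For GitHub auth that maps like this:

[auth.github]      ; section and key in grafana.ini
enabled = true

GF_AUTH_GITHUB_ENABLED=true      # the equivalent environment variable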

Run printenv to see the list of environment variables, or check the Grafana server logs. Here I overrode the server.http_port setting as an example:

docker run -d --name=grafana -p 3003:3003 -e "GF_SERVER_HTTP_PORT=3003" grafana/grafana:4.4.1

and looked at the logs with this command: docker logs d2e695f1f9b11a14a2d534091c20bf1b5a1a24761998e066856bbf16b0281363

kubectl logs does the same thing for a pod.

The last line shows that GF_SERVER_HTTP_PORT was set to 3003.
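The Kubernetes equivalent of that docker run would be something like this in the pod template (a sketch; deployment metadata omitted):

containers:
  - name: grafana
    image: grafana/grafana:4.4.1
    ports:
      - containerPort: 3003
    env:
      - name: GF_SERVER_HTTP_PORT
        value: "3003"

Then kubectl logs &lt;pod-name&gt; should show the same override being applied.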

That’s great, thank you. What I did was export the grafana.ini to a ConfigMap in Kubernetes, and then in the deployment I did roughly what you did and passed the relevant environment variables.
Everything works great, thanks a lot.
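In case it helps anyone else, the shape of it is roughly this (the ConfigMap name is my own; I created it with kubectl create configmap grafana-ini --from-file=grafana.ini):

containers:
  - name: grafana
    image: grafana/grafana
    env:
      - name: GF_AUTH_GITHUB_ENABLED
        value: "true"
      # client id and secret are set the same way, e.g. GF_AUTH_GITHUB_CLIENT_ID
    volumeMounts:
      - name: grafana-config
        mountPath: /etc/grafana/grafana.ini
        subPath: grafana.ini   # mount only the file so the rest of /etc/grafana stays intact
volumes:
  - name: grafana-config
    configMap:
      name: grafana-ini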

no3arms, are you currently using any persistent storage with k8s? I can’t get that setup to work because of the user ID change (permissions issue).
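If it’s the change where newer Grafana images run as user 472, one common workaround (a sketch, untested here) is to give the pod a matching securityContext so the mounted volume is writable:

securityContext:
  runAsUser: 472   # uid of the grafana user in recent images
  fsGroup: 472     # makes mounted volumes writable by that group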