Provisioning dashboards in Docker

Hi Experts

We want to provision dashboards to container instances with mappings to local directories.

Use case: only a handful of different dashboards.
The dashboard JSON should be automatically updated with UI panel edits to the dashboard.
Two Grafana instances, one for development and one for production.

Currently under consideration are mostly Grizzly and the Grafana Operator.

Which tools are best supported and well maintained?
I understand that the Grafana developers use Grizzly; does this make it a good choice?

Thanks for your support and pointers
E

It’s a really broad and preference-oriented question. IMHO, you should try them and find which one suits you and your case better. My choice, for example, is to go with the old-but-gold mounted volumes with the native provisioning YAML files (Provision Grafana | Grafana documentation) in a simple native deployment. It does the job without hassle in my cases.
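For reference, the mounted-volume approach described above can look something like this; the paths, ports, and provider `name` are illustrative choices, not prescriptions:

```yaml
# docker-compose.yml (fragment): mount local dirs into the container
services:
  grafana:
    image: grafana/grafana-oss:latest
    ports:
      - "3000:3000"
    volumes:
      - ./provisioning:/etc/grafana/provisioning
      - ./dashboards:/var/lib/grafana/dashboards

---
# provisioning/dashboards/default.yaml: point Grafana at the JSON files
apiVersion: 1
providers:
  - name: 'local-dashboards'
    type: file
    options:
      path: /var/lib/grafana/dashboards
```

Any `.json` dashboard dropped into `./dashboards` then gets picked up by the file provider without touching the image.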

Thank you for the wise answer!
Perhaps this may be paraphrased as Perl’s TIMTOWTDI versus Python’s “there should be only one obvious way to do it”.

Given the use case described, I would first like to go the shortest, simplest, and supported way to meet the specification and have things working (like Python).

Time to a working system (and freedom from frustration) is the priority.
I prefer a copy/paste tutorial or an example case that “just works”, if anyone knows of such. In my experience this is a really great and smart way to proceed.

Get up and running fast. Optimisation and configuration are the cans that we kick down the road for now :slight_smile:

So I see that YAML is used to configure provisioning. I will need to see whether that includes a Dockerfile, for example. No idea from a very cursory look.

From the docs:

Are these projects then optional?

Any handholding? :wink:

My very best
E

I’m really sorry, but IMHO this way you’ll end up with a mediocre result… I’m more on the “let me study and understand the ways, since I’ll be able to troubleshoot and tune things up later” side of things.

Well, not really a Grafana question, but finishing what I have started. It’s just a classical way to load custom files: COPY in the Dockerfile before the build, or mount some local file volumes. On my GitHub - isaqueprofeta/zabbix-lab: Simple docker monitoring lab with Zabbix, PostgreSQL, PGAdmin, Zapix, Grafana and Mailhog you can see a copy-paste example working for plugin and datasource provisioning right away.
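The COPY alternative mentioned above, as a minimal sketch (the directory layout is my assumption, not taken from the linked repo):

```dockerfile
FROM grafana/grafana-oss:latest
# bake provisioning config and dashboard JSON into the image at build time;
# rebuild the image whenever the dashboards change
COPY provisioning/ /etc/grafana/provisioning/
COPY dashboards/ /var/lib/grafana/dashboards/
```

COPY gives you immutable images (nice for production), while volumes let you edit files live (nice for development), which matches the dev/prod split in the original spec.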

I get your point @isaqueprofeta

What I describe is a workflow that I have found most efficient and most fun. Mediocrity in outcome and lack of wisdom is not an endpoint that I recognise. But maybe there is a price to pay for edginess and impatience, who knows. There are so many other places than devops where I want and need to invest clock cycles.

I like a fast to working system and then pick around and learn organically. Play and poke the system from within. And yes swap frameworks entirely when required but with a live working reference, that being my traditional comfort zone. (Maybe one day a more scholarly approach might work better for me, who knows?)

PS Testing iterations based on assumptions is a way to check with more certainty whether you have it right or not. This has been my experience. It is partly because I often find documentation difficult to interpret and ambiguous. My issue.

/rant

I am now going to dive into your links and see if I can play puppeteer and bring something up. Exciting.

Thanks friend.
E

I see in this nice project that it should be easy to click and paste and get up and running. Thank you.

The datasources and plugins in the relevant directories are then deployed in the Grafana instance, good.

If I placed a foo_dashboard.json in the dashboards dir, I assume that this would be similarly deployed.

In the spec I mentioned that I would like UI panel edits that reliably change the JSON in the runtime to be serialised back to that foo_dashboard.json. The docs indicate that the design is that the provisioned JSON is the source of truth and always overrides user edits. User edits are saved to the database, and that database copy is always overwritten. For example:
When Grafana starts, it updates and inserts all dashboards available in the configured path. Then later on, Grafana polls that path every updateIntervalSeconds, looks for updated JSON files, and updates and inserts those into the database.
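The polling behavior quoted above maps to settings on the dashboard provider; a sketch, with illustrative values:

```yaml
# provisioning/dashboards/default.yaml
apiVersion: 1
providers:
  - name: 'dev-dashboards'
    type: file
    updateIntervalSeconds: 10   # how often Grafana re-reads the JSON files
    allowUiUpdates: true        # permit saving UI edits on provisioned dashboards
    options:
      path: /var/lib/grafana/dashboards
```

Note that `allowUiUpdates: true` only lets UI saves land in the database copy; on the next change to the file, the provisioned JSON still wins, so it does not by itself write edits back to disk.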
It seems that achieving UI edits which update the file JSON would require some additional workflow.
Are there automation approaches that allow this? I want to easily and automatically update the source JSON.

Thanks for any help that may be useful for this.

Warm wishes
E

IMHO you should keep the updates in a separate API-oriented project or in a CI/CD repository that updates your containers on each new deploy. It isn’t something the application should handle by being “container aware”.

That is a wise opinion, thanks

However here is the use case.

I want to enable development via the UI and, at the same time, development by editing the JSON of the current runtime dashboard. That means being able to use these two methods interchangeably.

Does this make any sense?

Over the years I have tried to edit either panel JSON or the entire dashboard JSON.
I find the IDE/UI of Grafana very difficult to use as an editor. Accordingly, I have given up except for very minor and occasional edits.

What is the motivation for wanting to edit the JSON directly in the same session as the UI?
Working in code is more deterministic, lower overhead, and a more efficient way to copy features in bulk (dashboards-as-code mantra, after all :wink: ).
Critically, using a live Grafana instance as the target for JSON edits allows iterative development and checking the code for errors and functionality. My experience is that I benefit very much from this type of facility: feedback from the “compiler”. It’s also a methodology for learning, rising from mediocrity.

That is why I want, in a development channel, live update of the JSON file, and bidirectionally. It is OK if a reload is required after editing the JSON. However, I prefer not to have to restart the container each time. But if this is the only way, it is still worth it for me.

So I can imagine writing some API client that could access the current state of the JSON (from the database) and update the JSON file. Actually, just in writing this down I can figure something out.
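One half of such a round-trip can be sketched against the Grafana HTTP API (`GET /api/dashboards/uid/:uid`): pull the database copy and write it back over the provisioned file. The base URL, UID, token, and output path below are all placeholders for your own setup:

```python
import json
import urllib.request

def unwrap_dashboard(payload: dict) -> dict:
    """Strip the API envelope ({"dashboard": ..., "meta": ...}) and the
    instance-specific database id so the JSON can go back into the
    provisioning directory."""
    dash = dict(payload.get("dashboard", payload))
    dash.pop("id", None)  # database id is instance-specific
    return dash

def export_dashboard(base_url: str, uid: str, token: str, path: str) -> None:
    """Fetch the current (database) state of a dashboard and overwrite
    the provisioned JSON file with it."""
    req = urllib.request.Request(
        f"{base_url}/api/dashboards/uid/{uid}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    with open(path, "w") as f:
        json.dump(unwrap_dashboard(payload), f, indent=2, sort_keys=True)

if __name__ == "__main__":
    # placeholder values; point these at your own instance
    export_dashboard(
        "http://localhost:3000",
        "foo-dash",
        "my-service-account-token",
        "provisioning/dashboards/foo_dashboard.json",
    )
```

Run on a timer or a file-watcher in the dev channel only; in production the provisioned JSON stays the one-way source of truth.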

But hey, does no one else have this same desire, and has someone done it before? Anyone else interested in a well-thought-through solution for this?

I am curious if this is only my itch.

Thanks for the positive help, to you and all
E

That sounds like a job for Terraform. A Grafana provider is available.
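For context, a minimal sketch of the Grafana provider pointed at an on-prem instance; the URL, auth variable, and file path are placeholders:

```hcl
terraform {
  required_providers {
    grafana = {
      source = "grafana/grafana"
    }
  }
}

provider "grafana" {
  url  = "http://localhost:3000" # your own instance, not Grafana Cloud
  auth = var.grafana_auth        # service account token or "user:password"
}

resource "grafana_dashboard" "foo" {
  # keep the JSON in git; `terraform apply` pushes it to the instance
  config_json = file("${path.module}/dashboards/foo_dashboard.json")
}
```

This keeps the dashboard JSON as the source of truth in the repository, with Terraform handling the push.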

Thank you @jangaraj

This looks very nice.
Allow me two follow-ups.

The Terraform docs say the following.

Prerequisites
Before you begin, you should have the following available:

I prefer running on my own instance, for various reasons.
Is this possible while still using Terraform, or is it more complicated and/or impossible?

I also like the idea of a CLI and Grizzly; could pull/push be part of a good solution?

I am excited to move further with such expert and thoughtful advice.
:pray:
E

My Terraform doc is not saying that:

https://registry.terraform.io/providers/grafana/grafana/latest/docs

Of course, if you want to manage Grafana Cloud resources, then you must have a Grafana Cloud account. I guess you want to manage on-prem Grafana only.

Brilliant, I was blindly following the note about the cloud in the Grafana Terraform docs here: GitHub - grafana/terraform-provider-grafana: Terraform Grafana provider

Yes, I like on-prem Grafana

Terraform is massive. It might even be overkill, but I’m happy to go down this road now.
Is this a good place to start with documentation:

Is it the HCP account that I need?

I am keen to find any useful resources to get up and running fast with a working Terraform Grafana stack (and then iteratively gain deeper understanding). I have seen quite a few tutorials, but none show an example of an implementation that combines UI edits with direct dashboard.json edits.
Would that functionality be part of a CI/CD deployment?
(Sure, I can imagine a way to hand-roll a solution, but that is not how I want to bootstrap; I’d prefer something readymade.)

I am super grateful to you for your handholding into brand new territory for me.

Thank you.

My best
E

I managed to get the datasources YAML and the dashboard JSON to line up.
So the soft links between these are working, and basic provisioning is good and working.

In order to implement development from the provisioned sources, I need to save to a JSON file; it’s the default option when you save. I do this on my local machine in the browser.
Sharing like this is not possible, as you cannot leave your dashboard to get to the Share control in the UI.

For now the working solution is to prepend the __inputs and __requires lists in the JSON that the “Export for external sharing” Grafana UI function generated from a previous version.

This just works, so I’m running with it for now, and will develop a devops framework on top of this.
But for now I have gitability and development by iteration off provisioned dashboards. This was the main requirement.

I could not find any useful docs.
I will submit a PR to the repo for docs on basic provisioning, and possibly some code to automate the path from the local browser machine to the remote server that deploys Grafana, including the prepend and the deployment to /provisioning.
Poor man’s CI/CD.
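In case it helps anyone with the same itch, the prepend step can be sketched as a tiny script; `merge_export_metadata` and the file names are my own inventions, not from any official tool:

```python
import json

def merge_export_metadata(saved: dict, exported: dict) -> dict:
    """Carry the __inputs/__requires lists from an earlier
    'Export for external sharing' into a freshly saved dashboard JSON."""
    merged = dict(saved)
    for key in ("__inputs", "__requires"):
        if key in exported:
            merged[key] = exported[key]
    return merged

if __name__ == "__main__":
    # hypothetical file names; adapt to your layout
    with open("exported_for_sharing.json") as f:
        exported = json.load(f)
    with open("saved_from_ui.json") as f:
        saved = json.load(f)
    with open("provisioning/dashboards/foo_dashboard.json", "w") as f:
        json.dump(merge_export_metadata(saved, exported), f, indent=2)
```

Key order in the output file is irrelevant to Grafana, so a plain dict merge is enough; the fresh save wins on everything except the sharing metadata.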

Thanks for the help received in this thread.

My best
E