Hi, I'm having some issues provisioning and maintaining alerts via the API.
Grafana v9.0.4 running in a container
Data sources:
- Azure Metrics / Logs
- SQL Azure PaaS
- SQL Azure VM
I can successfully create and update alerts and dashboards using the APIs.
But I'm running into an issue: when alerts are updated via the API, their alerting state resets, goes to Pending, and then back to Alerting.
I've also noticed that the number of alerts drops and then rises again during deployment.
I'm not doing any deletions, only POSTing new alerts or PUTting existing ones.
These are the endpoints I hit (a rough sketch of the calls is below):
"PUT" { "/api/v1/provisioning/alert-rules/$($AlertModel.uid)" }
"POST" { "/api/v1/provisioning/alert-rules" }
I configured Prometheus to scrape Grafana's metrics and pushed some annotations so I could see what's happening during deployment time.
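The deployment annotations are pushed along these lines (again a sketch; /api/annotations is Grafana's standard annotations endpoint, and $GrafanaUrl / $Token are the same placeholders as above):

```powershell
# Sketch: mark the start of a deployment with a Grafana annotation
$Annotation = @{
    time = [DateTimeOffset]::UtcNow.ToUnixTimeMilliseconds()
    tags = @("deployment")
    text = "Alert provisioning deployment started"
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "$GrafanaUrl/api/annotations" `
    -Headers @{ "Authorization" = "Bearer $Token"; "Content-Type" = "application/json" } `
    -Body $Annotation
```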
You can see from the panel legend that the range of change only occurs while a deployment is running; outside of deployments the alerts stay in the same state constantly.
The number of alerts in the Normal state fluctuates between 127 and 181.
But this is how many alerts actually exist:
My assumption is that the difference of 54 alerts is because some of the alert rules create their own dimensions based on their queries, and those get reset at deployment time.
As for the alerts going from Alerting to OK, then Pending, then Alerting again, I have no idea what's causing that.
And to top off the alerting issues, we have DatasourceNoData and DatasourceError alerts constantly firing at random points, which I also cannot debug.
Would really appreciate some help if possible, please. Thanks!