We have been getting “No Data” alerts on several (but not all) of our CloudWatch-sourced dashboards for about 72 hours now. Every five minutes a query comes back as “No Data”, which fires an alert because we alert on “No Data”, even though the data Grafana claims is missing is actually there in CloudWatch.
There are no recorded changes on our side, and we are definitely not missing the data Grafana claims is missing, but we’re still getting the alerts. So far Grafana paid support has been unable to provide any assistance, so I’m coming here to see if there are any suggestions. This is Grafana Cloud, so I’m not sure how deep into the logs I can get.
Noting: I’ve been going back and forth via email with Grafana paid support for seven days now with no actual resolution to this problem. I’ve been told “we deployed some internal query optimization for promQL that could have triggered this behavior”, so for any potential paying customers: just know that from a paying customer’s perspective, that means Grafana will break your graphs and alerts at random times. This is not what I expected from a managed service, and if you’re looking to make your life easier as an admin, Grafana Cloud probably isn’t what you should be looking at.
Just closing the loop here: After about 14 days of engaging with support, I got a new support rep who walked me through reverting all the changes the previous rep had me make, and then I turned off alerting on ‘No data’ returns. So after half a month of lost production metrics and alerting, the solution ended up being to turn off ‘No data’ alerts, because Grafana broke something in a commit on their side (that comes from the Grafana support rep, so take it up with him).
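For anyone else who ends up doing the same thing and manages their rules via Grafana’s alerting provisioning HTTP API rather than the UI, the change boils down to setting the rule’s no-data state to OK. Below is a minimal sketch of that, assuming a service account token and a known rule UID; the stack URL, token, and rule UID are placeholders, and the endpoint/field names reflect my reading of the provisioning API, not anything support gave me.

```python
# Sketch: switch one alert rule's "No data" handling to OK via the
# Grafana alerting provisioning API (URL, token, and UID are placeholders).
import requests

GRAFANA_URL = "https://your-stack.grafana.net"  # your Grafana Cloud stack URL
API_TOKEN = "glsa_your_service_account_token"   # service account token with alerting access
RULE_UID = "your-rule-uid"                      # UID of the rule that keeps firing on No Data

headers = {"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"}

# Fetch the current rule definition so everything else stays untouched.
resp = requests.get(
    f"{GRAFANA_URL}/api/v1/provisioning/alert-rules/{RULE_UID}",
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
rule = resp.json()

# "NoData" fires the alert when the query returns nothing; "OK" treats it as healthy.
rule["noDataState"] = "OK"

update = requests.put(
    f"{GRAFANA_URL}/api/v1/provisioning/alert-rules/{RULE_UID}",
    headers=headers,
    json=rule,
    timeout=30,
)
update.raise_for_status()
print(f"Rule {RULE_UID} updated: noDataState -> {rule['noDataState']}")
```

If you only have a handful of rules, doing the same thing in the UI under the rule’s no-data/error handling options is obviously simpler; the script is just for anyone who needs to flip a lot of rules at once.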