Grafana Alerts No value

Hi all,

I am quite new to Grafana and I am using Alerting with v9.3.6.

More or less once a week (but irregularly) the datasource does not return any data and I receive a "no value" alert.
I tried the following measures to get rid of this alert, but none worked:

1.) Alert evaluation behavior:
[screenshot: alert evaluation behavior settings]

2.) Notification policies: A new contact point with an operations mail for errors:

3.) Added a silence for no data:

How can I get rid of all these no-value email alerts?
Thank you!

Any help is very much appreciated :slight_smile:

hi @grafananewbie :wave:

have you tried using silences?
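
For a silence to catch those alerts, its matchers need to line up with the labels Grafana attaches to the generated alert instances. A rough sketch of what those labels typically look like (label names assumed from Grafana's auto-generated DatasourceNoData alerts; check the labels on one of your real alert instances):

```yaml
# Illustrative only: typical labels on a Grafana-generated "no data" alert.
# The silence needs matchers on label names/values that actually appear here.
alertname: DatasourceNoData                    # assumed name of the generated alert
datasource_uid: <uid-of-the-failing-datasource>
rulename: <title-of-the-original-alert-rule>
```

If a matcher references a label that is not present on the generated alert, the silence will not suppress anything.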

Hi @antonio,
yes, I configured a silence for no data (see the third picture in the original post). I don't know, maybe there is something wrong with the label?

oops. my bad :sweat:

let me reach out to my team and I will get back to you


sorry for the delay @grafananewbie

could you share a screenshot of the actual alert you are getting?
also the status history, please

thank you

Hi @antonio,
sure:
Here is the Alert History; the alert on 26th April was a "true" alert. However, an alert was also sent on 23rd April at 21:20, and you can see the alert message below.



thanks for your help!

hi @grafananewbie

could you check the Timing options in your root policy (Alerting > Notification Policies > Edit)?
The delay could be explained by one of those settings.

Also, could you double-check that we are looking at the right alert? The notification you received seems to have a different name.
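
For context, the Timing options on the root policy correspond to fields along these lines when the notification policy is provisioned from a file (a sketch only; field names assumed from Grafana 9 file provisioning):

```yaml
# Sketch of a provisioned root notification policy; the same values show up as
# "Timing options" in Alerting > Notification Policies. Names are assumptions.
apiVersion: 1
policies:
  - orgId: 1
    receiver: alerting-list        # the poster's custom contact point
    group_by: ['grafana_folder', 'alertname']
    group_wait: 30s                # delay before the first notification of a group
    group_interval: 5m             # delay before notifying about new alerts in a group
    repeat_interval: 4h            # resend interval while an alert keeps firing
```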

Hi @antonio,
the timing options look exactly the same, except that I have a custom alerting-list instead of the grafana-default-email. But I guess that should not cause the problem.
The screenshots were badly chosen, sorry for that! The alert history is the same. However, I got 32 alerts for every single alert configured. Please see the attached screenshot showing that there was an alert for the given alert history as well.

Hi @antonio,
any news on the Grafana "No value" alerts?
Thanks!

Hi,

There are two different event types:

  • "no data" → the query returned no data. This can happen if your metrics collector is broken or delayed and the data really is missing in the database.
  • "no value" → the query returned an error or a timeout. The reason is typically a network or database issue, so the query from Grafana fails for some reason.


Did you try setting the “Alert state if execution error or timeout” to “OK”?
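
For reference, if the rule is provisioned from a file rather than edited in the UI, those two settings map onto fields roughly like this (a sketch; field names assumed from Grafana 9 alert-rule file provisioning, queries omitted):

```yaml
# Sketch of an alert rule provisioned from a file (Grafana 9, names assumed).
apiVersion: 1
groups:
  - orgId: 1
    name: example-rule-group
    folder: example-folder
    interval: 1m
    rules:
      - uid: example-alert
        title: Example alert
        condition: C
        data: []               # queries and expressions omitted in this sketch
        for: 5m
        noDataState: OK        # "Alert state if no data or all values are null"
        execErrState: OK       # "Alert state if execution error or timeout"
```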

Hi @schneefisch
Thank you for the explanation. I checked the alert and both options were already set to 'OK'…

Hi @grafananewbie ,

maybe there is another reason.
I seem to have a similar issue.

We are regularly receiving "no value" alerts, and when I look into the logs, the reason is not the query towards Prometheus or Elasticsearch, but the query to the Grafana configuration database (PostgreSQL in my case) returning a timeout.
It seems that if the query to update the alert state in Grafana's own configuration database fails, an alert is sent although the error state is configured as "OK".

I guess this is a bug in the alert handling inside Grafana.

Hi @schneefisch
I am having a similar issue. How can I check the logs for the timeout while running Grafana in Kubernetes?
I am using the kube-prometheus-stack with the default settings.

Edit: I got the logs:
lvl=eror msg="error getting dashboard for alert annotation" dashboardUID=xyx alertRuleUID=xyx error="database is locked"

I am getting this error. How can I fix it? :smiling_face_with_tear: It seems to be a bug that a "no value" alert is sent because the database is locked.
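
"database is locked" is an error from the embedded SQLite database, which the kube-prometheus-stack chart uses for Grafana by default. One commonly suggested mitigation (not confirmed in this thread) is to point Grafana at an external database via the chart values, roughly like this (a sketch only; the subchart keys and grafana.ini mapping are assumptions, verify them against your chart version):

```yaml
# Sketch: kube-prometheus-stack values moving Grafana off the embedded SQLite
# database. Keys are assumed from the grafana subchart; host/user/password are
# hypothetical placeholders.
grafana:
  grafana.ini:
    database:
      type: postgres
      host: my-postgres.default.svc:5432   # hypothetical PostgreSQL service
      name: grafana
      user: grafana
      password: changeme                   # store this in a secret in practice
```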