Getting NoData alert although data is there

Hi there

I am new to Grafana and I have an issue where I get an alert saying there is no data although the data is there. I guess I am doing something wrong with the evaluation interval and the "for" duration.

What am I doing?
I am monitoring the batteries at my garden house with a LoRa module. I translate the readings from MQTT to Prometheus and from there into Grafana Cloud.
I send the values from the batteries every two hours.
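
For reference, the series currently look something like this in Prometheus, with a separate metric per battery (the metric and label names here are illustrative, not my exact ones):

# one metric per battery, a new sample every two hours (values are examples)
battery1_voltage{instance="garden",job="mqtt",sensor="lora"} 12.65
battery2_voltage{instance="garden",job="mqtt",sensor="lora"} 12.48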

If the voltage goes below 12.2 V I want an alert, and this is how I configured it:

[screenshot of the alert rule configuration]

But I keep getting NoData alerts, although there is data… I hope someone can help me understand what I am doing wrong.

Hello! :wave: Can you show a screenshot from a time when you got no data? If I had to guess, a metric is occasionally missed, creating a 4-hour gap while your query only looks at the last 3 hours of metrics.
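
One quick way to check is to count how many samples actually fall inside the window your alert query looks at, with something like this (voltage here is a stand-in for your actual metric name):

count_over_time(voltage[3h])

Wherever that series drops out when graphed, the 3-hour window contained no samples at that moment, which is exactly what produces a NoData alert.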

Please see the attached images; there is no gap.

[screenshots of the metrics, showing no gap]

My suggestion would be to make a few changes:

  1. Instead of having a separate metric per battery, use a common metric such as voltage and make each battery a label on voltage, similar to what you have already done for instance, job, sensor, etc. (see the example series after this list).

  2. Then I would change the query from a range query to an instant query that looks something like this:

min(min_over_time(voltage[4h])) by (battery)

This query will tell you the lowest recorded voltage per battery over the last 4 hours. In Prometheus, it's recommended that your query range covers at least twice the data collection interval, which in your case is 2 × 2 hours = 4 hours.

  3. Add a Threshold expression for < 12.2 (the combined query at the end of this post shows the equivalent condition in plain PromQL).
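
To illustrate the first point, after that change the data would look something like this, with one shared metric and one series per battery (label values here are made up for the example):

# one shared metric, one series per battery
voltage{battery="battery1",job="mqtt",sensor="garden"} 12.65
voltage{battery="battery2",job="mqtt",sensor="garden"} 12.48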

These changes will not just make your alert more efficient; they will also rule out Grafana as the issue and help us understand whether the data is arriving late.
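
If it helps to see the whole condition in one place, the query and threshold can also be combined into a single instant query, along these lines (assuming the voltage metric and battery label from point 1):

min by (battery) (min_over_time(voltage[4h])) < 12.2

Any series this returns is a battery that dipped below 12.2 V at some point in the last 4 hours; an empty result means all batteries are fine.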