Configure Grafana Managed Alert correctly - noData problem

We have a graph that displays data from Elasticsearch. Unfortunately, the data arrives somewhat irregularly, at intervals of anywhere between one and (usually) four minutes.
We have now been asked to alert whenever the value stays below 1 for 5 minutes continuously.

Current configuration:
The alert rule evaluates every minute, and each evaluation queries only the last one-minute interval; if the condition stays true for 5 minutes, we should be alerted.
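
In case it helps, this is roughly what the current rule looks like when written out as alert-rule provisioning YAML. We actually built it in the UI, so the folder, group, UID and datasource names below are placeholders and the Elasticsearch query model is abbreviated:

```yaml
apiVersion: 1
groups:
  - orgId: 1
    name: es-value-alerts            # placeholder group name
    folder: monitoring               # placeholder folder
    interval: 1m                     # evaluate every minute
    rules:
      - uid: value-below-1           # placeholder UID
        title: Value below 1 for 5 minutes
        condition: C
        data:
          - refId: A                 # Elasticsearch query over the last minute
            relativeTimeRange:
              from: 60               # seconds back from "now"
              to: 0
            datasourceUid: es-datasource   # placeholder datasource UID
            model: {}                # Lucene query / metric aggregation omitted
          - refId: B                 # reduce A to a single number
            datasourceUid: __expr__
            model:
              type: reduce
              expression: A
              reducer: min           # example reducer; the exact one isn't the point here
          - refId: C                 # fire when the reduced value is below 1
            datasourceUid: __expr__
            model:
              type: threshold
              expression: B
              conditions:
                - evaluator:
                    type: lt
                    params: [1]
        for: 5m                      # condition must hold for 5 minutes
        noDataState: NoData          # default, and the thing that keeps firing for us
        execErrState: Error
```

So in short: evaluation every minute (`interval: 1m`), a 60-second query window, `for: 5m`, and the default `noDataState: NoData`.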

Problem:
Because the gap between two values is frequently longer than one minute, the one-minute query regularly comes back empty and Grafana reports that no data is arriving (noData alert).
If we widen the interval the alert looks at, e.g. to 4 minutes, we get false positives instead: a single value below 1 somewhere in the queried window was enough to trigger the alert, even though it was only a short dip and not a sustained low value.
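
For completeness, the only thing we changed for the 4-minute variant was the query window on refId A:

```yaml
# The only change compared to the rule above: a wider query window.
- refId: A
  relativeTimeRange:
    from: 240                      # last 4 minutes instead of the last 1
    to: 0
  datasourceUid: es-datasource     # placeholder UID
  model: {}                        # same Elasticsearch query as before
```

With that window, a single below-1 sample stays inside the queried range for up to four consecutive evaluations, which is why one short dip was enough to raise an alert for us.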

Temporary “solution”:
We built two alert rules that are detached from the graph panel.
Alarm 1 evaluates our condition on a one-minute interval every minute and treats “noData” as OK.
Alarm 2 only checks whether data is still arriving at all.
Unfortunately, Alarm 1 only ignores the “noData” state for the alert itself, not in the display in the graph; that looked very chaotic, which is the only reason we decoupled the alerts from the graph (see the sketch below).
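
Sketched in the same provisioning format (again with placeholder names/UIDs and abbreviated query models), the workaround boils down to the two `noDataState` settings:

```yaml
apiVersion: 1
groups:
  - orgId: 1
    name: es-value-workaround          # placeholder group name
    folder: monitoring                 # placeholder folder
    interval: 1m
    rules:
      # Alarm 1: same A/B/C query chain as before, but gaps no longer fire.
      - uid: value-below-1-nodata-ok   # placeholder UID
        title: Value below 1 for 5 minutes (noData treated as OK)
        condition: C
        data:
          - refId: A                   # same Elasticsearch query as before
            relativeTimeRange: { from: 60, to: 0 }
            datasourceUid: es-datasource
            model: {}
          - refId: B
            datasourceUid: __expr__
            model: { type: reduce, expression: A, reducer: min }
          - refId: C
            datasourceUid: __expr__
            model:
              type: threshold
              expression: B
              conditions:
                - evaluator: { type: lt, params: [1] }
        for: 5m
        noDataState: OK                # the one change that matters for Alarm 1
        execErrState: Error

      # Alarm 2: watches whether documents are still arriving at all.
      - uid: es-data-missing           # placeholder UID
        title: No data arriving from Elasticsearch
        condition: C
        data:
          - refId: A                   # count of documents in a wider window
            relativeTimeRange: { from: 600, to: 0 }   # 10 minutes is a guess, not our real value
            datasourceUid: es-datasource
            model: {}                  # count aggregation omitted
          - refId: B
            datasourceUid: __expr__
            model: { type: reduce, expression: A, reducer: sum }
          - refId: C
            datasourceUid: __expr__
            model:
              type: threshold
              expression: B
              conditions:
                - evaluator: { type: lt, params: [1] }
        for: 0s                        # no pending period here (a guess; ours may differ)
        noDataState: Alerting          # an empty result is exactly what should alert here
        execErrState: Error
```

The only functional difference between Alarm 1 and our original rule is `noDataState: OK`; everything else is unchanged.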

However, we actually do need the display in the graph for other colleagues, so we are now looking for a better solution to the problem.