Alerts getting closed stating Normal (MissingSeries)

My alerts are getting closed after some time even though the condition is still valid. Grafana marks them Normal (MissingSeries) and closes them, which I do not want: the alert should remain open as long as the condition remains true. How can I achieve that? I set the NoData handling as well, but alerts are still being closed. I also do not want an alert when an actual no-data condition happens. Basically, if an incident is already open and a no-data situation occurs, do not close the alert; wait for the next data point and judge based on its value. Please help me.

How can your condition be valid when you have NoData?
That looks like a problem in your query, e.g. it checks only the last 5 minutes while metrics are ingested every 5 minutes, so of course there can be a state where the last-5-minute query returns nothing. Make sure your alert query returns data in all cases.
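For example (a rough PromQL sketch; the metric name and threshold are made up, since the actual alert query is not shown in the thread), widening the lookback window beyond one scrape interval, or carrying the last sample forward, keeps the query returning data even when a single scrape is missed:

    # Hypothetical metric and threshold - adapt to your own alert query.
    # Average over a window longer than the scrape/ingest interval:
    avg_over_time(my_availability_ratio[10m]) < 0.75

    # Or carry the most recent sample forward across short gaps
    # (last_over_time needs Prometheus 2.26+ or a compatible backend):
    last_over_time(my_availability_ratio[10m]) < 0.75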

You may also use Keep Last State, which was announced recently. See “Keep Last State for Grafana Managed Alerting”.

Thanks for replying.
In my case the evaluation period is 5m and the “for” duration is 5m as well.
I am using Grafana v10.0.1.
I have attached the screenshots.
What you are suggesting is not clear to me. Are you saying the evaluation period should be long enough to make sure some minimum number of data points have occurred?


How often is that metric collected/ingested? Blind guess: 5 min, so don’t query the last 5 minutes in the alert query, query the last 10 minutes. There must be overlap, because metric processing also adds delay.

In my case the scrape interval is 60s. I checked the grafana-agent configuration and also validated in Explore that data points are coming in every minute.

      prometheus.scrape "kubernetes_nodes" {
        targets         = discovery.relabel.kubernetes_nodes.output
        forward_to      = [prometheus.remote_write.default.receiver]
        job_name        = "kubernetes-nodes"
        scrape_interval = "60s"
        scheme          = "https"

.....

      prometheus.scrape "kubernetes_pods" {
        targets         = discovery.relabel.kubernetes_pods.output
        forward_to      = [prometheus.remote_write.default.receiver]
        job_name        = "kubernetes-pods"
        honor_labels    = true
        scrape_interval = "60s"

        clustering {
          enabled = true
        }
      }


OK, then you didn’t have that metric at that time. Maybe that replica was not available (so the _unavailable metric reports that replica at that time, not _available). Read the docs for the metrics you use to understand their meaning :person_shrugging:
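To verify which side actually reported during the gap, you could plot both series in Explore around the timestamp where the alert went to MissingSeries (a sketch assuming kube-state-metrics names; your actual metrics and labels may differ):

    # Hypothetical kube-state-metrics series - check which one has samples
    # at the time of the MissingSeries event.
    kube_deployment_status_replicas_available{deployment="my-app"}
    kube_deployment_status_replicas_unavailable{deployment="my-app"}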

To prevent alerts from closing automatically while the condition is still met, you need to configure the alert conditions correctly. Make sure you use the right “No data” handling option, such as NoData, or whatever suits your requirements. Also make sure that the timeout settings that control when alerts are closed are configured correctly.

Thanks Jangaraj for commenting, but I think the issue here is not about using the wrong metric. The alert is being closed even though the condition remains valid. For example, two pods are desired and both are unable to come up, because I programmed them that way. So the condition is valid and the alert gets triggered. I have not fixed anything, yet after some time the alert gets closed, and 5-10 minutes later a new alert gets created. Essentially this inflates the alert count, when in reality it should be just one alert. I am looking for help to solve this issue. I verified that the data was there: when I plot a graph I can see data points with values. Why Grafana reports “Normal (MissingSeries)” is something I am unable to understand.

Thanks rovertnorton, I did that. I changed the no-data condition from OK to NoData, but the same error kept occurring. Then I changed NoData to Alerting, and that does seem to work: the very first alert is still not closed, since I have not fixed anything. But it also triggers alerts for healthy pods, where the condition is not failing, probably due to missing data points, and it does so around 21 times a day, causing confusion. These are just false alerts. What I would like is to open an incident only when Grafana finds the condition failing twice in a row, and only then change the state, otherwise not. How can I do that?
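One way to express “only alert after two consecutive bad data points” directly in the query (a rough PromQL sketch with a made-up metric and threshold; with a 60s scrape interval a 2m window covers roughly two samples) is to require the condition to hold for the whole window:

    # Hypothetical metric and threshold. The condition is true only if every
    # sample in the last 2 minutes (about two 60s scrapes) is below 0.75, so a
    # single missed scrape does not make the series disappear and a single
    # healthy sample does not start a new alert.
    max_over_time(my_availability_ratio[2m]) < 0.75

This complements the rule’s pending period (“for”), which already delays firing until the condition has been true for consecutive evaluations.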

Please prove it with real metric numbers. I can only guess, because I don’t have access to your TSDB. You must be 100% sure.

You can see the green Normal (MissingSeries) events that close the alerts, but at the same time data points are visible on the graph. There is no place where the graph appears to be broken or discontinuous.


That’s an aggregated/calculated result, so it’s not valid proof. Check the raw metrics at the finest resolution, and only for that particular time series. See “Rate query failing only on Grafana”: a small change in resolution and you may have completely different results in the graph.
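In Explore (or via the Prometheus HTTP API), you could query the raw series with no functions or aggregation and set the query step to the scrape interval so every stored sample is visible (the selector below is hypothetical; use the exact labels of the series behind the alert):

    # Hypothetical raw series selector - no rate()/avg()/sum(), so each 60s
    # sample is shown as stored. Set "Min step" (resolution) to 60s.
    my_availability_metric{namespace="default", pod="my-app-0"}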

I will try it. It’s AWS AMP, so I need to check how to do that there. Anyway, in the graph we can see that the query always shows a value of 0.5 (50%) availability, yet the alert still gets closed. And it happens only when the Normal (MissingSeries) event comes.

Hi @ashish060211, I have the same problem you describe: I’m using Grafana alert provisioning on v9.4 and have been trying to understand how to maintain the same state of an alert regardless of the NoData state, but haven’t found a solution. I thought of not sending email/Slack webhook notifications when an alert instance has “MissingSeries” in the annotations, but unfortunately the matchers in notification policies only use labels. Appreciate any help!