Too many Alerts firing for a single Condition

I have configured an alert in Grafana that evaluates the query every 1 hour. I want it to fire immediately, but the "for" field does not let me enter 0m; it defaults to "1h".

When it fires, it sends 52 notifications to the configured Slack channel.
This happens between 3:31 and 4:30 AM IST.

Every hour there is a condition match, but the alert does not fire at those times.

For Example:
Between 2 and 3 PM IST there is a condition match, and the alert state goes to "Pending" for 1 hour since the "for" value is "1h". After an hour it returns to the "Normal" state instead of moving to "Firing".

This repeats every hour. Then, around 3:30 AM the next day, it fires 52 notifications as stated earlier. After that there is again a quiet period.

Grafana Version: v8.5.6

Also, I am not very clear on how "For" is used.

Note: There is no group wait / group interval configured.

Welcome to the :grafana: community, @tkrishnakumar!

To avoid receiving too many notifications, I think you might need to group the alerts so they are evaluated together after the time interval you specify in "Evaluate every x for x".
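For reference, the grouping knobs on a notification policy look roughly like this in Grafana's file-provisioned form (a hypothetical sketch, not your configuration; the contact-point name `slack-receiver` is an assumption, and file provisioning of policies arrived in later Grafana versions than v8.5.6):

```yaml
# Hypothetical notification-policy sketch (Grafana file provisioning).
apiVersion: 1
policies:
  - orgId: 1
    receiver: slack-receiver          # assumed contact-point name
    group_by: ['grafana_folder', 'alertname']
    group_wait: 30s       # wait before sending the first notification for a new group
    group_interval: 5m    # wait before notifying about new alerts added to an existing group
    repeat_interval: 4h   # wait before re-sending a notification that has not changed
```

With no grouping configured, each alert instance can produce its own notification, which is one way a single rule can generate dozens of Slack messages.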

"For" is the time the alert rule stays in the Pending state before it goes to the Firing state. This is useful to avoid being alerted too early: you don't want to be notified about, for example, high CPU usage unless it stays high for a few minutes.
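To illustrate how "for" relates to the evaluation interval, here is a hypothetical rule fragment in Grafana's alert-rule file-provisioning format (added in Grafana 9.1, so shown only as a sketch; the rule name, uid, and labels are invented, and the query/expression section is omitted):

```yaml
# Hypothetical alert-rule sketch (Grafana file provisioning).
apiVersion: 1
groups:
  - orgId: 1
    name: GroupA
    folder: MyAlerts
    interval: 1h            # "Evaluate every": how often the query is run
    rules:
      - uid: cpu-high       # invented uid for illustration
        title: High CPU
        condition: C
        # data: (query and expression definitions omitted)
        for: 1h             # stay in Pending this long before Firing;
                            # 0s fires on the first matching evaluation
        labels:
          severity: warning
```

With `interval: 1h` and `for: 1h`, the condition must match on two consecutive evaluations an hour apart before the alert fires; if it stops matching in between, the state resets to Normal, which matches the Pending-then-Normal pattern you describe.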

Thanks for the detailed information.

Actually, I have grouped the alerts. A specific group, let's say GroupA, contains 12 alerts, all evaluated every 1 hour with a "for" of 1 hour.

Still, the behaviour is the same.


Could you view your rule in Explore, or check its state history, and verify whether the rule is firing appropriately and how frequently?

alert rules > expand a rule > click Show state history

alert rules > expand a rule > click View > click View in Explore

Also, it would be helpful to see an example of the notifications you have received (are they different, identical, etc.?).

Lastly, please share your query and a screenshot of your notification policy (hiding any sensitive data).

Thank you.

Here is the recent alert state history.

Here is the alert state history while the rule was in the Alerting state.

Received Slack Alerts
Date & Time: 07-Apr-2023, 10:31 AM IST - 11:29 AM IST
Total Fired Alerts: 43
Sample alert out of 43

Alert Rule Details

Notification Policy:

Contact Point Details

It’s hard to know what the issue is, as most of the screenshots have been redacted (colored in). I would think the issue is that the custom labels keep changing between evaluations, which would explain the repeated "Normal" to "Pending" transitions. But without seeing the actual labels and the alert definition, I’m not sure what else I can do to help.
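To illustrate the suspected label issue (a hypothetical fragment, not taken from the redacted rule): in Grafana unified alerting, labels form part of an alert instance's identity, so a label whose value changes on every evaluation creates a brand-new instance each time, which starts over in Pending. Annotations, by contrast, can change without resetting state.

```yaml
labels:
  # Risky: interpolating the measured value into a label gives the
  # alert instance a new identity on every evaluation, so the "for"
  # duration never elapses for any single instance.
  current_value: "{{ $value }}"
annotations:
  # Safer: put changing values in annotations, which do not affect
  # the instance's identity.
  summary: "Value is currently {{ $value }}"
```

If any custom label in the rule templates in a query value or timestamp, moving it into an annotation would be the first thing to try.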