Grafana Alert sent 3 alert notifications at once

I created an alert rule to monitor the memory usage of a Kubernetes cluster,
and I configured a contact point and a notification policy to send alert notifications to Slack.
When the memory usage exceeds the threshold, I receive 3 alert notifications in Slack at once, and the content of the 3 messages is identical.
I tried another contact point type (e.g. AWS SNS) and the result is the same.
Why do we receive 3 alert messages at once?

How is the alert grouping option configured in your notification policy? You can change this by enabling alert grouping in Notification policies so that Grafana sends a single compact notification based on the group-by labels you’ve defined.
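For reference, grouping on a policy is controlled by the group_by, group_wait, group_interval and repeat_interval fields of the corresponding route in the Alertmanager configuration. A minimal sketch of what that could look like for your policy (the labels and timings here are just examples, not your actual values):

{
  "receiver": "slack",
  "object_matchers": [["scope", "=", "test"]],
  "group_by": ["alertname", "scope"],
  "group_wait": "30s",
  "group_interval": "5m",
  "repeat_interval": "4h"
}

With group_by set, all firing alert instances that share those label values are collapsed into a single notification per group.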

I left the alert grouping option in the notification policy configuration at its default.
I think the group options are meant for multiple alerts in a group.
In my example there is only one alert, so I don’t think the group option is related.


Yes, you’re right. If you have only one alert firing, then grouping is not the solution.

Hi! Can you give us some more information about what notification policies you have configured?

If only a single alert instance is created for your alert rule, it might be delivered several times because of how your notification policies are configured.
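For example, a single alert instance can be delivered to more than one contact point if it matches several policies, e.g. when a policy has continue enabled so that matching carries on to its sibling policies. A hypothetical route tree illustrating that (the slack-backup receiver is made up for the example, it is not from your setup):

{
  "receiver": "default",
  "routes": [
    {
      "receiver": "slack",
      "object_matchers": [["scope", "=", "test"]],
      "continue": true
    },
    {
      "receiver": "slack-backup",
      "object_matchers": [["scope", "=~", ".+"]]
    }
  ]
}

Here an alert labelled scope=test would be sent to both slack and slack-backup, so one alert instance produces two notifications.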

Hi @nguyenxuanhai266, could you please confirm whether these 3 notifications come from the same alert instance (only 1 alert firing) or from the alert rule (maybe you have more than one alert firing for the same alert rule)?

@gillesdemey

I created one notification policy:

  • Matching labels: scope=test
  • Group wait / Group interval / Repeat interval: not set (uses the default values of the root policy)
  • Contact point: slack

@soniaaguilar
I confirmed that there is only one alert instance. Please refer to the image below:

Thank you @nguyenxuanhai266. Then if you go to the Groups tab and filter for {scope="test"}, do you still get only one alert instance?

Yes, there is also only 1 alert instance.


Could you please share your Alertmanager configuration?

Here is my configuration:

{
  "template_files": {},
  "alertmanager_config": {
    "route": {
      "receiver": "default",
      "routes": [
        {
          "receiver": "slack",
          "object_matchers": [
            [
              "scope",
              "=",
              "test"
            ]
          ]
        }
      ]
    },
    "templates": null,
    "receivers": [
      {
        "name": "default",
        "grafana_managed_receiver_configs": [
          {
            "uid": "",
            "name": "default",
            "type": "slack",
            "disableResolveMessage": false,
            "settings": {
              "recipient": "test-grafana-alert",
              "username": "azure-grafana",
			  "token": "xxxxxxxxxxxxxxx"
            },
            "secureFields": {
              "token": true
            }
          }
        ]
      },
      {
        "name": "slack",
        "grafana_managed_receiver_configs": [
          {
            "uid": "",
            "name": "slack",
            "type": "slack",
            "disableResolveMessage": false,
            "settings": {
              "recipient": "test-grafana-alert",
              "username": "azure-grafana",
			  "token": "xxxxxxxxxxxxxxxxxx"
            },
            "secureFields": {
              "token": true
            }
          }
        ]
      }
    ]
  }
}

ok, now, could you share your notification policies configuration? thank you!

Here is my notification policies configuration:

I don’t see anything in this information that could cause this duplication of notifications.
I have another question, though: do you have multiple Alertmanagers or multiple Grafana instances?

No, I created a workspace in Amazon Managed Grafana with 1 Alertmanager (the default of AWS Grafana).

Hey @nguyenxuanhai266, were you able to resolve your issue?
Looks like I have the same problem on my side. I have AWS Managed Grafana v9.4 with “Turn Grafana alerting on” enabled and an integration to CloudWatch added.
One configured contact point for Slack.
I created a folder with one rule for some data from CloudWatch, and it fires 3 notifications.

For testing, I’ve just added a simple math rule 1 == 1 and found that in the state history I have 3 Pending and then 3 Alerting rows.
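In case it helps someone reproduce this, an always-firing test rule like that can be written as a single Math expression query in the alert rule’s data array. A rough sketch, assuming the built-in __expr__ expression datasource (field names may differ slightly between Grafana versions):

{
  "refId": "A",
  "datasourceUid": "__expr__",
  "relativeTimeRange": { "from": 600, "to": 0 },
  "model": {
    "refId": "A",
    "type": "math",
    "expression": "1 == 1"
  }
}

With the rule’s condition set to A, this evaluates to true on every run, so a healthy single-instance setup should produce one Pending and one Alerting row per transition rather than three.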

Not yet. I have no solution for it.

Perhaps this is the same issue?


Looks like it might be the same issue, but it’s managed Grafana and we have no idea how it is deployed; probably it’s in HA mode. I checked with nslookup and found there are 3 DNS records, so it looks like HA with 3 instances. Anyway, I’ve already created a support ticket with AWS; I’ll post an update here.