My current preview routing -
But I have this notification policy set up, and I should receive an alert in Opsgenie via it -
Also, should I be changing anything in this alert rule's query options? The state history I showed was from the 28th, but I didn't receive an alert at that time.
Also, for your reference, here is an example state history where this is working -
and I receive an alert in Opsgenie too -
So, maybe there’s something that I am missing here
You didn't provide a reproducible example, e.g. it's not clear which labels your alert has, so it's not possible to say what the problem is.
My alert has this label -
But the routing preview is saying it matches only that top-level policy (so the alert goes to email, not Opsgenie) - that indicates a problem with the labels. You may have some whitespace, typos, etc. there.
So edit (delete, save, add, save, repeat if necessary) the alert's labels / notification policy matchers until the routing preview shows the Opsgenie contact point.
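To make the matching concrete, here is a minimal sketch (not Grafana's actual code) of how exact-equality label matching behaves; the `team`/`platform` label is made up:

```go
package main

import "fmt"

// A notification policy matcher of the form label = value.
type matcher struct {
	label, value string
}

// matches reports whether every matcher finds an exactly equal label value.
// Matching is plain string equality, so a trailing space or a typo in either
// the alert label or the policy matcher makes the alert fall through to the
// default (top-level) policy.
func matches(alertLabels map[string]string, matchers []matcher) bool {
	for _, m := range matchers {
		if alertLabels[m.label] != m.value {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical policy routing to the Opsgenie contact point.
	policy := []matcher{{label: "team", value: "platform"}}

	fmt.Println(matches(map[string]string{"team": "platform"}, policy))  // true  -> Opsgenie contact point
	fmt.Println(matches(map[string]string{"team": "platform "}, policy)) // false -> default policy (trailing space)
}
```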
Now, over here -
I have changed the option to the last 7 days, but it says no data. How could this differ from the alert state history?
Sorry, this is off topic.
Ok, but for the routing preview to work, will I need a data point, or will it just work with the labels added?
Check your routing logic: you are playing with NoData there
Yeah, that's been added so that we don't get notifications if there's no data.
I have Grafana logs sent to Splunk - is there something that I should search for in them?
Hey, this has been fixed. The problem was that we were actually receiving alerts in Opsgenie, but since the aliases were the same, alerts for the alert instances within an alert rule were getting appended to the same alert.
We faced this problem -
## Context
If you have a setup with _one_ Grafana instance and _one_ OpsGenie integration configured with the default settings, the grafana-opsgenie integration functions as expected.
However, if you have a setup with _multiple_ Grafana instances and/or _multiple_ OpsGenie integrations configured with the default settings, this can introduce some issues. There are simple workarounds you can do in OpsGenie to address these issues described further down. Though these workarounds are suboptimal compared to how this could potentially be solved by updating the implementation of the integration.
### OpsGenie Alert Alias
Unique identifier for "open" alerts in OpsGenie
#### OpsGenie Alert De-duplication
https://docs.opsgenie.com/docs/alert-deduplication
OpsGenie uses alert-deduplication based on the alert _alias_. In short, there can be at most one open alert with the same alias at any time.
#### Grafana-OpsGenie Integration Default Settings
When you create a new Grafana integration in OpsGenie, the default setting for the alias is `alias="{{alias}}"`, where `alias==Grafana alert rule ID` https://github.com/grafana/grafana/blob/438b403acc3e4a374872104a2ef1a510abdd5075/pkg/services/alerting/notifiers/opsgenie.go#L160
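For illustration only - a simplified sketch of the idea (not the Grafana source linked above): with the default settings, the alias boils down to the numeric rule ID, so it is only unique per Grafana instance:

```go
package main

import (
	"fmt"
	"strconv"
)

// Simplified illustration (not the actual Grafana source linked above):
// with the default settings, the OpsGenie alias is derived only from the
// numeric alert rule ID, so unrelated rules that happen to share an ID on
// different Grafana instances end up with the same alias.
func defaultAlias(ruleID int64) string {
	return strconv.FormatInt(ruleID, 10)
}

func main() {
	fmt.Println(defaultAlias(42)) // "42" on instance A
	fmt.Println(defaultAlias(42)) // "42" on instance B as well -> same alias
}
```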
## Issue
The _alert rule ID_ that is sent to OpsGenie from Grafana as a reference to triggered alerts is a plain auto-increment numeric ID of a Grafana alert. This ID is only a unique identifier for alerts per Grafana instance and is _not_ suitable to be used as an alias for a setup with _multiple_ Grafana instances and/or _multiple_ OpsGenie integrations.
Some examples to clarify the issue:
(The two example issues can easily be solved by the **workaround** explained further down)
### Ex. 1 - Setup with multiple Grafana instances A and B
1. An alert is triggered in Grafana instance **A** with alert rule id=**42**
2. Grafana **A** sends a notification to OpsGenie that an alert with id=**42** changed state from ok->alerting
3. OpsGenie creates an _open_ alert with alias=**42**
4. A completely different alert is then triggered in Grafana instance **B** that also happens to have alert rule id=**42**
5. Grafana **B** sends a notification to OpsGenie that an alert with id=**42** changed state from ok->alerting
6. (in the case when the first OpsGenie alert created in step 3. is still "open") OpsGenie de-duplicates the _open_ alert with alias=**42**, increasing the count. The alert with id=**42** from Grafana instance **B** is _lost_ due to unintended alert de-duplication in OpsGenie. This can cause people to miss out on alerts.
(This issue only occurs when a new Grafana alert is triggered while there is already an "open" alert in OpsGenie with the same alias/id, created by an alert from a different Grafana instance. This makes the issue very unpredictable, and it is difficult to debug why some alerts are never created in OpsGenie)
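A toy model of the failure in Ex. 1, assuming only what the de-duplication docs above state (at most one open alert per alias; a duplicate just bumps a count); this is not OpsGenie's actual implementation:

```go
package main

import "fmt"

// Toy model of OpsGenie alias-based de-duplication (not OpsGenie's actual
// implementation): at most one open alert per alias; a second notification
// with the same alias only bumps the count of the existing open alert.
type openAlert struct {
	source string // which Grafana instance created it
	count  int
}

func receive(open map[string]*openAlert, alias, source string) {
	if a, ok := open[alias]; ok {
		a.count++ // de-duplicated: the new notification is folded into the existing alert
		return
	}
	open[alias] = &openAlert{source: source, count: 1}
}

func main() {
	open := map[string]*openAlert{}
	receive(open, "42", "grafana-A") // steps 2-3: alert created for instance A
	receive(open, "42", "grafana-B") // steps 5-6: instance B's alert is "lost"
	fmt.Printf("%+v\n", open["42"])  // &{source:grafana-A count:2} - only A's alert exists
}
```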
### Ex. 2 - Setup with single Grafana, multiple grafana-opsgenie integrations/notification channels
A Grafana alert can be configured to have multiple "Send to" OpsGenie notification channels in the same alert. For example, let's say that we have one Grafana alert with id=**13** and two OpsGenie notification channels/integrations **X** and **Y** configured as "Send to". The two different OpsGenie notification channels **X** and **Y** are connected to two different teams in OpsGenie that should both receive the alert and be notified in OpsGenie.
1. The alert with id=**13** is triggered in Grafana
2. Grafana sends a notification to OpsGenie via integration **X**, informing that alert with id=**13** changed state from ok->alerting
3. Grafana sends a notification to OpsGenie via integration **Y**, informing that alert with id=**13** changed state from ok->alerting
4. OpsGenie receives the notification from the grafana-opsgenie integration **X** and creates an _open_ alert with alias=**13**
5. OpsGenie receives the notification from the grafana-opsgenie integration **Y** and instantly de-duplicates the alert, with the result that one of the teams (**Y**) never receives any notification from OpsGenie about this alert.
(Whether the Grafana notification from opsgenie-integration X or Y reaches OpsGenie first can vary due to network delays, making it inconsistent which team receives the alert in OpsGenie. Either way, there will be at most one alert created and one team notified in OpsGenie; all other teams configured for this alert will miss it)
## Workaround
There's a simple solution for this. From the OpsGenie side, the alias of alerts created from an integration is a user-configurable setting. The default is `alias="{{alias}}"`, where `alias==Grafana alert rule ID`. However, the OpsGenie alias can be configured to dynamically be an actual unique ID for a setup with _multiple_ instances and/or _multiple_ integrations. Along with the _alert rule ID_, Grafana also sends other data about the alert, such as: _dashboard UID_, _URL_, _tags_ etc. to OpsGenie.
You can easily configure an OpsGenie integration alias to dynamically concatenate a string from the variables passed by Grafana in different ways to create a unique alert alias for all alerts, enabling a working setup with _multiple_ Grafana instances and/or OpsGenie integrations tied to the same OpsGenie service.
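The alias itself is configured as a template in the OpsGenie integration UI, not in code; this sketch only illustrates the kind of composition such a template performs, with made-up field values:

```go
package main

import "fmt"

// Illustrative only: the real composition happens in the OpsGenie integration's
// alias template, not in code. Combining fields that Grafana already sends
// (e.g. instance URL, dashboard UID, rule ID) yields an alias that stays unique
// across multiple Grafana instances and integrations.
func uniqueAlias(grafanaURL, dashboardUID string, ruleID int64) string {
	return fmt.Sprintf("%s/%s/%d", grafanaURL, dashboardUID, ruleID)
}

func main() {
	fmt.Println(uniqueAlias("https://grafana-a.example.com", "abc123", 42))
	fmt.Println(uniqueAlias("https://grafana-b.example.com", "xyz789", 42)) // same rule ID, different alias
}
```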
### Caveat
#### Auto-close Alerts in OpsGenie
Auto-close of alerts can also be handled by applying the same workaround described above: configure the integration to use the same unique alias in the "Close Alert" settings as in the "Create Alert" settings.
There is one caveat, though. The way the integration currently implements auto-closing of alerts limits the alert fields sent to OpsGenie to _only_ the rule ID of the alert. The other fields, such as _tags_ etc., are sent only by the `createAlert()` function, not by the `closeAlert()` function
https://github.com/grafana/grafana/blob/438b403acc3e4a374872104a2ef1a510abdd5075/pkg/services/alerting/notifiers/opsgenie.go#L221
Invariant: the same alias assigned to an alert when creating it through the integration, must also be used for (auto-)closing the alert through the integration.
Given the invariant, the only alert field we can leverage to configure a unique alias in OpsGenie is the _alert rule ID_, which is sent by _both_ `closeAlert()` and `createAlert()`. Only having the alert rule ID to play with introduces restrictions on making a _dynamic_ unique alias on the OpsGenie side of the integration. The only solution is to add _more_ integrations, where unique aliases are achieved by concatenating the alert rule ID with literal strings (different literal strings for each integration).
Ex. different integrations with unique alias configured as: `alias="<literal string: team name>/<literal string: URL of the Grafana instance>/{{alias}}"`
**Ex. for a setup with 5 Grafana instances and 10 different OpsGenie teams**
To have no unintended alert de-duplication and _working auto-closing_, there would have to be one integration per team for each Grafana instance configured in OpsGenie: 5x10=50 integrations are required for a properly working setup.
If the integration could be updated so that the `closeAlert()` function sends all the alert fields in the POST body, as the `createAlert()` function does, the same setup could potentially be achieved with only _one_ integration, by leveraging several alert fields to configure a proper _dynamic_ unique alias. This would be a more manageable and maintainable alternative.
This might be difficult, though, if the alert identifier has to be given directly as an in-line parameter to the close-alert API endpoint `https://api.opsgenie.com/v2/alerts/:identifier/close`
https://docs.opsgenie.com/docs/alert-api#close-alert
I changed the alias in OpsGenie -
`{{extraProperties.alertname}}-{{extraProperties.stack}}-{{extraProperties.instance}}-{{extraProperties.job}}`
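For illustration, here is roughly how the template above expands (label values are made up): each alert instance of the same rule now gets its own alias, so instances are no longer appended to one Opsgenie alert:

```go
package main

import "fmt"

// Sketch of how the alias template above expands (label values are made up):
// each alert instance of the same rule gets a distinct alias, so Opsgenie no
// longer de-duplicates different instances into a single alert.
func alias(alertname, stack, instance, job string) string {
	return fmt.Sprintf("%s-%s-%s-%s", alertname, stack, instance, job)
}

func main() {
	fmt.Println(alias("HighCPU", "prod", "node-1:9100", "node-exporter"))
	fmt.Println(alias("HighCPU", "prod", "node-2:9100", "node-exporter")) // distinct alias per instance
}
```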
This worked for me. Thanks for taking the time and effort, @jangaraj.