Hello, please see below.
GitHub issue (opened 09 Jun 2023, closed 01 Dec 2023). Labels: type/bug, effort/none, datasource/CloudWatch.
# What went wrong?
**What happened**:
- I tried creating a Grafana-managed alert using the ANOMALY_DETECTION_BAND() function of the CloudWatch data source.
**What did you expect to happen**:
- I expected the anomaly detection band to return data, but "No data" is shown in the Alerting section even though the same metric can be graphed in a Grafana dashboard.
# How do we reproduce it?
**Step 1**:
- Enable anomaly detection for a CloudWatch metric in AWS, e.g. ALB RequestCount.
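For reference, Step 1 can also be done programmatically. Below is a minimal sketch using boto3's put_anomaly_detector; the namespace, stat, and LoadBalancer dimension value are placeholder assumptions based on the ALB RequestCount example above, and it assumes a reasonably recent boto3 with AWS credentials configured.

```python
# Minimal sketch: enable CloudWatch anomaly detection for an ALB RequestCount metric.
# The namespace, stat, and LoadBalancer dimension value are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_anomaly_detector(
    SingleMetricAnomalyDetector={
        "Namespace": "AWS/ApplicationELB",
        "MetricName": "RequestCount",
        "Dimensions": [
            {"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"},
        ],
        "Stat": "Sum",
    }
)
```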
**Step 2**:
- In Grafana, configure the CloudWatch data source, then go to Alerting and create a new alert rule. Select Grafana-managed alerting and, in query A (with ID m1), use the same CloudWatch metric the anomaly detection model was created for. In query B, switch to "Code" mode and enter ANOMALY_DETECTION_BAND(m1, 2).
**Step 3**:
- Run the queries: query B returns "No data" even though data is available for query A.
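As a way to double-check that the expression itself works outside Grafana, here is a hedged sketch that runs the same query pair (the raw metric as m1 and ANOMALY_DETECTION_BAND(m1, 2) as a second query) directly against the CloudWatch GetMetricData API via boto3; the LoadBalancer dimension value is again a placeholder.

```python
# Sketch: run the same query pair (m1 + ANOMALY_DETECTION_BAND) directly against
# the CloudWatch API, to compare with what Grafana Alerting returns.
# The LoadBalancer dimension value is a placeholder.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "m1",  # query A: the raw metric
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/ApplicationELB",
                    "MetricName": "RequestCount",
                    "Dimensions": [
                        {"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"},
                    ],
                },
                "Period": 300,
                "Stat": "Sum",
            },
        },
        {
            "Id": "ad1",  # query B: the anomaly detection band
            "Expression": "ANOMALY_DETECTION_BAND(m1, 2)",
        },
    ],
    StartTime=now - timedelta(hours=3),
    EndTime=now,
)

# The band expression returns multiple series (upper/lower); print each result.
for result in response["MetricDataResults"]:
    print(result["Id"], result.get("StatusCode"), len(result.get("Values", [])))
```

If this returns values for both queries while the Grafana alert rule shows "No data" for query B, the problem is on the Grafana side rather than in CloudWatch.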
# What Grafana version are you using?
v9.5.0
## Optional Questions:
### Is the bug inside a Dashboard Panel?
Copy the panel's ["get-help" data](https://grafana.com/docs/grafana/latest/troubleshooting/send-panel-to-grafana-support/) here
### Grafana Platform?
Other
### User's OS?
Amazon Linux
### User's Browser?
_No response_
### Is this a Regression?
No
### Are Datasources involved?
CloudWatch
### Anything else to add?
_No response_
Per the above link, the fix for this issue will be released in Grafana 10.2.0. Keep in mind that, for the time being, you will have to enable the ‘sseGroupByDatasource’ feature toggle in order to use metric math in alerts.
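For anyone wondering where that toggle lives, a minimal sketch of the relevant grafana.ini section is below (equivalently, it can be set with the GF_FEATURE_TOGGLES_ENABLE environment variable); adjust for your own deployment.

```ini
# grafana.ini (sketch): enable the feature toggle needed for metric math in alerts
[feature_toggles]
enable = sseGroupByDatasource
```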