We are currently doing some research to decide whether we should use Prometheus with Alertmanager or the new Grafana unified alerting system to manage our alerts. I have observed different behavior for the same alert depending on whether it is defined in Grafana or directly in Prometheus, and I do not know whether this is a bug (I hope you can help me with that).
Imagine that you define the same alert in both Grafana and Prometheus. The alert is very simple: it checks that the current value of a metric is above 5. When evaluated, the Prometheus expression returns two metric instances (one for each affected server).
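For illustration, here is a minimal sketch of what such a rule might look like on the Prometheus side (the metric name `server_load` and the labels are hypothetical placeholders, not our actual setup):

```yaml
groups:
  - name: example
    rules:
      - alert: MetricAboveThreshold
        # Hypothetical metric that returns one series per server,
        # e.g. {instance="server1"} and {instance="server2"},
        # so the expression yields two alert instances when both exceed 5.
        expr: server_load > 5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "server_load above 5 on {{ $labels.instance }}"
```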
When I check the alert status in Alertmanager (I have configured Grafana to forward its alerts to the Prometheus Alertmanager), I see two alert instances for the alert defined in Prometheus but only one for the alert defined in Grafana. This is confirmed in the Grafana alert rules panel, which also shows only one instance for that rule. Furthermore, when I look at the alert status details, the value field contains an array representing the different instances of the Prometheus expression.
Is this the expected behavior, or should it be flagged as a bug?
I use the latest Grafana Community Edition, 8.3.4.
Thanks for your help!