Grafana sending weird alert notifications for alert rules in Normal state

Hello,

I am getting notifications for some alert rules that were at one point in the "pending (error)" state.
They are sent at an 8h frequency and read at the bottom:

grafana_state_reason Error
Observed 72h45m38s before this notification was delivered, at 2024-06-16 14:13:00 +0000 UTC

What did I do at 2024-06-16 14:13:00 +0000 UTC? I edited the underlying Prometheus data source to point to the new host:port it had been migrated to.
At that moment the data source was not reachable, which caused the "pending (error)" state. I then corrected the Docker Compose configuration and restarted the Compose stack including Grafana, after which the data source was functional again.
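For context, this is a minimal sketch of the corrected setup; the service names and ports below are placeholders, not my exact configuration:

```yaml
# Minimal docker-compose.yml sketch – names and ports are illustrative.
# The Grafana Prometheus data source URL then points to http://prometheus:9090,
# since the Compose service name resolves inside the shared network.
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
```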

How can I repair the alerts? Is there a way to wipe the legacy / corrupt alert state history?

I’m using grafana:latest and the new Grafana alerting system.

In the end it was my old Grafana instance that sent these alerts: I had terminated the old Prometheus instance after migrating from LXC containers to Docker, and I expected the old Grafana instance to be terminated and disabled as well, but a certificate renewal automation unexpectedly brought it back to life…