I have set up the ruler in Loki 2.3.0 and can see the rule being evaluated, but the alert does not seem to be sent to Alertmanager. I can see “Get - deadline exceeded” in the log. What should the log look like when an alert is successfully sent to Alertmanager?
Can you please provide your Loki config?
This looks to be an issue communicating with the ring; I don’t think this has anything to do with sending to Alertmanager yet.
I wonder why/how it’s timing out. Maybe it’s a red herring?
First things first: let’s try changing the alert expression to something you are 100% certain will succeed (like 1+1). See if that results in a call to the AM and what log messages are produced.
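For reference, a minimal rules file with an always-evaluating expression like the one suggested might look like this. The group name, alert name, labels, and annotations below are all illustrative placeholders, not taken from the thread:

```yaml
# Illustrative sanity-check rule for the Loki ruler.
# The ruler consumes Prometheus-style alerting rule files.
groups:
  - name: sanity-check
    rules:
      - alert: AlwaysFiring
        # Trivial expression that is certain to evaluate, as suggested above.
        expr: 1 + 1
        labels:
          severity: info
        annotations:
          summary: Ruler-to-Alertmanager connectivity test
```

If this fires and reaches Alertmanager, the ruler-to-Alertmanager path is working and the problem is isolated to the original query expression.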
So the first rule evaluation result was discarded, which seems to be expected, but then I actually received the Alertmanager alert for 1+1. This is a big step forward, thank you very much, @dannykopping!
I ship pod logs from another Kubernetes cluster via Fluentd, and they look fine in Grafana itself.
I will research further why this expression does not work in the ruler, since it works as expected in Grafana.
Just make sure that the ruler has the same storage_config so that it can query your logs.
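As a sketch, the ruler block can sit alongside the same storage_config the queriers use. All paths, URLs, and the boltdb-shipper/filesystem choice below are illustrative assumptions; adjust them to your actual setup:

```yaml
# Illustrative snippet: the ruler queries logs through the same
# storage_config as the queriers, and reads rule files from local disk.
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks

ruler:
  storage:
    type: local
    local:
      directory: /loki/rules      # where rule group files are loaded from
  rule_path: /tmp/loki/rules-temp # scratch space for rule evaluation
  alertmanager_url: http://alertmanager:9093
  ring:
    kvstore:
      store: inmemory
```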
The ruler itself is basically a querier with rule evaluation bolted on.