I’m trying to set up a Loki alert rule that fires whenever a certain pattern like error appears in systemd-journal logs. In the alert notification, I also want to include the full log message that matched the pattern as a label (so that Alertmanager can display it in the alert).
Here’s what I have so far:
groups:
  - name: node-log-errors
    rules:
      - alert: MyAlertForNode
        expr: |
          sum by (node, logmsg) (
            count_over_time(
              {job="systemd-journal"} |= "error"
                | regexp `(?P<logmsg>.+)`
                | label_format logmsg="{{.logmsg}}"
              [1m]
            )
          ) > 0
        for: 1m
        labels:
          severity: critical
          node: '{{ $labels.node }}'
          error_summary: '{{ $labels.logmsg }}'
        annotations:
          summary: "Error log detected on node {{ $labels.node }}"
          description: |
            Log message: {{ $labels.logmsg }}
            Please investigate this issue on node {{ $labels.node }}.
What works: When I test this LogQL expression in Grafana (in the Explore view, with Loki as the data source), I can see the logmsg label and its value correctly: the full log message appears as expected.
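For reference, the query I run in Explore is essentially the log pipeline from the expression above, without the sum/count_over_time wrapper:

{job="systemd-journal"} |= "error"
  | regexp `(?P<logmsg>.+)`
  | label_format logmsg="{{.logmsg}}"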
What doesn’t work: After deploying the alert rule, the alert fires and shows up in Alertmanager, but the logmsg label (and consequently error_summary) is missing; it doesn’t appear in the alert labels at all.
Why does the logmsg label (which is visible in the Loki query results) not get propagated to Alertmanager when the alert fires?
Is this a limitation in how the Loki ruler handles dynamic labels produced by regexp / label_format?
Or do I need to adjust the query or alert definition to persist these labels?
Any guidance or examples on how to include the actual log message in alert labels or annotations would be appreciated.
Thanks!

