Hi, any suggestions on what the root cause could be?
I’ve deployed loki-stack 2.5.0 via the Helm chart and set this rule:
alert:
  alerting_groups:
    - name: logs
      rules:
        - alert: QuestDbInstanceHasError
          expr: |
            sum by (log)(
              count_over_time({container="questdb"}
                |= " E "
                | regexp `(?P<log>(?s).+)`
              [1s])
            ) > 0
          for: 0m
          labels:
            severity: critical
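For reference, the chart was deployed roughly like this (the release name and values file name are placeholders; assuming the grafana Helm repo):

helm repo add grafana https://grafana.github.io/helm-charts
helm upgrade --install loki grafana/loki-stack --version 2.5.0 -f values.yaml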
I can see that the alert rules are being read; logs:
loki-0 loki level=info ts=2021-11-11T15:05:50.709721474Z caller=loki.go:272 msg="Loki started"
loki-0 loki level=info ts=2021-11-11T15:05:50.922519612Z caller=mapper.go:154 msg="updating rule file" file=/tmp/scratch/..2021_11_11_15_05_49.801024531/loki-alerting-rules.yaml
level=debug msg="Discoverer channel closed" provider=static/0
loki-0 loki level=info ts=2021-11-11T15:05:52.546380023Z caller=mapper.go:154 msg="updating rule file" file=/tmp/scratch/..data/loki-alerting-rules.yaml
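To double-check what the ruler actually loaded, the Prometheus-compatible rules endpoint can also be queried (a sketch, assuming the default port 3100 and a port-forward to the loki-0 pod):

kubectl port-forward loki-0 3100:3100
curl -s http://localhost:3100/prometheus/api/v1/rules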
But when I push log lines that contain " E ", the rule isn’t picking them up; the instant query runs fine (status=200) but returns nothing (returned_lines=0):
loki-0 loki level=info ts=2021-11-11T16:06:55.473852714Z caller=metrics.go:92 org_id=..data traceID=2c484f13acf2ad03 latency=fast query="(sum by(log)(count_over_time({container=\"questdb\"} |= \" E \" | regexp \"(?P<log>(?s).+)\"[1s])) > 0)" query_type=metric range_type=instant length=0s step=0s duration=1.671772ms status=200 limit=0 returned_lines=0 throughput=0B total_bytes=0B
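For what it’s worth, the same expression can be run by hand against the query endpoint to narrow down whether it’s the data or the rule evaluation that’s the problem (a sketch, assuming port 3100 is reachable and a single-tenant setup):

curl -G -s http://localhost:3100/loki/api/v1/query \
  --data-urlencode 'query=sum by (log)(count_over_time({container="questdb"} |= " E " | regexp `(?P<log>(?s).+)` [1s])) > 0'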