I am deploying some Grafana alerts using the grafana-operator Helm chart. One of these alerts is a watchdog alert that should always be firing in Alertmanager, to certify that the alerting pipeline is working properly.

According to the Grafana documentation, the Kubernetes manifest is this:
```yaml
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaAlertRuleGroup
metadata:
  name: prometheus-stack-kube-prom-general-grafana-rules
  namespace: monitoring
spec:
  folderRef: prometheus-stack-kube-prom-general-grafana-rules
  instanceSelector:
    matchLabels:
      dashboards: "grafana"
  interval: 1m
  rules:
    - uid: 589e3fbda213
      title: Watchdog
      condition: A
      for: 1m
      data:
        - refId: A
          relativeTimeRange:
            from: 600
            to: 0
          datasourceUid: prometheus
          model:
            datasource:
              type: prometheus
              uid: prometheus
            editorMode: code
            expr: vector(1)
            intervalMs: 1000
            legendFormat: __auto
            refId: A
        - refId: B
          datasourceUid: expr
          model:
            conditions:
              - evaluator:
                  params:
                  type: gt
                operator:
                  type: and
                query:
                  params:
                    - B
                reducer:
                  params:
                  type: last
                type: query
            datasource:
              type: expr
              uid: expr
            expression: A
            intervalMs: 1000
            reducer: last
            refId: B
            type: reduce
      noDataState: OK
      execErrState: Error
      labels:
        severity: info
      annotations:
        summary: "An alert that should always be firing to certify that Alertmanager is working properly."
        description: "This is an alert meant to ensure that the entire alerting pipeline is functional. This alert is always firing, therefore it should always be firing in Alertmanager and always fire against a receiver."
      isPaused: false
```
When I apply the rule, I get the error:

`invalid format of evaluation results for the alert definition A: looks like time series data, only reduced data can be alerted on.`

I have tried almost everything I could find while searching, and I keep getting the same error.
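From reading about the error, my suspicion (I am not sure about this) is that it means the `condition` field must reference a reduced result rather than the raw Prometheus query. Since `B` is the reduce node in my rule, I would expect a change like this fragment to be what the message is asking for:

```yaml
rules:
  - uid: 589e3fbda213
    title: Watchdog
    condition: B   # point at the reduce node B instead of the raw time-series query A
```

Is that the right interpretation of the error, or is something else wrong with the manifest?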