Loki operator: Ruler not able to query

Hello,

I'm trying to create alerting rules with the Loki operator, but unfortunately the Ruler component is not able to query logs. When I query logs with Grafana through the query-frontend everything works, but the ruler always returns 0 log lines (it looks like the Ruler is running the query against empty storage, even though the logs are stored in the S3 bucket). I'm using custom tenant names.

Loki operator version: 0.8.0

Does anyone have experience with this?

The Ruler doesn't query against the storage backend; it queries against the query frontend or the querier. Make sure your ruler has connectivity to your query frontend or querier. Check the ruler logs and see if there are any obvious errors.
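
For reference, this behaviour comes down to the ruler's evaluation mode in the underlying Loki configuration. Below is a minimal, hand-written sketch of just that block (the operator generates the real config from your LokiStack and RulerConfig resources, and the service address here is only a placeholder):

ruler:
  evaluation:
    # "local": the ruler evaluates rules with its own embedded query engine.
    # "remote": the ruler sends rule queries to the query frontend over gRPC.
    mode: remote
    query_frontend:
      # Placeholder gRPC address of the query-frontend service.
      address: dns:///loki-query-frontend.loki.svc.cluster.local:9095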

Also please share your configuration.

Hello,
thanks for the reply.
Here is the configuration. The Ruler has connectivity to both the query frontend and the querier.

Logs from the Ruler:
level=info msg="request timings" insight=true source=loki_ruler rule_name=HighPercentageError rule_type=alerting total=0.001779676 total_bytes=0 query_hash=3381541240
level=info ts=2025-04-10T06:38:24.233298708Z caller=compat.go:67 user=..data rule_name=HighPercentageError rule_type=alerting query="((sum by (app)(rate({app="grafana-stable"} |= "error"[5m])) / sum by (app)(rate({app="grafana-stable"}[5m]))) > 0.01)" query_hash=3381541240 msg="evaluating rule"
level=info ts=2025-04-10T06:38:24.233985553Z caller=engine.go:263 component=ruler evaluation_mode=local org_id=..data msg="executing query" query="((sum by (app)(rate({app="grafana-stable"} |= "error"[5m])) / sum by (app)(rate({app="grafana-stable"}[5m]))) > 0.01)" query_hash=3381541240 type=instant
level=info ts=2025-04-10T06:38:24.235810266Z caller=metrics.go:237 component=ruler evaluation_mode=local org_id=..data latency=fast query="((sum by (app)(rate({app="grafana-stable"} |= "error"[5m])) / sum by (app)(rate({app="grafana-stable"}[5m]))) > 0.01)" query_hash=3381541240 query_type=metric range_type=instant length=0s start_delta=3.301191ms end_delta=3.301345ms step=0s duration=1.779676ms status=200 limit=0 returned_lines=0 throughput=0B total_bytes=0B total_bytes_structured_metadata=0B lines_per_second=0 total_lines=0 post_filter_lines=0 total_entries=0 store_chunks_download_time=0s queue_time=0s splits=0 shards=0 query_referenced_structured_metadata=false pipeline_wrapper_filtered_lines=0 chunk_refs_fetch_time=2.034362ms cache_chunk_req=0 cache_chunk_hit=0 cache_chunk_bytes_stored=0 cache_chunk_bytes_fetched=0 cache_chunk_download_time=0s…

config:
kind: LokiStack
metadata:
  name: lokistack-sample
  namespace: loki
  labels:
    app.kubernetes.io/instance: lokistack
spec:
  size: 1x.pico
  storageClassName: trident-iscsi
  tenants:
    mode: openshift-logging
  managementState: Managed
  proxy: {}
  rules:
    enabled: true
    namespaceSelector:
      matchLabels:
        alerting: allowed
    selector:
      matchLabels:
        alerting: allowed
  limits:
    global:
      queries:
        queryTimeout: 3m
  storage:
    schemas:
      - effectiveDate: '2024-11-14'
        version: v13
    secret:
      credentialMode: static
      name: loki-bucket-creds
      type: s3
  hashRing:
    type: memberlist

kind: RulerConfig
metadata:
  labels:
    alerting: allowed
  name: rulerconfig-sample
  namespace: loki
spec:
  alertmanager:
    client:
      tls:
        insecureSkipVerify: true
    discovery:
      enableSRV: false
      refreshInterval: 1m
    enableV2: true
    endpoints:
      - 'http://alertmanager-operated.openshift-monitoring.svc.cluster.local:9093'
    notificationQueue:
      capacity: 5000
      forGracePeriod: 10m
      forOutageTolerance: 1h
      resendDelay: 1m
      timeout: 30s
  evaluationInterval: 1m
  pollInterval: 1m

kind: AlertingRule
metadata:
  labels:
    alerting: allowed
  name: alertingrule-sample-testnamespace
  namespace: testnamespace
spec:
  groups:
    - interval: 15s
      name: alerting-rules-group
      rules:
        - alert: HighPercentageError
          annotations:
            message: 'Namespace {{ $labels.namespace }} application {{ $labels.app }} has a very high error rate.'
            summary: High error rate
          expr: 'sum(rate({app="grafana"} |= "error" [5m])) by (app) / sum(rate({app="grafana"}[5m])) by (app) > 0.01'
          for: 15s
          labels:
            severity: warning
            target: namespace
  tenantID: testnamespace