Loki can't send alerts to Alertmanager

Alertmanager, Loki, and Grafana are set up on the same host. The problem is that Loki can't send triggered alerts to Alertmanager.
Component versions:
alertmanager:0.23.0
loki: 2.4.1
grafana: v8.2.2

The Loki ruler config:

ruler:
  storage:
    type: local
    local:
      directory: /etc/loki/rules
  rule_path: /tmp/loki/rules-tmp
  alertmanager_url: http://XXX.XXX.XX.XX:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
  enable_alertmanager_v2: true
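
Since enable_api is true, a quick way to confirm the ruler has actually loaded the rule groups is to hit the ruler API (this assumes Loki listens on its default HTTP port 3100; adjust the host and port to your setup):

    # List the rule groups the ruler has loaded (default port 3100 assumed)
    curl -s http://localhost:3100/loki/api/v1/rules

    # Show alerts the ruler currently considers pending or firing
    curl -s http://localhost:3100/prometheus/api/v1/alerts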

The rules file:

    groups:
    - name: easystack-alert
      rules:
        - alert: easystack-log-alert
          expr: (sum by(clustername,message)(count_over_time({job="easystack"} != "memory" != "CPU" != "NIC" | json | severity!="info"[5m])) > 0)
          for: 5m
          labels:
            source: loki
            target: easystack
          annotations:
            message: '{{ $labels.clustername }} alert: {{ $labels.message }}.'
            summary: log alert
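
One way to sanity-check the alert expression is to run it as an instant query against Loki's query endpoint and confirm it returns a non-empty result (default port 3100 assumed here):

    # Evaluate the alert expression as an instant query (default port 3100 assumed)
    curl -s -G http://localhost:3100/loki/api/v1/query \
      --data-urlencode 'query=(sum by(clustername,message)(count_over_time({job="easystack"} != "memory" != "CPU" != "NIC" | json | severity!="info"[5m])) > 0)'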

The Alertmanager config file:

route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 30s
  repeat_interval: 24h
  receiver: 'default.hook'
  routes:
    - receiver: 'loki.hook'
      match:
        source: loki
    - receiver: 'web.hook'
      match:
        alertname: hostPingAlert
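
Because enable_alertmanager_v2 is set, the ruler posts to Alertmanager's v2 API. That endpoint can be exercised by hand with a synthetic alert to rule out connectivity problems (the address below is the same placeholder as in the ruler config; the labels are arbitrary test values):

    # Post a synthetic alert to Alertmanager's v2 API to confirm it is reachable
    curl -s -XPOST http://XXX.XXX.XX.XX:9093/api/v2/alerts \
      -H 'Content-Type: application/json' \
      -d '[{"labels":{"alertname":"manual-test","source":"loki"},"annotations":{"summary":"manual connectivity test"}}]'

If the test alert shows up in the Alertmanager UI, the network path is fine and the problem is on the ruler side.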

The directory tree:

├── loki.conf
├── ruler-wal
│   └── easystack
│       └── wal
│           └── 00000000
├── rules
│   └── fake
│       ├── easystack.yml
│       └── smartx.yml
├── start.sh
└── stop.sh

The Loki log; it seems the ruler does evaluate the rules:

ts=2022-01-27T08:33:05.122277206Z caller=spanlogger.go:87 org_id=fake Summary.BytesProcessedPerSecond="4.7 MB" Summary.LinesProcessedPerSecond=15030 Summary.TotalBytesProcessed="63 kB" Summary.TotalLinesProcessed=201 Summary.ExecTime=13.373125ms
level=info ts=2022-01-27T08:33:05.124576308Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"easystack\"} != \"memory\" != \"CPU\" != \"NIC\" | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=range length=15m0s step=500ms duration=13.373125ms status=200 limit=1843 returned_lines=0 throughput=4.7MB total_bytes=63kB
level=info ts=2022-01-27T08:33:10.14540277Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"easystack\"} != \"memory\" != \"CPU\" != \"NIC\" | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=instant length=0s step=0s duration=1.890929ms status=200 limit=0 returned_lines=0 throughput=8.7MB total_bytes=16kB
level=info ts=2022-01-27T08:33:10.717554562Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"easystack\"} != \"memory\" != \"CPU\" != \"NIC\" | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=range length=30m0s step=1s duration=10.389943ms status=200 limit=1843 returned_lines=0 throughput=6.2MB total_bytes=64kB
ts=2022-01-27T08:33:10.721594761Z caller=spanlogger.go:87 org_id=fake Ingester.TotalReached=1 Ingester.TotalChunksMatched=2 Ingester.TotalBatches=1 Ingester.TotalLinesSent=50 Ingester.HeadChunkBytes="64 kB" Ingester.HeadChunkLines=206 Ingester.DecompressedBytes="0 B" Ingester.DecompressedLines=0 Ingester.CompressedBytes="0 B" Ingester.TotalDuplicates=0 Store.TotalChunksRef=0 Store.TotalChunksDownloaded=0 Store.ChunksDownloadTime=0s Store.HeadChunkBytes="0 B" Store.HeadChunkLines=0 Store.DecompressedBytes="0 B" Store.DecompressedLines=0 Store.CompressedBytes="0 B" Store.TotalDuplicates=0
ts=2022-01-27T08:33:10.72166911Z caller=spanlogger.go:87 org_id=fake Summary.BytesProcessedPerSecond="4.2 MB" Summary.LinesProcessedPerSecond=13560 Summary.TotalBytesProcessed="64 kB" Summary.TotalLinesProcessed=206 Summary.ExecTime=15.19154ms
level=info ts=2022-01-27T08:33:10.723719295Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"easystack\"} != \"memory\" != \"CPU\" != \"NIC\" | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=range length=30m0s step=1s duration=15.19154ms status=200 limit=1843 returned_lines=0 throughput=4.2MB total_bytes=64kB
level=info ts=2022-01-27T08:33:13.835617886Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"easystack\"} != \"memory\" != \"CPU\" != \"NIC\" | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=range length=15m0s step=500ms duration=6.158469ms status=200 limit=1843 returned_lines=0 throughput=10MB total_bytes=63kB
ts=2022-01-27T08:33:13.839994507Z caller=spanlogger.go:87 org_id=fake Ingester.TotalReached=1 Ingester.TotalChunksMatched=2 Ingester.TotalBatches=1 Ingester.TotalLinesSent=45 Ingester.HeadChunkBytes="63 kB" Ingester.HeadChunkLines=201 Ingester.DecompressedBytes="0 B" Ingester.DecompressedLines=0 Ingester.CompressedBytes="0 B" Ingester.TotalDuplicates=0 Store.TotalChunksRef=0 Store.TotalChunksDownloaded=0 Store.ChunksDownloadTime=0s Store.HeadChunkBytes="0 B" Store.HeadChunkLines=0 Store.DecompressedBytes="0 B" Store.DecompressedLines=0 Store.CompressedBytes="0 B" Store.TotalDuplicates=0
ts=2022-01-27T08:33:13.840062785Z caller=spanlogger.go:87 org_id=fake Summary.BytesProcessedPerSecond="5.5 MB" Summary.LinesProcessedPerSecond=17732 Summary.TotalBytesProcessed="63 kB" Summary.TotalLinesProcessed=201 Summary.ExecTime=11.33539ms
level=info ts=2022-01-27T08:33:13.842397952Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"easystack\"} != \"memory\" != \"CPU\" != \"NIC\" | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=range length=15m0s step=500ms duration=11.33539ms status=200 limit=1843 returned_lines=0 throughput=5.5MB total_bytes=63kB
level=info ts=2022-01-27T08:33:42.487106283Z caller=table_manager.go:171 msg="uploading tables"
level=info ts=2022-01-27T08:33:46.979608313Z caller=metrics.go:92 org_id=fake latency=fast query="(sum by(clustername,message)(count_over_time({job=\"smartx\"} | json | severity!=\"info\"[5m])) > 0)" query_type=metric range_type=instant length=0s step=0s duration=1.53635ms status=200 limit=0 returned_lines=0 throughput=0B total_bytes=0B    
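
The log above only shows query evaluation. Whether the ruler ever tried to notify Alertmanager can usually be seen in the notification counters on Loki's /metrics endpoint; the exact metric names vary between versions, so a broad grep is used here:

    # Look for Alertmanager notification counters exposed by the ruler
    curl -s http://localhost:3100/metrics | grep -i notif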

Solved. The problem was a time difference between Promtail, Loki, and Alertmanager.

Could you please tell me how to solve it? It seems that we cannot set the time zone for the three pieces of software.

Hi, did anyone solve this?

Yeah, but we can set the time zone of the system that runs the software. By the way, which system do you run Loki on?

I solved it by setting the correct system time on each host running the software components.
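
For reference, on a systemd-based Linux host the clock and NTP state can be checked and corrected roughly like this (chrony is assumed as the NTP client; adapt to your distribution):

    # Show current time, time zone, and whether NTP sync is active
    timedatectl status

    # Turn on NTP synchronization
    timedatectl set-ntp true

    # With chrony installed: step the clock now and list the time sources
    chronyc makestep
    chronyc sources -v

Since alert timestamps are compared across Promtail, Loki, and Alertmanager, all the hosts involved need to agree on the time.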
