Loki - Alertmanager

Hi Team

It’s good to hear that the new Loki release supports alert configuration through Alertmanager.

I am working with my team to deploy Grafana-Loki-Promtail at Digivalet as a centralized logging system, but we are facing a few challenges. I am not sure whether this is a bug or a mistake on our side.

Here is my scenario: I am running Grafana-Loki on 192.168.126.167 and a Promtail client on 192.168.126.168.
1> The Promtail client is sending my HTTPD logs to Loki.
2> I have installed Alertmanager on 192.168.126.167:9093.
3> I have defined a rule file to trigger an alert whenever the incoming log rate is more than 5 lines per second.
4> When Loki evaluates the rule file, it logs the following:

  Feb 09 06:09:06 centos7.linuxvmimages.local loki[3394]: level=info ts=2021-02-09T11:09:06.883855921Z caller=metrics.go:83 org_id=1 traceID=5a9b9e046985fa05 latency=fast query="sum(count_over_time({filename=\"/var/log/httpd/access_log\"}[1s])) > 5" query_type=metric range_type=instant length=0s step=0s duration=28.679653ms status=200 throughput=0B total_bytes=0B

5> Here the range type is instant, and I believe that when the query type is instant it doesn't return anything.
6> Please help us find a way to change the query type from instant to range (a manual check against the query API is sketched below).
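
For reference, here is a rough sketch of how the same expression can be checked by hand against Loki's query API, comparing the instant endpoint (which is what the ruler log line above reflects) with the range endpoint. This assumes curl is available and uses our tenant header (X-Scope-OrgID: 1); the time window is just an example around the timestamp above:

# Instant query, the same kind of evaluation the ruler performs
curl -G -s "http://192.168.126.167:3100/loki/api/v1/query" \
  -H "X-Scope-OrgID: 1" \
  --data-urlencode 'query=sum(count_over_time({filename="/var/log/httpd/access_log"}[1s])) > 5'

# Range query of the underlying sum, to see the values over a window
curl -G -s "http://192.168.126.167:3100/loki/api/v1/query_range" \
  -H "X-Scope-OrgID: 1" \
  --data-urlencode 'query=sum(count_over_time({filename="/var/log/httpd/access_log"}[1s]))' \
  --data-urlencode 'start=2021-02-09T11:00:00Z' \
  --data-urlencode 'end=2021-02-09T11:10:00Z' \
  --data-urlencode 'step=15'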

Please find below the config files for Promtail, Loki, the rules (rules1.yaml), and Alertmanager.

######################### Promtail.yml #####################################################

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://192.168.126.167:3100/loki/api/v1/push
    tenant_id: 1

scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'

  - job_name: httpd
    entry_parser: raw
    static_configs:
      - targets:
          - localhost
        labels:
          job: httpd
          __path__: /var/log/httpd/*log
    pipeline_stages:
      - match:
          selector: '{job="httpd"}'
          stages:
            - regex:
                expression: '^(?P<remote_addr>[\w.]+) - (?P<remote_user>[^ ]*) \[(?P<time_local>.*)\] "(?P<method>[^ ]*) (?P<request>[^ ]*) (?P<protocol>[^ ]*)" (?P<status>[\d]+) (?P<body_bytes_sent>[\d]+) "(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"?'
            - labels:
                remote_addr:
                remote_user:
                time_local:
                method:
                request:
                protocol:
                status:
                body_bytes_sent:
                http_referer:
                http_user_agent:

######################################################################################################
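
To confirm that the httpd job above is actually shipping entries, Promtail's own metrics endpoint on port 9080 can be checked (a quick sketch with curl; the exact metric names may vary slightly between Promtail versions):

curl -s http://192.168.126.168:9080/metrics | grep promtail_sent_entries_total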

############################### LOKI.YML ###############################################################

auth_enabled: true

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 0

schema_config:
  configs:
    - from: 2018-04-15
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 168h

ruler:
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
  rule_path: /tmp/scratch
  alertmanager_url: http://192.168.126.167:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

storage_config:
  boltdb:
    directory: /tmp/loki/index
  filesystem:
    directory: /tmp/loki/chunks

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

################################################################################################
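
Once Loki is started with this config, a basic liveness check can be run against its readiness endpoint (a sketch, assuming curl on any host that can reach 192.168.126.167):

curl -s http://192.168.126.167:3100/ready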

############################ RULES1.YAML #####################################################

groups:
  - name: rate-alerting
    rules:
      - alert: HighLogRate
        expr: sum(count_over_time({filename="/var/log/httpd/access_log"}[1s])) > 5
        for: 1m
        labels:
          severity: warning
        annotations:
          title: "High LogRate Alert"
          description: "something is logging a lot"

###################################################################################################
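
Since enable_api is set to true in the ruler block of the Loki config, the loaded rule groups can also be listed through the ruler API to confirm this file was picked up (again only a sketch with curl, using our tenant header):

curl -s -H "X-Scope-OrgID: 1" http://192.168.126.167:3100/loki/api/v1/rules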

############################ Alertmanager.yml ########################################################

global:
  resolve_timeout: 1m

route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: Slack-Notifications

receivers:
  - name: 'Slack-Notifications'
    slack_configs:
      - api_url: ''
        channel: '#loki-alert-test'
        send_resolved: true

###############################################################################################
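
Finally, to rule out the Alertmanager-to-Slack leg as the problem, a hand-crafted test alert can be posted directly to Alertmanager's v2 API (a sketch assuming curl; the label values here are arbitrary test data):

curl -s -X POST "http://192.168.126.167:9093/api/v2/alerts" \
  -H "Content-Type: application/json" \
  -d '[{"labels":{"alertname":"HighLogRate","severity":"warning"},"annotations":{"title":"manual test alert"}}]'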