Agent on Docker not collecting journal data

  • What Grafana version and what operating system are you using?

Grafana Agent v0.30.1 on Docker on Arch Linux

  • What are you trying to achieve?

I want to send journald logs to my main Grafana instance.

  • How are you trying to achieve it?

Here is an excerpt of my scrape_configs:

scrape_configs:
  - job_name: journal
    journal:
      # forward only the MESSAGE text, not the whole entry as JSON
      json: false
      # oldest relative entry to read when the journal is first opened
      max_age: 12h
      # journal directory as seen from inside the container
      path: /ext_logs/journal
      labels:
        job: systemd-journal
        host: ${HOSTNAME:-default_value}
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
  - job_name: containers
    static_configs:
      - targets:
          - localhost
        labels:
          job: containerlogs
          host: ${HOSTNAME:-default_value}
          __path__: /var/lib/docker/containers/*/*log
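
For comparison, if I read the journal scrape config docs correctly, path can be omitted entirely, in which case the scraper reads the local system journal from the default locations (/var/log/journal, /run/log/journal); that is presumably why /etc/machine-id is mounted in the compose file below. A minimal sketch of that variant under scrape_configs (untested on my side):

  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
        host: ${HOSTNAME:-default_value}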

And my compose file:

services:
  agent:
    image: grafana/agent:latest
    restart: unless-stopped
    volumes:
      - /etc/machine-id:/etc/machine-id
      - /var/log:/ext_logs:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - ./agent/location:/agent_location
      - ./agent/agent.yml:/etc/agent-config/agent.yml
    environment:
      - HOSTNAME=xxxxx
      - PROM_REMOTE_WRITE_URL=xxxxx
      - LOKI_REMOTE_WRITE_URL=xxxxx
    entrypoint:
      - /bin/agent
      - -config.file=/etc/agent-config/agent.yml
      # expand ${VAR} references (HOSTNAME and the remote write URLs) in agent.yml
      - -config.expand-env
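
To spell out the path mapping: /var/log on the host is mounted at /ext_logs in the container, so the scrape path /ext_logs/journal should correspond to /var/log/journal on the host. An equivalent, more explicit variant of the mount I could switch to (just my own sketch, not something I have tested) would be:

    volumes:
      - /var/log/journal:/ext_logs/journal:ro

with path: /ext_logs/journal left unchanged in the scrape config.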

  • What happened?

My container logs get sent fine, but my journald logs aren’t being sent. If I run the agent natively with the same configuration, they are sent.

  • What did you expect to happen?

All my logs, including journald, to be sent

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.

Absolutely no errors, and journald isn’t even mentioned in the logs.
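
To get more detail I could probably raise the log level; a minimal sketch, assuming the static-mode server block in v0.30 still accepts log_level:

server:
  log_level: debug

I was hoping the journal target (or the reason it was skipped) would then show up in the startup logs.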

Here is an excerpt of the agent’s logs:

ts=2023-01-03T11:32:02.731693805Z caller=server.go:191 level=info msg="server listening on addresses" http=127.0.0.1:12345 grpc=127.0.0.1:12346 http_tls_enabled=false grpc_tls_enabled=false
ts=2023-01-03T11:32:02.732180008Z caller=node.go:85 level=info agent=prometheus component=cluster msg="applying config"
ts=2023-01-03T11:32:02.732268884Z caller=remote.go:180 level=info agent=prometheus component=cluster msg="not watching the KV, none set"
ts=2023-01-03T11:32:02.735016751Z caller=promtail.go:123 level=info component=logs logs_config=vili msg="Reloading configuration file" md5sum=b596666ce87527c514f02359e1d6e921
ts=2023-01-03T11:32:02.737008031Z caller=wal.go:199 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="replaying WAL, this may take a while" dir=/agent_location/wal/32b76ecc3d5563bdd4030539c63aa7da/wal
ts=2023-01-03T11:32:02.740952972Z caller=zapadapter.go:78 level=info component=traces msg="Traces Logger Initialized"
ts=2023-01-03T11:32:02.742832831Z caller=reporter.go:103 level=info msg="running usage stats reporter"
ts=2023-01-03T11:32:02.763333921Z caller=wal.go:222 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL checkpoint loaded"
ts=2023-01-03T11:32:02.773080017Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=98 maxSegment=104
ts=2023-01-03T11:32:02.782598554Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=99 maxSegment=104
ts=2023-01-03T11:32:02.792323732Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=100 maxSegment=104
ts=2023-01-03T11:32:02.796034816Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=101 maxSegment=104
ts=2023-01-03T11:32:02.796813861Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=102 maxSegment=104
ts=2023-01-03T11:32:02.797576323Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=103 maxSegment=104
ts=2023-01-03T11:32:02.797632163Z caller=wal.go:246 level=info agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da msg="WAL segment loaded" segment=104 maxSegment=104
ts=2023-01-03T11:32:02.798017559Z caller=dedupe.go:112 agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da component=remote level=info remote_name=32b76e-d15b16 url=https://prometheus.tfdn.app/api/v1/write msg="Starting WAL watcher" queue=32b76e-d15b16
ts=2023-01-03T11:32:02.798028966Z caller=dedupe.go:112 agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da component=remote level=info remote_name=32b76e-d15b16 url=https://xxxxx/api/v1/write msg="Starting scraped metadata watcher"
ts=2023-01-03T11:32:02.798103937Z caller=dedupe.go:112 agent=prometheus instance=32b76ecc3d5563bdd4030539c63aa7da component=remote level=info remote_name=32b76e-d15b16 url=https://xxxxx/api/v1/write msg="Replaying WAL" queue=32b76e-d15b16
ts=2023-01-03T11:32:07.741231816Z caller=filetargetmanager.go:352 level=info component=logs logs_config=xxxxx msg="Adding target" key="/var/lib/docker/containers/*/*log:{host=\"xxxxx\", job=\"containerlogs\"}"
ts=2023-01-03T11:32:07.744000948Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/a9cbcda55a2d497d0f56133e33027b39f908efd7e5ccfcf175657a1930698530
ts=2023-01-03T11:32:07.744057388Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/2c814386c519e021962d930cef011009883aef5a31f552fd9adeeeb94f291cd5
ts=2023-01-03T11:32:07.744131775Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/206d5e169aa260f349f8d8fc0c9417bd934f38aaa25f8456b54c7a1faf6b75bc
ts=2023-01-03T11:32:07.744174355Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/ebc2e9c9bf57e81aa1fefb999d617fd7ac9573bf94122320a647ad1f3bee9029
ts=2023-01-03T11:32:07.744364181Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/5a90e667aa6a635d5bb437c8cae041c7bb2b92f2f7b06b4b80496fee93fab1e0
ts=2023-01-03T11:32:07.744441506Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/729ddf9f093e02814bb3a6dd6e48deb92ca6112ded233d74e23e552a531c762e
ts=2023-01-03T11:32:07.744516511Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/6439e2d97567f68435c865205354804366edaf407181cd3b34546d81db9e469e
ts=2023-01-03T11:32:07.744602499Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/9c0e9712c033b6636b53ebb1e41fc6626e774ee394429dbc56947f500b814cb8
ts=2023-01-03T11:32:07.744715164Z caller=filetarget.go:282 level=info component=logs logs_config=xxxxx msg="watching new directory" directory=/var/lib/docker/containers/1b9087bf60f785d8f20728664a70cd9f733ee5d4ee0856e5c0ad8d8bf069dd4e

I’m using the exact same config on two Debian hosts without issues.
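
The only distro-level difference I can think of is where journald stores its files: persistent under /var/log/journal or volatile under /run/log/journal, depending on Storage= in journald.conf. If the Arch host happened to be using volatile storage, /ext_logs/journal would be empty inside the container, and a mount along these lines would be needed instead (pure speculation on my part, not tested):

    volumes:
      - /run/log/journal:/ext_logs/journal:ro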

This issue has also been posted on Reddit here: https://www.reddit.com/r/grafana/comments/1027eje/agent_on_docker_not_collection_journal_datas/

Thanks for your help!