Loki/Promtail only pulling logs from loki-canary

Hello everyone, I am new to Grafana and am currently trying to set up a PLG (Promtail, Loki, Grafana) stack. Whenever I start everything up and go to Explore with the Loki data source, all I see are the logs from loki-canary. For example, when I try tailing the logs of other pods, the only option I can select is loki-canary.
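
To confirm this isn't just an Explore/UI issue, one thing I can do is ask Loki directly which pod label values it has ingested, via the label-values API through the gateway. This is just a sketch; it assumes the chart's default loki-gateway service listening on port 80 in the default namespace, which matches the Promtail client URL further down:

kubectl port-forward svc/loki-gateway 3100:80
curl -s http://localhost:3100/loki/api/v1/label/pod/values

If ingestion really is limited to the canary, this only returns the loki-canary pods, which lines up with what Explore shows.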


I am currently running this stack on an RKE2 cluster, and I am using the Grafana Helm chart to launch Grafana. I am also running the Loki Helm chart in monolithic (single binary) mode with the following values:

deploymentMode: SingleBinary
loki:
  auth_enabled: false
  commonConfig:
    replication_factor: 1
  storage:
    type: 'filesystem'
  schemaConfig:
    configs:
    - from: "2024-01-01"
      store: tsdb
      index:
        prefix: loki_index_
        period: 24h
      object_store: filesystem # we're storing on filesystem so there's no real persistence here.
      schema: v13
  image:
    pullPolicy: Always
singleBinary:
  replicas: 1
read:
  replicas: 0
backend:
  replicas: 0
write:
  replicas: 0

global:
  dnsService: "rke2-coredns-rke2-coredns"

For Promtail, I am using a DaemonSet set up like the following:

--- # Daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail-daemonset
spec:
  selector:
    matchLabels:
      name: promtail
  template:
    metadata:
      labels:
        name: promtail
    spec:
      serviceAccountName: promtail-serviceaccount
      containers:
      - name: promtail-container
        image: grafana/promtail
        args:
        - -config.file=/etc/promtail/promtail.yaml
        env: 
        - name: 'HOSTNAME' # needed when using kubernetes_sd_configs
          valueFrom:
            fieldRef:
              fieldPath: 'spec.nodeName'
        volumeMounts:
        - name: logs
          mountPath: /var/log
        - name: promtail-config
          mountPath: /etc/promtail
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      volumes:
      - name: logs
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: promtail-config
        configMap:
          name: promtail-config
--- # configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0

    clients:
    - url: http://loki-gateway.default.svc.cluster.local/loki/api/v1/push

    positions:
      filename: /tmp/positions.yaml
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: pod-logs
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - docker: {}
      relabel_configs:
        - source_labels:
            - __meta_kubernetes_pod_node_name
          target_label: __host__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - action: replace
          replacement: $1
          separator: /
          source_labels:
            - __meta_kubernetes_namespace
            - __meta_kubernetes_pod_name
          target_label: job
        - action: replace
          source_labels:
            - __meta_kubernetes_namespace
          target_label: namespace
        - action: replace
          source_labels:
            - __meta_kubernetes_pod_name
          target_label: pod
        - action: replace
          source_labels:
            - __meta_kubernetes_pod_container_name
          target_label: container
        - replacement: /var/log/pods/*$1/*.log
          separator: /
          source_labels:
            - __meta_kubernetes_pod_uid
            - __meta_kubernetes_pod_container_name
          target_label: __path__

--- # Clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail-clusterrole
rules:
  - apiGroups: [""]
    resources:
    - nodes
    - services
    - pods
    verbs:
    - get
    - watch
    - list

--- # ServiceAccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail-serviceaccount

--- # ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail-clusterrolebinding
subjects:
    - kind: ServiceAccount
      name: promtail-serviceaccount
      namespace: default
roleRef:
    kind: ClusterRole
    name: promtail-clusterrole
    apiGroup: rbac.authorization.k8s.io
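
I apply and sanity-check the Promtail pieces roughly like this (the manifest file name is an assumption; the label selector and DaemonSet name match the manifests above):

kubectl apply -f promtail.yaml

# one Promtail pod should be scheduled per node
kubectl get pods -l name=promtail -o wide

# look for push errors, 4xx/5xx responses, or permission-denied messages
kubectl logs -l name=promtail --tail=100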

To test if logs are getting picked up, I wrote a simple Python program that looks like this:

import logging
import sys
from time import sleep

# Configure the root logger: without this, logging.info()/logging.debug()
# are dropped because the default level is WARNING.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

while True:
    # flush so the lines are not held in Python's stdout buffer inside the container
    print("Normal Print Statement", flush=True)
    print("stderr statement", file=sys.stderr)
    print("stdout statement", file=sys.stdout, flush=True)
    logging.info("This is an info message")
    logging.warning("This is a warning message")
    logging.debug("debugging message")
    logging.error("ERROR! ERROR! ERROR! ERROR!")
    sleep(10)
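
I run this in the cluster roughly like this (the ConfigMap and pod names are arbitrary, and python -u keeps the output unbuffered so the lines show up in the container log right away):

kubectl create configmap log-tester --from-file=main.py
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: log-tester
spec:
  containers:
  - name: log-tester
    image: python:3.12-slim
    command: ["python", "-u", "/scripts/main.py"]
    volumeMounts:
    - name: script
      mountPath: /scripts
  volumes:
  - name: script
    configMap:
      name: log-tester
EOF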

I heard that Promtail uses the same service discovery mechanism as Prometheus, so I tried installing Prometheus, and it was able to discover all of the pods.
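
Another thing I can look at is what Promtail itself thinks it discovered, since it serves the same style of targets page as Prometheus on its HTTP port (9080 in the config above). A quick sketch using a port-forward to one of the Promtail pods:

kubectl port-forward $(kubectl get pod -l name=promtail -o name | head -n 1) 9080:9080
# then, in another terminal or a browser
curl -s http://localhost:9080/targets

On a healthy node this page should list the /var/log/pods/... files Promtail is tailing, not just the canary.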

If you are already mounting the Docker containers directory inside your Promtail container, you could simply read from it with a static_config like so:

- job_name: containers
  static_configs:
  - targets:
      - localhost
    labels:
      job: containerlogs
      __path__: /var/lib/docker/containers/**/*log

  pipeline_stages:
    ...

A quick update on this issue: I tried building this stack on a separate VM using the same method, and it worked without any problems, so the problem might not necessarily be related to the configuration. For context, I am running these clusters on Rocky Linux 8.10. I don’t know why it worked on that VM as opposed to the other one. I should also add that I have the Multus CNI plugin installed on the cluster that doesn’t work, and Longhorn installed on both of them.

This problem turned out to be related to SELinux. If you run the following command:

getenforce

and it reports Enforcing, then you need to change it to Permissive. This can be done by editing the file /etc/selinux/config and then rebooting the system.
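
For anyone else hitting this, the change looks roughly like this (setenforce only lasts until the next boot, which is why the config file also needs editing):

sudo setenforce 0                  # switch to Permissive immediately, until reboot

# make it permanent, then reboot
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo reboot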