Grafana Loki not scraping Kubernetes logs with default Helm chart

Hello. I have installed Grafana Loki in my K3s Kubernetes cluster using the official Helm chart, but I don't get any logs apart from the self-monitoring ones.

The Loki query interface isn't even detecting any containers other than Loki itself.

Do I need to add any configuration for it to start scraping? I assumed the Helm chart would handle that out of the box.

Here are the values I used to install Grafana Loki (I am using Grafana Agent without the Operator):

    loki:
      commonConfig:
        replication_factor: 1
      storage:
        type: "filesystem"
      compactor:
        retention_enabled: true
      limits_config:
        retention_period: 15d
      auth_enabled: false
      rulerConfig:
        storage:
          type: local
    singleBinary:
      replicas: 1
      persistence:
        enabled: true
        size: 50Gi
    gateway:
      enabled: false
    monitoring:
      selfMonitoring:
        enabled: true
        grafanaAgent:
          installOperator: false
      lokiCanary:
        enabled: false
    resources:
      requests:
        cpu: 10m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 256Mi
    test:
      enabled: false

I couldn't find any relevant logs in the Loki container either.

I'm having the exact same issue; how did you solve this? I don't want to add a PodLogs resource for each deployment.
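
(For reference: if you do go the PodLogs route, it doesn't have to be one resource per deployment. The sketch below is only an illustration of a single cluster-wide PodLogs, and it assumes the Grafana Agent Operator and its CRDs are installed, which the values above disable via installOperator: false. The name and namespace are placeholders.)

    apiVersion: monitoring.grafana.com/v1alpha1
    kind: PodLogs
    metadata:
      name: all-pods        # hypothetical name
      namespace: monitoring # adjust to your setup
    spec:
      pipelineStages:
        - cri: {}           # K3s uses containerd, so parse logs in CRI format
      namespaceSelector:
        any: true           # collect from every namespace
      selector: {}          # an empty selector matches every pod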

I just got back to this thread after searching Google for the same problem again. :slight_smile:

I solved my issue by installing Promtail in my cluster. I thought the Loki Helm chart would do that by default, but it doesn't; it only installs the server part.

You need to install either Promtail or Grafana Agent to ship the logs to Loki.
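
If you go with Promtail, its chart defaults already run a DaemonSet that tails every pod on each node, so the main thing to set is the Loki push endpoint. A minimal values sketch, assuming the Loki release is named loki, runs in the same namespace, and listens on the default port 3100 (since the gateway is disabled above):

    # promtail-values.yaml (sketch; adjust the URL to your Loki service)
    config:
      clients:
        - url: http://loki:3100/loki/api/v1/push

Installing the grafana/promtail chart with these values (for example, helm upgrade --install promtail grafana/promtail -f promtail-values.yaml) should make logs from all pods start appearing in Loki.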
