Promtail scrape config for K8s

Hi, I am trying to gather logs within my K8s cluster using Grafana + Promtail + Loki.
By default Promtail gathers logs from all namespaces, but I want to restrict it to, for example, two specific namespaces.
Can someone explain how to rewrite the scrape config file? I have tried to include and exclude namespaces using ‘keep’ and ‘drop’:

    # Attempt 1: 'keep' only the matching namespace
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          action: keep
          regex: kube-system
        
    # [...]
    
      - job_name: kubernetes-pods-app
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          action: keep
          regex: kube-system
    # Attempt 2: 'drop' the matching namespace
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          action: drop
          regex: kube-system
        
    # [...]
    
      - job_name: kubernetes-pods-app
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          action: drop
          regex: kube-system

Let’s start by verifying that the label you’re filtering on has the expected value.

Hit http://<promtail-host>:<promtail-port>/targets and find this target.
Hover over the Labels column for one of the entries:


Does your __meta_kubernetes_namespace label exist and have the kube-system value?
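If it does, note that relabel regexes are fully anchored, so a single `keep` rule with an alternation regex is enough to restrict Promtail to exactly two namespaces. A sketch, assuming hypothetical namespace names `app-blue` and `app-green`:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # The regex is anchored, so this keeps only targets whose
      # namespace is exactly app-blue or app-green.
      - source_labels: [__meta_kubernetes_namespace]
        action: keep
        regex: app-blue|app-green
```

A `drop` rule with the same alternation would do the opposite: discard those two namespaces and scrape everything else.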


I don’t have a service that can expose the Promtail endpoint. I used the loki-stack Helm chart without Prometheus.
(screenshot)

Also, I have one more question. With the default Promtail config, Promtail doesn’t gather logs from all namespaces and pods. Each time it shows a different number of pods and namespaces. Could it be due to a lack of resources?
For example:
I have 14 pods in the ‘*-app-blue’ namespace, but on the Grafana dashboard I can view logs for only 9 of them.

System and cluster info:
(screenshot)

You can create a port-forward to the promtail pod, and hit the API that way.
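Something like this (a sketch: the namespace and pod name are placeholders for your deployment, and 3101 is the HTTP port the loki-stack chart typically configures for promtail):

```shell
# Find a promtail pod (names depend on your Helm release)
kubectl get pods -n <namespace> -l app=promtail

# Forward promtail's HTTP port to your machine
kubectl port-forward -n <namespace> pod/<promtail-pod-name> 3101:3101

# Then open the targets page
curl http://localhost:3101/targets
```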

Regarding your other question:

I doubt it; does your promtail have the required RBAC settings to access all pods’ logs on all nodes? Are you running a promtail daemonset?
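For comparison, the ClusterRole the promtail chart ships is roughly the following — a sketch, so verify the real one on your cluster (e.g. `kubectl get clusterrole -o yaml` for your release):

```yaml
# Promtail needs read access to pod metadata for service discovery.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail
rules:
  - apiGroups: [""]
    resources: [nodes, nodes/proxy, services, endpoints, pods]
    verbs: [get, watch, list]
```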

About RBAC: it comes from the loki-stack Helm chart, and I didn’t make any changes:

This is the loki namespace; there is a Promtail daemonset (2 pods, one for each node):

If I delete the loki namespace and deploy again using Helm, the number of pods and namespaces shown in Grafana is different.

I’m not sure. You should use the port-forward to see which targets the two promtail instances have, and work from there.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.