Restarting a pod causes Promtail to lose the target

Hello,

We have a weird issue with Promtail (2.1.0) on Kubernetes 1.18 when a pod is restarted. When Promtail starts, all targets are “True”. In the attachment you can see that one target becomes “False”: it is a pod I explicitly restarted. If I exec into the Promtail pod, I can see that the pod UID in /var/log/pods doesn’t match the one that is supposed to be up (you can compare it with the screenshot).

root@promtail-ql879:/var/log/pods# ls
...
*************_engine-5b89b9bc56-mzkhv_e0cae657-d0c5-4415-830e-ab4de445ca20
...
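For reference, this is roughly how the mismatch can be checked: compare the UID the API server currently reports for the restarted pod with the directory Promtail sees on disk. The pod name is the one from the listing above; <namespace> stands in for our redacted namespace:

kubectl -n <namespace> get pod engine-5b89b9bc56-mzkhv -o jsonpath='{.metadata.uid}'
# prints the UID of the current pod instance

ls /var/log/pods/ | grep engine
# the UID suffix of this directory should match the UID printed above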

Here is the configuration we use:

client:
  backoff_config:
    max_period: 5s
    max_retries: 0
    min_period: 100ms
  batchsize: 102400
  batchwait: 1s
  external_labels: {}
  tenant_id: *******
  timeout: 10s
positions:
  filename: /run/promtail/positions.yaml
server:
  http_listen_port: 3101
  log_level: info
target_config:
  sync_period: 10s
scrape_configs:
- job_name: default
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - action: replace
    source_labels:
    - __meta_kubernetes_namespace
    target_label: namespace
  - action: keep
    regex: (.*)
    source_labels:
    - __meta_kubernetes_pod_container_name
  - action: replace
    regex: (.*)
    replacement: $1
    source_labels:
    - __meta_kubernetes_pod_container_name
    target_label: container
  - replacement: /var/log/pods/*$1/*.log
    separator: /
    source_labels:
    - __meta_kubernetes_pod_uid
    - __meta_kubernetes_pod_container_name
    target_label: __path__
  - action: replace
    source_labels:
    - __meta_kubernetes_pod_node_name
    target_label: node
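For what it's worth, the __path__ rule joins __meta_kubernetes_pod_uid and __meta_kubernetes_pod_container_name with the / separator and substitutes the whole match into the replacement. Assuming the container is named engine (the container name is an assumption on my side, based on the deployment name), the pod from the listing above should be tailed via a glob like this:

ls /var/log/pods/*e0cae657-d0c5-4415-830e-ab4de445ca20/engine/*.log
# this glob should match the log directory of the pod shown in the listing above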

Thanks for your help!
