Alloy config PVC

Alloy targets are not updating when the pod UID changes. Since logs are read from a PVC path mounted on the node, the changed UID causes Alloy to miss the new pod's logs. As a result, Alloy is not scraping logs from the new pod. How can this be fixed?

With the local.file_match component (see the Grafana Alloy documentation), Alloy will keep re-checking the filesystem for new files that match the configured globs. The examples in the docs can help too.
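
For example, a minimal sketch with a placeholder path and component label (local.file_match re-evaluates its globs every sync_period, 10s by default, so files that appear later are picked up without a restart):

local.file_match "app_logs" {
  // __path__ accepts globs; they are re-evaluated every sync_period,
  // so new files that match are discovered without restarting Alloy.
  path_targets = [{"__path__" = "/var/log/app/*.log"}]
  sync_period  = "10s"
}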

I tried that, but the issue still persists. Alloy initially tails logs correctly because the pod UID and its paths exist at startup.

When the pod restarts:
The pod gets a new UID
The kubelet directory path changes (/var/lib/kubelet/pods/<NEW_UID>/volumes/…/logs/…)
Alloy does not update its targets; it keeps looking for the old pod UID paths (see the hypothetical sketch below)
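
As a hypothetical illustration of the kind of static path that goes stale (the path, UID placeholder, and component label are examples, not my actual config):

local.file_match "pvc_logs" {
  // The old pod UID is baked into the glob, so even though the globs are
  // re-evaluated, the new /var/lib/kubelet/pods/<NEW_UID>/... directory never matches.
  path_targets = [{"__path__" = "/var/lib/kubelet/pods/<OLD_UID>/volumes/*/logs/*.log"}]
}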

Any suggestions?

Here is an example of how to construct the path using discovery.kubernetes, which keeps the pod UIDs in sync. The targets are rebuilt from pod metadata on every discovery refresh, so a restarted pod's new path is picked up automatically.

// Discover pods from the Kubernetes API; targets refresh automatically,
// so a restarted pod shows up with its new UID.
discovery.kubernetes "k8s" {
  role = "pod"
}

discovery.relabel "k8s" {
  targets = discovery.kubernetes.k8s.targets

  // Build a "job" label of the form <namespace>/<name label>.
  rule {
    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_label_name"]
    target_label  = "job"
    separator     = "/"
  }

  // Build the log path from the pod UID and container name, so the target
  // always points at the current pod's directory under /var/log/pods.
  rule {
    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
    target_label  = "__path__"
    separator     = "/"
    replacement   = "/var/log/pods/*$1/*.log"
  }
}

// Requires Alloy to run as a DaemonSet (or otherwise have the node's /var/log
// mounted into the container) so the log files are visible.
local.file_match "pods" {
  path_targets = discovery.relabel.k8s.output
}

loki.source.file "pods" {
  targets    = local.file_match.pods.targets
  forward_to = [loki.write.endpoint.receiver]
}

loki.write "endpoint" {
  endpoint {
    url = "<LOKI_URL>"
    basic_auth {
      username = "<USERNAME>"
      password = "<PASSWORD>"
    }
  }
}
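
If you only want logs from specific pods rather than every pod on the node, a keep rule can be added to the discovery.relabel block above (the namespace value here is a placeholder):

rule {
  // Keep only targets in the matching namespace; everything else is dropped.
  source_labels = ["__meta_kubernetes_namespace"]
  regex         = "my-namespace"
  action        = "keep"
}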

I also recommend the k8s-monitoring Helm chart, which makes it much easier to deploy a working pipeline.