Promtail dropping target labels: Dropped: no path for target

So I am trying to add the Kubernetes node name to my labels. On the Promtail Service Discovery page I can see that all the Kubernetes meta labels are being discovered, but the targets are being dropped with the error - Dropped: no path for target

On the Targets page, I can see 0/0 targets ready. My issue seems to match this previously unsolved question exactly - Can't add dynamic labels to my static_configs in promtail?

Here are the screenshots of Service Discovery and Targets, and my Promtail extraScrapeConfigs.

My custom job is kube_sd_test; here is the extraScrapeConfigs section -

extraScrapeConfigs:
  - job_name: kube_sd_test
    kubernetes_sd_configs:
      - role: pod

    relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_pod_(.+)

      - source_labels:
          - __meta_kubernetes_pod_node_name
        target_label: node_name

      - action: replace
        source_labels:
          - __meta_kubernetes_pod_name
        target_label: pod

Infrastructure -

k3d for Kubernetes
grafana/loki-stack for Promtail, Loki and Grafana

Any help would be appreciated! Thanks in advance!

I made a few changes to the extraScrapeConfigs and tried to relabel the __address__ and instance labels -

- job_name: kube_test
  kubernetes_sd_configs:
    - role: node

  relabel_configs:
    - action: replace
      source_labels:
        - __meta_kubernetes_node_name
      target_label: __host__

    - source_labels:
        - __address__
      target_label: node_IP

    - source_labels:
        - instance
      target_label: node_name
  

Still the same issue -

Here are the log snippets -


level=info ts=2022-03-07T14:20:10.0915505Z caller=filetargetmanager.go:241 msg="no path for target" labels="{__address__=\"172.19.0.2:10250\", __host__=\"k3d-myagent-0\", __meta_kubernetes_node_address_Hostname=\"k3d-myagent-0\", __meta_kubernetes_node_address_InternalIP=\"172.19.0.2\", __meta_kubernetes_node_annotation_flannel_alpha_coreos_com_backend_data=\"{\\\"VNI\\\":1,\\\"VtepMAC\\\":\\\"02:b9:e7:49:9e:89\\\"}\", __meta_kubernetes_node_annotation_flannel_alpha_coreos_com_backend_type=\"vxlan\", __meta_kubernetes_node_annotation_flannel_alpha_coreos_com_kube_subnet_manager=\"true\", __meta_kubernetes_node_annotation_flannel_alpha_coreos_com_public_ip=\"172.19.0.2\", __meta_kubernetes_node_annotation_k3s_io_hostname=\"k3d-myagent-0\", __meta_kubernetes_node_annotation_k3s_io_internal_ip=\"172.19.0.2\", __meta_kubernetes_node_annotation_k3s_io_node_args=\"[\\\"agent\\\"]\", __meta_kubernetes_node_annotation_k3s_io_node_config_hash=\"GNY45P4EZT4AMDLCADCGJR3BA5KIFTXTP7YACNXMTZAVYI2VMO7A====\", __meta_kubernetes_node_annotation_k3s_io_node_env=\"{\\\"K3S_KUBECONFIG_OUTPUT\\\":\\\"/output/kubeconfig.yaml\\\",\\\"K3S_TOKEN\\\":\\\"********\\\",\\\"K3S_URL\\\":\\\"https://k3d-k3s-default-server-0:6443\\\"}\", __meta_kubernetes_node_annotation_node_alpha_kubernetes_io_ttl=\"0\", __meta_kubernetes_node_annotation_volumes_kubernetes_io_controller_managed_attach_detach=\"true\", __meta_kubernetes_node_annotationpresent_flannel_alpha_coreos_com_backend_data=\"true\", __meta_kubernetes_node_annotationpresent_flannel_alpha_coreos_com_backend_type=\"true\", __meta_kubernetes_node_annotationpresent_flannel_alpha_coreos_com_kube_subnet_manager=\"true\", __meta_kubernetes_node_annotationpresent_flannel_alpha_coreos_com_public_ip=\"true\", __meta_kubernetes_node_annotationpresent_k3s_io_hostname=\"true\", __meta_kubernetes_node_annotationpresent_k3s_io_internal_ip=\"true\", __meta_kubernetes_node_annotationpresent_k3s_io_node_args=\"true\", __meta_kubernetes_node_annotationpresent_k3s_io_node_config_hash=\"true\", __meta_kubernetes_node_annotationpresent_k3s_io_node_env=\"true\", __meta_kubernetes_node_annotationpresent_node_alpha_kubernetes_io_ttl=\"true\", __meta_kubernetes_node_annotationpresent_volumes_kubernetes_io_controller_managed_attach_detach=\"true\", __meta_kubernetes_node_label_beta_kubernetes_io_arch=\"amd64\", __meta_kubernetes_node_label_beta_kubernetes_io_instance_type=\"k3s\", __meta_kubernetes_node_label_beta_kubernetes_io_os=\"linux\", __meta_kubernetes_node_label_kubernetes_io_arch=\"amd64\", __meta_kubernetes_node_label_kubernetes_io_hostname=\"k3d-myagent-0\", __meta_kubernetes_node_label_kubernetes_io_os=\"linux\", __meta_kubernetes_node_label_node_kubernetes_io_instance_type=\"k3s\", __meta_kubernetes_node_labelpresent_beta_kubernetes_io_arch=\"true\", __meta_kubernetes_node_labelpresent_beta_kubernetes_io_instance_type=\"true\", __meta_kubernetes_node_labelpresent_beta_kubernetes_io_os=\"true\", __meta_kubernetes_node_labelpresent_kubernetes_io_arch=\"true\", __meta_kubernetes_node_labelpresent_kubernetes_io_hostname=\"true\", __meta_kubernetes_node_labelpresent_kubernetes_io_os=\"true\", __meta_kubernetes_node_labelpresent_node_kubernetes_io_instance_type=\"true\", __meta_kubernetes_node_name=\"k3d-myagent-0\", instance=\"k3d-myagent-0\", node_name=\"k3d-myagent-0\"}"

Promtail is dropping the other host because its HOSTNAME does not match, so we can rule that issue out.

Your target needs to have a __path__ label. This tells Promtail which log files to watch. I'd suggest adding a relabel_configs entry that creates the __path__ label from some of the __meta_kubernetes_* labels.
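
For the pod role, one common way to build __path__ is from the pod UID and container name, since the kubelet writes container logs under /var/log/pods. A minimal sketch, patterned after the default Promtail Kubernetes scrape configs (adjust the path if your nodes lay out logs differently):

```yaml
relabel_configs:
  # ...keep your existing labelmap/replace rules, then add:
  - action: replace
    # join the two source labels with "/" so $1 becomes "<uid>/<container>"
    separator: /
    source_labels:
      - __meta_kubernetes_pod_uid
      - __meta_kubernetes_pod_container_name
    # kubelet log dirs look like /var/log/pods/<ns>_<pod>_<uid>/<container>/
    replacement: /var/log/pods/*$1/*.log
    target_label: __path__
```

With a __path__ label set, the targets should show up as ready instead of being dropped.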