Grafana Agent - prom_sd_pod_association

Hello,

I keep receiving these messages in the Grafana Agent logs, but I don't know what's missing for it to retrieve the information:

component="traces service disco" msg="unable to find ip in span attributes, skipping attribute addition"

I tried adding the following environment variables to the Grafana Agent Deployment. Is this the right place for them?

          # Get the pod IP so that k8sattributes can tag resources
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          # This is picked up by the resource detector
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: "k8s.pod.ip=$(POD_IP)"

So it appears you are trying to enrich your spans with k8s data using the prom service discovery config?

As each span passes through the agent, it attempts to find the IP of the process that created the span. That error message simply means it failed to find an IP for the given span. Lookup code here:

The easiest way to make this association work is to ensure that the span's resource has one of the following attributes:

“Resource” attributes are the same as “Process” attributes. You would see them here in the UI:
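For context, here is a minimal sketch of a Grafana Agent traces config with Kubernetes service discovery enabled, which is what drives this span-to-pod association. The receiver, endpoint, and job names are placeholders, not the poster's actual config; check your agent version's docs for the exact `prom_sd_*` options (the feature this thread is about is `prom_sd_pod_association`):

```yaml
traces:
  configs:
    - name: default
      receivers:
        otlp:
          protocols:
            grpc:
      remote_write:
        - endpoint: tempo:4317   # placeholder backend
          insecure: true
      # Pods discovered here are matched against span IPs,
      # and their labels are attached to matching spans.
      scrape_configs:
        - job_name: kubernetes-pods
          kubernetes_sd_configs:
            - role: pod
```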

Hello @joeelliott,

Thank you for your advice, it helped a lot in understanding the concept.

We managed to add a k8s.pod.ip attribute, but we still don't see any labels applied.

I can see in the Grafana Agent logs that this pod is correctly discovered by the prom_sd_processor:

ts=2022-02-25T15:43:06.289753099Z caller=prom_sd_processor.go:211 level=debug component="traces service disco" processedLabels="{__address__=\"10.72.12.91:80\", __meta_kubernetes_namespace=\"dsc\", __meta_kubernetes_pod_annotation_app_kubernetes_io_name=\"gmadd-rt\", __meta_kubernetes_pod_annotationpresent_app_kubernetes_io_name=\"true\", __meta_kubernetes_pod_container_init=\"false\", __meta_kubernetes_pod_container_name=\"gmadd\", __meta_kubernetes_pod_container_port_number=\"80\", __meta_kubernetes_pod_container_port_protocol=\"TCP\", __meta_kubernetes_pod_controller_kind=\"ReplicaSet\", __meta_kubernetes_pod_controller_name=\"gmadd-rt-8579b9bfb6\", __meta_kubernetes_pod_host_ip=\"192.168.8.14\", __meta_kubernetes_pod_ip=\"10.72.12.91\", __meta_kubernetes_pod_label_app_kubernetes_io_name=\"gmadd-rt\", __meta_kubernetes_pod_label_pod_template_hash=\"8579b9bfb6\", __meta_kubernetes_pod_labelpresent_app_kubernetes_io_name=\"true\", __meta_kubernetes_pod_labelpresent_pod_template_hash=\"true\", __meta_kubernetes_pod_name=\"gmadd-rt-8579b9bfb6-wzph6\", __meta_kubernetes_pod_node_name=\"k3s-dsc-sbeu1-1\", __meta_kubernetes_pod_phase=\"Running\", __meta_kubernetes_pod_ready=\"true\", __meta_kubernetes_pod_uid=\"822ca899-78d3-4a21-86cb-2913480cba35\", container=\"gmadd\", namespace=\"dsc\", pod=\"gmadd-rt-8579b9bfb6-wzph6\"}"
ts=2022-02-25T15:43:06.289799126Z caller=prom_sd_processor.go:243 level=debug component="traces service disco" msg="adding host to hostLabels" host=10.72.12.91

We have probably missed something; any ideas?

We switched from k8s.pod.ip to ip, and it works like a charm, thanks!
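For anyone landing here with the same problem, the working setup can be sketched as below. This assumes your agent version matches pods on an `ip` resource attribute rather than `k8s.pod.ip` (as the fix above suggests); the env var names mirror the snippet earlier in the thread:

```yaml
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP   # Downward API: the pod's IP
          # "ip" (not "k8s.pod.ip") is the resource attribute the agent
          # matched against in this version
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: "ip=$(POD_IP)"
```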
