Endpoint URL to use for OTEL_EXPORTER_OTLP_ENDPOINT in k8s

We are running the Grafana Agent in k8s. Data flows from the agent to our cloud account. We have dotnet applications manually configured with the OpenTelemetry SDK.

We cannot get telemetry data from the dotnet applications to the agent other than with a console exporter. When using the OTLP exporter, nothing gets sent.

The pod has the env var OTEL_EXPORTER_OTLP_ENDPOINT, but it’s unclear what value it should have.

I have tried:

  • grafana-agent-k8s.monitoring.svc.cluster.local
  • grafana-agent-k8s.monitoring.svc.cluster.local/v1/logs

There are several endpoints deployed by the agent Helm chart:

kubectl -n monitoring get ep                       
NAME                                         ENDPOINTS                      AGE
grafana-agent-k8s                            11.3.0.58:80                   28h
grafana-agent-k8s-cluster                    11.3.0.58:80                   28h
grafana-agent-k8s-grafana-agent-logs         11.3.0.17:80,11.3.0.59:80      28h
grafana-agent-k8s-kube-state-metrics         11.3.0.54:8080                 28h
grafana-agent-k8s-prometheus-node-exporter   11.3.0.34:9100,11.3.0.5:9100   28h

I can’t seem to find any clear docs, or even a single example, of what must be a very typical setup. What might I be missing?

All the endpoints are exposed on port 80, e.g.:

k -n monitoring describe ep/grafana-agent-k8s 
Name:         grafana-agent-k8s
Namespace:    monitoring
Labels:       app.kubernetes.io/instance=grafana-agent-k8s
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=grafana-agent
              app.kubernetes.io/version=v0.38.1
              argocd.argoproj.io/instance=grafana-agent-k8s
              helm.sh/chart=grafana-agent-0.29.0
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2024-01-02T18:41:28Z
Subsets:
  Addresses:          11.3.0.58
  NotReadyAddresses:  <none>
  Ports:
    Name          Port  Protocol
    ----          ----  --------
    http-metrics  80    TCP

Your agent must have the OTLP receiver enabled (and you need to expose it, of course - usually port 4317 for gRPC and 4318 for HTTP).
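With the grafana-agent chart (Flow mode) that means adding an otelcol.receiver.otlp block to the agent’s River config. A rough sketch of the relevant part of values.yaml, assuming the chart’s agent.configMap.content key - the "apps" label and the exporter it forwards to are placeholders for whatever pipeline you already have:

agent:
  configMap:
    content: |
      // Accept OTLP from applications over gRPC (4317) and HTTP (4318).
      otelcol.receiver.otlp "apps" {
        grpc {
          endpoint = "0.0.0.0:4317"
        }
        http {
          endpoint = "0.0.0.0:4318"
        }
        output {
          // Forward to whichever exporter components already ship
          // your data to the cloud; "default" here is a placeholder.
          logs = [otelcol.exporter.loki.default.input]
        }
      }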

The receivers are set up with the Helm chart, but I cannot push logs into it.

Using this env value, I get no errors, but I also get no logs:

OTEL_EXPORTER_OTLP_ENDPOINT: "http://grafana-agent-k8s.monitoring.svc.cluster.local:4318"
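For context, that is wired into the application’s Deployment roughly like the snippet below. The container name and image are placeholders, and the OTEL_EXPORTER_OTLP_PROTOCOL line is an extra note: 4318 is the HTTP/protobuf port, while the .NET OTLP exporter defaults to gRPC, so the protocol has to match whichever port is targeted.

containers:
  - name: my-dotnet-app           # placeholder
    image: my-dotnet-app:latest   # placeholder
    env:
      # Where the SDK's OTLP exporter sends data
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://grafana-agent-k8s.monitoring.svc.cluster.local:4318"
      # Must match the port above: http/protobuf for 4318, grpc for 4317
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "http/protobuf"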

How are you deploying the Grafana Agent? Directly with the agent Helm chart, or using the k8s-monitoring Helm chart?

In either case, you have to do two things for the Grafana Agent to accept OTLP data. First, expose the right ports on the Grafana Agent pod; that opens up the pod and configures the Agent’s Service to accept OTLP traffic. Second, configure the agent so it knows what to do with that traffic when it receives it.
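For the first part, with the grafana-agent chart the ports can be added through agent.extraPorts in values.yaml, which opens them on the pod and adds them to the Service. A sketch from memory (double-check the key names against your chart version):

agent:
  extraPorts:
    # OTLP over gRPC
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
      protocol: TCP
    # OTLP over HTTP/protobuf
    - name: otlp-http
      port: 4318
      targetPort: 4318
      protocol: TCP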

If you can share your Helm chart values file, I can try to help figure out what’s going on.

I did two things:

First, I enabled the ‘grpc’ receiver in values.yaml (as @jangaraj pointed out). Before that, the agent was only listening on http-metrics: 80.
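Roughly, the values.yaml change looked like this. (This is a sketch: the receivers.grpc.enabled key is from the k8s-monitoring chart as far as I recall, so verify it against your own chart’s values; on the plain grafana-agent chart the equivalent is the River receiver config shown earlier.)

receivers:
  grpc:
    enabled: true
  http:
    enabled: true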

Then I set the env variables for the dotnet SDK correctly (as shown in the previous post).

At this point it should have been working, but I was not getting any logs shipped to the cloud. I knew the externalServices were set up correctly, because we were already getting data from other sources, but I just wasn’t seeing any logs for the dotnet service.

And that was because my query was wrong!

I was using the query builder and filtering on container == grafana-agent, so I changed that query (via autocomplete) to container == reli-server. But that query was wrong: there was no container value of ‘reli-server’. Changing the query to job == ... (or any other label that actually existed) and, lo and behold, there were all the logs, just as they should be.

So, sorry for the noise, and thanks very much for the help.


Glad things are working!