OTel Collector with OTLP exporter to Tempo issue

Hi,

I have Tempo running on Kubernetes, deployed using the Helm chart (see "Get started with Grafana Tempo using the Helm chart" in the Grafana Labs Helm charts documentation).

I noticed the distributor is listening on 3100 and 9095
level=info ts=2024-05-02T14:04:22.25847056Z caller=server.go:238 msg="server listening on addresses" http=[::]:3100 grpc=[::]:9095

I configured the Tempo exporter like this for HTTP:

exporters:
  otlphttp/tempo:
    endpoint: http://tempo-distributor:3100
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/apm, otlphttp/dynatrace, otlphttp/tempo]

But I got the following error:
Permanent error: rpc error: code = Unimplemented desc = error exporting items, request to http://tempo-distributor:3100/v1/traces responded with HTTP Status Code 404", "dropped_items": 4}

And for gRPC I have the following configuration:

exporters:
  otlp/tempo:
    endpoint: tempo-distributor:9095
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/apm, otlphttp/dynatrace, otlp/tempo]

But I'm getting the following error:

Permanent error: rpc error: code = Unimplemented desc = unknown service opentelemetry.proto.collector.trace.v1.TraceService", "dropped_items": 2}

I don't think the problem is before or inside the collector, because the other two trace exporters are working properly.

The pods are all running:
tempo-compactor-77587cf9b5-lf5b2 1/1 Running 0 150m
tempo-distributor-658b6499f7-ph84s 1/1 Running 0 150m
tempo-ingester-0 1/1 Running 0 150m
tempo-ingester-1 1/1 Running 0 150m
tempo-ingester-2 1/1 Running 0 150m
tempo-memcached-0 1/1 Running 0 150m
tempo-querier-78885dd74f-76r2m 1/1 Running 0 150m
tempo-query-frontend-7994df4bf-cmw24 1/1 Running 0 150m

Services:

tempo-compactor ClusterIP 10.233.54.245 3100/TCP 137m
tempo-distributor ClusterIP 10.233.30.82 3100/TCP,9095/TCP 137m
tempo-distributor-discovery ClusterIP None 3100/TCP 137m
tempo-gossip-ring ClusterIP None 7946/TCP 137m
tempo-ingester ClusterIP 10.233.11.79 3100/TCP,9095/TCP 137m
tempo-ingester-discovery ClusterIP None 3100/TCP,9095/TCP 137m
tempo-memcached ClusterIP 10.233.33.134 11211/TCP,9150/TCP 137m
tempo-querier ClusterIP 10.233.28.188 3100/TCP,9095/TCP 137m
tempo-query-frontend ClusterIP 10.233.18.32 3100/TCP,9095/TCP 137m
tempo-query-frontend-discovery ClusterIP None 3100/TCP,9095/TCP,9096/TCP 137m

Does anyone know what I can do to fix this issue?

Check whether you have enabled the OTLP receiver on Tempo. Ports 3100 and 9095 are the distributor's internal HTTP and gRPC server ports, not OTLP endpoints, which is why you get a 404 on /v1/traces over HTTP and "unknown service opentelemetry.proto.collector.trace.v1.TraceService" over gRPC. If the receiver isn't enabled, enable it in your Helm values.
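As a rough sketch, assuming the tempo-distributed chart (the traces.otlp.* keys and the 4317/4318 ports below are what I'd expect from that chart's defaults; verify against the values reference for your chart version), the Helm values and the collector exporters could look like this, with the exporters pointed at the distributor's OTLP ports instead of 3100/9095:

# Tempo Helm values (assumed keys for the tempo-distributed chart)
traces:
  otlp:
    grpc:
      enabled: true    # typically exposes 4317 on the distributor service
    http:
      enabled: true    # typically exposes 4318 on the distributor service

# OTel Collector exporters, pointed at the OTLP ports
exporters:
  otlp/tempo:
    endpoint: tempo-distributor:4317
    tls:
      insecure: true
  otlphttp/tempo:
    endpoint: http://tempo-distributor:4318
    tls:
      insecure: true

Once the receivers are enabled, kubectl get svc tempo-distributor should list the OTLP port(s) alongside 3100/TCP and 9095/TCP.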


Can I send traces straight from the OpenTelemetry SDK to the tempo-distributed ingester port, or do I need an OpenTelemetry Collector in between?

Yes, but that's not good practice. You may see increased latency and other problems at the application level; usually you offload those concerns (batching, retrying, …) to the OTel Collector.
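As a sketch of what that offloading looks like (the endpoint and tuning values below are placeholders, not your exact setup), the collector can own batching, retries, and queueing so the application SDK just exports to a nearby collector and moves on:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    send_batch_size: 1024    # group spans before export
    timeout: 5s

exporters:
  otlp/tempo:
    endpoint: tempo-distributor:4317   # placeholder backend endpoint
    tls:
      insecure: true
    retry_on_failure:        # retries happen here, not in the app
      enabled: true
      initial_interval: 5s
      max_elapsed_time: 300s
    sending_queue:           # buffers spans if the backend is briefly unavailable
      enabled: true
      queue_size: 5000

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]

With this in place the app only talks to the local collector endpoint, and the collector absorbs backend slowness or outages.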

OK, but the OTel Collector is still a push model rather than scraping from my service, so I think the concern at the app level still holds.
It adds another hop and another point of failure.
