OpenTelemetry Collector: connection refused to Tempo

I am seeing the following log in the OpenTelemetry Collector:

warn    zapgrpc/zapgrpc.go:195  [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "tempo:4317", ServerName: "tempo:4317", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 192.168.112.2:4317: connect: connection refused"  {"grpc_log": true}
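
otel-collector-config.yml isn't included here, but the trace exporter it uses evidently targets tempo:4317 (the address in the error above). A minimal sketch of such an exporter, where the otlp/tempo name and the insecure TLS setting are assumptions:

exporters:
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true      # plain-text gRPC inside the compose network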

Docker Compose:

version: '3'

services:
  loki:
    image: grafana/loki:main
    command: [ "-config.file=/etc/loki/local-config.yaml" ]
    ports:
      - "3100:3100"

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ${OSI_DOCKER_ROOT}\prometheus\prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'

  tempo:
    image: grafana/tempo:2.2.2
    command: [ "-config.file=/etc/tempo.yml" ]
    volumes:
      - ${OSI_DOCKER_ROOT}\tempo\tempo.yml:/etc/tempo.yml:ro
      - ${OSI_DOCKER_ROOT}\tempo\tempo-data:/tmp/tempo
    ports:
      - "3110:3100"  # Tempo
      - "4317"  # otlp grpc

  grafana:
    image: grafana/grafana:latest
    volumes:
      - ./grafana:/etc/grafana/provisioning/datasources:ro
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    ports:
      - "3000:3000"

  otel-collector:
    image: otel/opentelemetry-collector-contrib
    command:
      - --config=/etc/otelcol-cont/config.yml
    volumes:
      - ${OSI_DOCKER_ROOT}\collector\otel-collector-config.yml:/etc/otelcol-cont/config.yml
    ports:
      - 1888:1888 # pprof extension
      - 8889:8889 # Prometheus metrics exposed by the Collector
      - 8890:8890 # Prometheus exporter metrics
      - 13133:13133 # health_check extension
      - 4317:4317 # OTLP gRPC receiver
      - 4318:4318 # OTLP http receiver
      - 55679:55679 # zpages extension
      

tempo.yml


server:
  http_listen_port: 3200

distributor:
  receivers:                           # only the Jaeger receivers are enabled in this configuration.
    jaeger:                            # the receivers all come from the OpenTelemetry collector.  more configuration information can
      protocols:                       # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:                   #
        grpc:                          # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:

ingester:
  max_block_duration: 5m               # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h                # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal

storage:
  trace:
    backend: local                     # backend configuration to use
    wal:
      path: /tmp/tempo/wal             # where to store the wal locally
    local:
      path: /tmp/tempo/blocks

overrides:
  metrics_generator_processors: [service-graphs, span-metrics] # enables metrics generator

Is your Tempo container running? Does it have any errors in its logs? Did you try the latest release?

All containers are running, including Tempo, which shows no errors.

Try this one: GitHub - jangaraj/grafana-opentelemetry
It works on my computer :grinning:

Solved: the otlp receiver protocols were missing from the receivers section in tempo.yml (the section that reuses the OpenTelemetry Collector receiver configuration), so Tempo was never listening on 4317. Adding them fixed the connection refused error. Probably unrelated, but I also fixed the Prometheus exporter endpoint in the collector config.
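
For reference, the change merges an otlp receiver into the existing distributor receivers block in tempo.yml; the otlp gRPC receiver is what makes Tempo listen on 4317 (enable only the protocols you actually need):

distributor:
  receivers:
    otlp:
      protocols:
        http:
        grpc:

The Prometheus exporter change in otel-collector-config.yml was along these lines (a sketch; the exact value wasn't posted, but 0.0.0.0:8889 matches the port published in the compose file above):

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889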