I currently have Grafana, the OpenTelemetry Collector, Prometheus, and Tempo in a Docker Compose file. I have the following in my OpenTelemetry Collector config:
exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: "0.0.0.0:4327"
  otlphttp:
    endpoint: "https://localhost:4328"
processors:
  batch:
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp]
      processors: [batch]
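My understanding is that, because the collector and Tempo share the grafana_net network in Compose, the otlp exporter would normally point at the Tempo service name and its container-side port rather than 0.0.0.0 and the host-mapped port. The sketch below is roughly what I think it should look like; the tempo hostname, the port, and the insecure TLS setting are my assumptions, not something I have verified:

exporters:
  otlp:
    # assumption: address Tempo by its Compose service name on grafana_net,
    # using the port its OTLP gRPC receiver listens on inside the container
    endpoint: "tempo:4317"
    tls:
      insecure: true   # assuming plain (non-TLS) gRPC between the collector and Tempo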
I have the following in my Docker Compose file:
tempo:
  image: grafana/tempo:latest
  command: [ "-config.file=/etc/tempo.yaml" ]
  volumes:
    - ./shared/tempo.yaml:/etc/tempo.yaml
    - ./tempo-data:/tmp/tempo
  ports:
    # - "14268:14268"  # jaeger ingest
    - "3200:3200"      # tempo
    - "4327:4317"      # otlp grpc
    - "4328:4318"      # otlp http
    # - "9411:9411"    # zipkin
  networks:
    - grafana_net

grafana:
  image: grafana/grafana:latest
  # environment:
  #   - "GF_AUTH_DISABLE_LOGIN_FORM=true"
  #   - "GF_AUTH_ANONYMOUS_ENABLED=true"
  #   - "GF_AUTH_ANONYMOUS_ORG_ROLE=Admin"
  volumes:
    - ./shared/grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
  ports:
    - "3000:3000"
  networks:
    - grafana_net

otel-collector:
  image: otel/opentelemetry-collector
  command: [--config=/etc/otel-collector-config.yaml]
  volumes:
    - ./shared/otel-collector-config.yaml:/etc/otel-collector-config.yaml
  ports:
    - 1888:1888     # pprof extension
    - 8888:8888     # Prometheus metrics exposed by the collector
    - 8889:8889     # Prometheus exporter metrics
    - 13133:13133   # health_check extension
    - 4317:4317     # OTLP gRPC receiver
    - 4318:4318     # OTLP http receiver
    - 55679:55679   # zpages extension
  networks:
    - grafana_net
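The way I read those port mappings, the left-hand side is the host port and the right-hand side is the container port, so they only affect access from the host; my assumption is that containers on grafana_net reach each other on the container ports directly.

# how I read the Tempo port mappings (host:container)
- "4327:4317"   # host 4327 -> Tempo's 4317 (otlp grpc)
- "4328:4318"   # host 4328 -> Tempo's 4318 (otlp http)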
I have the following in my tempo.yaml config file:
server:
  http_listen_port: 3200

distributor:
  receivers:             # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:              # the receivers all come from the OpenTelemetry collector. more configuration information can
      protocols:         # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:
        grpc:            # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:4317
        grpc:
          endpoint: 0.0.0.0:4318
    opencensus:

ingester:
  max_block_duration: 5m        # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h         # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: local              # backend configuration to use
    wal:
      path: /tmp/tempo/wal      # where to store the wal locally
    local:
      path: /tmp/tempo/blocks

overrides:
  metrics_generator_processors: [service-graphs, span-metrics] # enables metrics generator
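One thing I am not sure about: in this tempo.yaml the otlp http receiver is bound to 4317 and the grpc receiver to 4318, while the port comments in my Compose file assume the conventional layout (gRPC on 4317, HTTP on 4318), i.e. something like this sketch. I don't know whether that mismatch is related to my problem:

    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317   # conventional OTLP gRPC port
        http:
          endpoint: 0.0.0.0:4318   # conventional OTLP HTTP port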
When the collector starts up, it tries to connect to Tempo's gRPC endpoint but gets the following error:
2023-04-28 12:06:16 2023-04-28T18:06:16.120Z warn zapgrpc/zapgrpc.go:195 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
2023-04-28 12:06:16 "Addr": "0.0.0.0:4327",
2023-04-28 12:06:16 "ServerName": "0.0.0.0:4327",
2023-04-28 12:06:16 "Attributes": null,
2023-04-28 12:06:16 "BalancerAttributes": null,
2023-04-28 12:06:16 "Type": 0,
2023-04-28 12:06:16 "Metadata": null
2023-04-28 12:06:16 }. Err: connection error: desc = "transport: Error while dialing: dial tcp 0.0.0.0:4327: connect: connection refused" {"grpc_log": true}
I have tried changing the endpoint ports, and I have also tried the HTTP (otlphttp) exporter, but that connection is refused as well.
Is there any way I can test that Tempo itself is working properly, or do you have any ideas of what to try next?
Thanks!