Hello, I have the following problem: when I query traces in Grafana, it keeps loading and never shows me any results.
Configuration:
Tomcat → (port 4317) OTel Collector → (port 4017) Tempo → (port 3200) Grafana
Tomcat configuration (JVM agent flags):
-javaagent:/opt/tempo/opentelemetry-javaagent.jar
-Dotel.traces.exporter=otlp
-Dotel.exporter.otlp.endpoint=http://10.250.2.89:4317
-Dotel.metrics.exporter=none
-Dotel.logs.exporter=none
-Dotel.exporter.otlp.protocol=grpc
-Dotel.service.name=ehcos04
-Dotel.resource.attributes=application=ehcos04
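For reference, the same settings in environment-variable form (the standard OpenTelemetry agent variables, e.g. in Tomcat's setenv.sh; this is just an equivalent way of writing the flags above, not something extra in my setup):

export CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/tempo/opentelemetry-javaagent.jar"
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://10.250.2.89:4317
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_METRICS_EXPORTER=none
export OTEL_LOGS_EXPORTER=none
export OTEL_SERVICE_NAME=ehcos04
export OTEL_RESOURCE_ATTRIBUTES=application=ehcos04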
OTel Collector configuration:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:55681

exporters:
  logging:
    loglevel: debug
  otlp/tempo:
    endpoint: "10.250.2.89:4017"
    tls:
      insecure: true
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp/tempo]
    metrics:
      receivers: [otlp]
      exporters: [logging, prometheus]
    logs:
      receivers: [otlp]
      exporters: [logging]
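One thing I am not sure about here: the collector receives on 4317 and its otlp/tempo exporter sends to 10.250.2.89:4017, but my Tempo config below does not set an explicit endpoint for its OTLP gRPC receiver, which as far as I know defaults to port 4317. If Tempo is really supposed to listen on 4017 (rather than 4017 being, say, a docker-compose port mapping), I assume the receiver would need to be pinned to that port, roughly like this (untested sketch):

distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: "0.0.0.0:4017"  # assumption: match the collector's otlp/tempo target port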
In the OTel Collector logs I can see that data is arriving from Tomcat.

OTel Collector logs:
otel-collector-1 | Span #335
otel-collector-1 | Trace ID : ae6f30492d3b80c2c13ba15e63179196
otel-collector-1 | Parent ID : 3177c70f6d39c586
otel-collector-1 | ID : 6364d5ab28235838
otel-collector-1 | Name : Transaction.commit
otel-collector-1 | Kind : Internal
otel-collector-1 | Start time : 2024-05-28 19:55:31.517557901 +0000 UTC
otel-collector-1 | End time : 2024-05-28 19:55:31.518230479 +0000 UTC
otel-collector-1 | Status code : Unset
otel-collector-1 | Status message :
otel-collector-1 | Attributes:
otel-collector-1 | -> thread.id: Int(224)
otel-collector-1 | -> thread.name: Str(Clinic Scheduler_Worker-7)
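So spans are clearly reaching the collector; the next hop to verify is Tempo itself. Assuming Tempo's HTTP port 3200 is reachable from this host, it can be queried directly for the trace ID shown above, bypassing Grafana entirely:

curl -s http://10.250.2.89:3200/api/traces/ae6f30492d3b80c2c13ba15e63179196
curl -s "http://10.250.2.89:3200/api/search?limit=10"

If these return the trace (or recent traces), the problem is between Grafana and Tempo; if not, it is between the collector and Tempo.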
Tempo configuration:
stream_over_http_enabled: true
server:
  http_listen_port: 3200
  log_level: info
  grpc_server_max_recv_msg_size: 1.572864e+07
  grpc_server_max_send_msg_size: 1.572864e+07

query_frontend:
  search:
    duration_slo: 5s
    throughput_bytes_slo: 1.073741824e+09
  trace_by_id:
    duration_slo: 5s

distributor:
  receivers:      # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:       # the receivers all come from the OpenTelemetry collector. more configuration information can
      protocols:  # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:
        grpc:     # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m  # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h  # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://10.250.2.89:9090/prometheus/api/v1/write
        send_exemplars: true
  traces_storage:
    path: /var/tempo/generator/traces

storage:
  trace:
    backend: local  # backend configuration to use
    wal:
      path: /var/tempo/wal  # where to store the wal locally
    local:
      path: /var/tempo/blocks

overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks]  # enables metrics generator
    global:
      max_bytes_per_trace: 50000000
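The Grafana side is not shown above; it is presumably just a Tempo data source pointing at port 3200, roughly like this as a sketch (provisioning-file form, URL taken from the diagram at the top):

apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://10.250.2.89:3200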
Tempo logs:
tempo-1 | level=warn ts=2024-05-28T19:57:43.561953057Z caller=pool.go:250 msg="removing distributor_metrics_generator_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=warn ts=2024-05-28T19:57:43.562233573Z caller=pool.go:250 msg="removing distributor_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=info ts=2024-05-28T19:57:48.997102332Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=804
tempo-1 | level=warn ts=2024-05-28T19:57:58.561657195Z caller=pool.go:250 msg="removing distributor_metrics_generator_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=warn ts=2024-05-28T19:57:58.56179246Z caller=pool.go:250 msg="removing distributor_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=info ts=2024-05-28T19:58:03.997819565Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=804
tempo-1 | level=warn ts=2024-05-28T19:58:13.561861786Z caller=pool.go:250 msg="removing distributor_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=warn ts=2024-05-28T19:58:13.562018324Z caller=pool.go:250 msg="removing distributor_metrics_generator_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=info ts=2024-05-28T19:58:18.995888045Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=804
tempo-1 | level=warn ts=2024-05-28T19:58:28.562238676Z caller=pool.go:250 msg="removing distributor_metrics_generator_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
tempo-1 | level=warn ts=2024-05-28T19:58:28.562483142Z caller=pool.go:250 msg="removing distributor_pool failing healthcheck" addr=127.0.0.1:9095 reason="failing healthcheck status: NOT_SERVING"
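Apart from these NOT_SERVING warnings, Tempo's own metrics should show whether the distributor is receiving spans at all (assuming the /ready and /metrics endpoints are exposed on the HTTP port):

curl -s http://10.250.2.89:3200/ready
curl -s http://10.250.2.89:3200/metrics | grep tempo_distributor_spans_received_total

If tempo_distributor_spans_received_total stays at zero while the collector keeps logging spans, the otlp/tempo hop on port 4017 would be the place to look.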

