Error Grafana Tempo

Hello, I have the following problem collecting data with the OTel Java agent from an application that runs on Tomcat.

I see the following error in the catalina.out logs:

[otel.javaagent 2024-04-28 19:57:59:494 -0300] [OkHttp http://10.250.2.89:4318/...] WARN io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export metrics. Server responded with HTTP status code 404. Error message: Unable to parse response body, HTTP status message: Not Found

The strange thing is that I do see data in Grafana, but not much else.
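
My suspicion (not yet verified) is that Tempo only accepts traces over OTLP, so the agent's trace export to /v1/traces succeeds (hence the data in Grafana) while its metrics export to /v1/metrics gets the 404. A quick check from the Tomcat host could be:

# POST an empty OTLP payload to both signal paths; if the suspicion is
# right, /v1/traces should answer 200 and /v1/metrics 404
curl -s -o /dev/null -w "traces:  %{http_code}\n" -X POST \
  -H "Content-Type: application/json" -d '{}' http://10.250.2.89:4318/v1/traces
curl -s -o /dev/null -w "metrics: %{http_code}\n" -X POST \
  -H "Content-Type: application/json" -d '{}' http://10.250.2.89:4318/v1/metrics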

Additionally, I see that the container tempo_k6-tracing_1 fails:

k6-tracing_1  | time="2024-04-28T22:58:34Z" level=error msg="GoError: The moduleSpecifier \"https://jslib.k6.io/k6-utils/1.2.0/index.js\" couldn't be retrieved from the resolved url \"https://jslib.k6.io/k6-utils/1.2.0/index.js\". Error : \"Get \"https://jslib.k6.io/k6-utils/1.2.0/index.js\": dial tcp 3.162.125.16:443: i/o timeout\"\n\tat go.k6.io/k6/js.(*requireImpl).require-fm (native)\n\tat file:///example-script.js:3:0(38)\n" hint="script exception"
tempo_k6-tracing_1 exited with code 107
tempo_1       | level=info ts=2024-04-28T22:58:45.760393023Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
tempo_1       | level=info ts=2024-04-28T22:59:00.759549133Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
tempo_1       | level=info ts=2024-04-28T22:59:15.761717937Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
tempo_1       | level=info ts=2024-04-28T22:59:30.762478344Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
k6-tracing_1  | time="2024-04-28T22:59:36Z" level=error msg="GoError: The moduleSpecifier \"https://jslib.k6.io/k6-utils/1.2.0/index.js\" couldn't be retrieved from the resolved url \"https://jslib.k6.io/k6-utils/1.2.0/index.js\". Error : \"Get \"https://jslib.k6.io/k6-utils/1.2.0/index.js\": dial tcp 3.162.125.117:443: i/o timeout\"\n\tat go.k6.io/k6/js.(*requireImpl).require-fm (native)\n\tat file:///example-script.js:3:0(38)\n" hint="script exception"
tempo_k6-tracing_1 exited with code 107
tempo_1       | level=info ts=2024-04-28T22:59:45.760319402Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
tempo_1       | level=info ts=2024-04-28T23:00:00.761664776Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
tempo_1       | 2024/04/28 23:00:06 http: superfluous response.WriteHeader call from github.com/opentracing-contrib/go-stdlib/nethttp.(*statusCodeTracker).WriteHeader (status-code-tracker.go:17)
tempo_1       | level=info ts=2024-04-28T23:00:15.761191871Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
tempo_1       | level=info ts=2024-04-28T23:00:30.762008437Z caller=registry.go:232 tenant=single-tenant msg="collecting metrics" active_series=938
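
As far as I can tell, the k6-tracing failure is a separate issue: the dial tcp ... i/o timeout means the container cannot reach jslib.k6.io, i.e. it has no outbound internet access from the Docker network. A throwaway container on the same network could confirm this (tempo_default is my guess at the network name, based on the tempo_ container prefix):

# If egress is blocked, this should time out the same way the k6 script does
docker run --rm --network tempo_default alpine:3 \
  wget -qO /dev/null -S https://jslib.k6.io/k6-utils/1.2.0/index.js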

Conf tempo:

stream_over_http_enabled: true
server:
  http_listen_port: 3200
  log_level: info
  grpc_server_max_recv_msg_size: 1.572864e+07
  grpc_server_max_send_msg_size: 1.572864e+07

query_frontend:
  search:
    duration_slo: 5s
    throughput_bytes_slo: 1.073741824e+09
  trace_by_id:
    duration_slo: 5s

distributor:
  receivers:                           # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:                            # the receivers all come from the OpenTelemetry Collector. more configuration information can
      protocols:                       # be found here: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:                   #
        grpc:                          # for a production deployment you should only enable the receivers you need! (see the trimmed sketch after this config)
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m               # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h                # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /var/tempo/generator/wal
    remote_write:
      - url: http://XXXXXXX:9090/prometheus/api/v1/write
        send_exemplars: true
  traces_storage:
    path: /var/tempo/generator/traces

storage:
  trace:
    backend: local                     # backend configuration to use
    wal:
      path: /var/tempo/wal             # where to store the wal locally
    local:
      path: /var/tempo/blocks

overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks] # enables metrics generator
    global:
      max_bytes_per_trace: 90000000
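
Since the comment in the receivers block says a production deployment should only enable what is needed, I assume the distributor could be trimmed to OTLP only, which is all my setup actually uses (the Java agent on 4318 and k6-tracing on 4317). A sketch of that change, untested on my side:

distributor:
  receivers:
    otlp:
      protocols:
        http:          # 4318, used by the OTel Java agent on Tomcat
        grpc:          # 4317, used by k6-tracing (ENDPOINT=tempo:4317)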

Conf docker-compose (tempo):

services:

  # Tempo runs as user 10001, and docker compose creates the volume as root.
  # As such, we need to chown the volume in order for Tempo to start correctly.
  init:
    image: &tempoImage grafana/tempo:latest
    user: root
    entrypoint:
      - "chown"
      - "10001:10001"
      - "/var/tempo"
    volumes:
      - ./tempo-data:/var/tempo

  tempo:
    image: *tempoImage
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./shared/tempo.yaml:/etc/tempo.yaml
      - ./tempo-data:/var/tempo
    ports:
      - "14268:14268"  # jaeger ingest
      - "3200:3200"   # tempo
      - "9095:9095" # tempo grpc
      - "4317:4317"  # otlp grpc
      - "4318:4318"  # otlp http
      - "9411:9411"   # zipkin
    depends_on:
      - init

  k6-tracing:
    image: ghcr.io/grafana/xk6-client-tracing:latest
    environment:
      - ENDPOINT=tempo:4317
    restart: always
    depends_on:
      - tempo
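
Since k6-tracing is only the demo load generator and is not involved in ingesting the Tomcat traces, I assume it can simply be stopped while the egress problem is sorted out:

# Stop just the demo load generator; tempo itself keeps running
docker-compose stop k6-tracing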

Conf OTEL Agent (setenv.sh):

CATALINA_OPTS="$CATALINA_OPTS
-javaagent:/opt/tempo/opentelemetry-javaagent.jar
$CUSTOM_ENV"

export OTEL_EXPORTER_OTLP_ENDPOINT=http://10.250.2.89:4318
export OTEL_RESOURCE_ATTRIBUTES=service.name=ehcos-04-pre
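
If the 404 really is Tempo rejecting non-trace OTLP signals (my assumption above), I suppose the agent could be limited to exporting traces only. These are the standard OTel Java agent environment variables for that, added below the existing exports; I have not yet confirmed this makes the warning go away:

# Tempo ingests traces only, so stop exporting the other signals
export OTEL_METRICS_EXPORTER=none
export OTEL_LOGS_EXPORTER=none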