Traces not found in Grafana or in jaeger-ui

I moved from Tempo to Jaeger on Kubernetes, but traces no longer come in. I tried to push a simple trace with this curl:

```
curl -v -X POST http://localhost:9411 -H 'Content-Type: application/json' -d '[{ "id": "352bff9a74ca9ad2", "traceId": "5af7183fb1d4ce", "timestamp": 1606732108000, "duration": 1431, "name": "creditAmount", "tags": { "http.method": "GET", "http.path": "/api" } }]'
```

I saw in the logs that the trace was received, but I cannot see it either in Grafana with Tempo as a data source or in the jaeger-ui.

In the logs, I got this, which looks suspicious:

```
empo-0\\x12\\x8b\\x05\\n\\x0e127.0.0.1:9095\\x10\\xfc\\x98\\x93\""
level=debug ts=2020-11-30T10:42:09.79140194Z caller=mock.go:86 msg=CAS key=collectors/ring modify_index=222 value="\"\\x9a\\x05\\xf4\\x99\\x02\\n\\x97\\x05\\n\\atempo-0\\x12\\x8b\\x05\\n\\x0e127.0.0.1:9095\\x10\\x81\\x99\\x93\""
level=debug ts=2020-11-30T10:42:09.791427882Z caller=mock.go:159 msg=Get key=collectors/ring modify_index=223 value="\"\\x9a\\x05\\xf4\\x99\\x02\\n\\x97\\x05\\n\\atempo-0\\x12\\x8b\\x05\\n\\x0e127.0.0.1:9095\\x10\\x81\\x99\\x93\""
level=debug ts=2020-11-30T10:42:09.791452953Z caller=mock.go:113 msg=Get key=collectors/ring wait_index=223
level=debug ts=2020-11-30T10:42:11.789997409Z caller=mock.go:149 msg="Get - deadline exceeded" key=collectors/ring
level=debug ts=2020-11-30T10:42:11.790055669Z caller=mock.go:113 msg=Get key=collectors/ring wait_index=223
level=debug ts=2020-11-30T10:42:13.790088144Z caller=mock.go:149 msg="Get - deadline exceeded" key=collectors/ring
```


My configuration looks like this:
```
auth_enabled: false
compactor:
  compaction:
    compacted_block_retention: 24h

distributor:
  receivers:
    jaeger:
      protocols:
        thrift_compact:
          endpoint: 0.0.0.0:6831
        thrift_binary:
          endpoint: 0.0.0.0:6832
    zipkin:
ingester:
  trace_idle_period: 10s               # the length of time after a trace has not received spans to consider it complete and flush it
  traces_per_block: 100                # cut the head block when it hits this number of traces or ...
  max_block_duration: 5m               #   this much time passes
  lifecycler:
    ring:
      replication_factor: 2   # number of replicas of each span to make while pushing to the backend

server:
  http_listen_port: 3100
  log_level: debug

storage:
  trace:
    backend: s3
    s3: 
      bucket: dashblock-tempo
      endpoint: s3.us-east-1.amazonaws.com
    wal:
      path: /var/tempo/wal
```

It appears that you’re missing your ring storage config. We use memberlist with the following config at the root of the file:

```
memberlist:
    abort_if_cluster_join_fails: false
    bind_port: 7946
    join_members:
      - <DNS Entry for all pods>:7946
```

You can see an example configmap here:

We use a headless k8s Service that points to all Tempo pods so that every pod gossips the state of the ring. It is important that the Service is headless because only then does its DNS entry resolve to the individual IPs of every endpoint in the Service.
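For reference, a minimal sketch of such a headless Service (the `tempo-gossip-ring` name and the `app: tempo` selector are assumptions; match them to your own labels and use the Service's DNS name in `join_members`):

```
# Hypothetical headless Service for memberlist gossip; adjust name/selector to your deployment.
apiVersion: v1
kind: Service
metadata:
  name: tempo-gossip-ring        # assumption: use this DNS name in memberlist.join_members
spec:
  clusterIP: None                # headless: DNS returns every pod IP instead of a single virtual IP
  selector:
    app: tempo                   # assumption: must match the labels on your Tempo pods
  ports:
    - name: gossip
      port: 7946                 # memberlist bind_port from the config above
      targetPort: 7946
      protocol: TCP
```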

Hey, thanks for the response.

Do I need this configuration even if I run Tempo as a single binary?

Nope. If you have a single instance of the single binary, I would drop the replication factor to 1. RF > 1 doesn’t make sense with a single ingester.
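In the config you posted, that would just mean changing the ingester ring setting, along the lines of this sketch:

```
# Single-binary / single-ingester deployment: keep one copy of each span.
ingester:
  lifecycler:
    ring:
      replication_factor: 1
```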

Try pushing spans again and check the following metrics for me:

```
tempo_distributor_spans_received_total
tempo_ingester_traces_created_total
```

Wait a good minute after pushing the spans before checking the second one, as the ingester doesn’t count a trace as created until it moves it out of the active traces map. Also double-check the logs and see if there are any additional errors.
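If Prometheus isn’t already scraping Tempo, a minimal scrape job pointed at the `http_listen_port` from your config would expose these counters; the job name and target address below are assumptions, and Tempo serves its metrics on `/metrics` of that port:

```
# Hypothetical Prometheus scrape job for the Tempo single binary.
scrape_configs:
  - job_name: tempo                 # assumption: any job name works
    metrics_path: /metrics          # Tempo exposes Prometheus metrics on its http_listen_port
    static_configs:
      - targets: ['tempo:3100']     # assumption: replace with your Tempo service DNS name
```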
