Tempo can't find the trace ID

I am trying to deploy a complete observability stack with Grafana, Prometheus, Loki, and Tempo, all of them deployed via Helm charts.

To generate test traces I'm using HotROD. The traces appear to be ingested correctly and show up in the trace list,

but searching for the trace ID directly doesn't return any results.

Tempo is deployed as a standalone monolith via Helm. The data source is provisioned like this:

      - name: Tempo
        type: tempo
        uid: tempo
        url: http://tempo.observability.svc.cluster.local:3100
        access: proxy
        jsonData:
          tracesToLogsV2:
            # Field with an internal link pointing to a logs data source in Grafana.
            # datasourceUid value must match the uid value of the logs data source.
            datasourceUid: 'loki'
            spanStartTimeShift: '1h'
            spanEndTimeShift: '-1h'
            tags: ['job', 'instance', 'pod', 'namespace']
            filterByTraceID: false
            filterBySpanID: false
            customQuery: true
            query: 'method="${__span.tags.method}"'
          tracesToMetrics:
            datasourceUid: 'prometheus'
            spanStartTimeShift: '1h'
            spanEndTimeShift: '-1h'
            tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
            queries:
              - name: 'Sample query'
                query: 'sum(rate(traces_spanmetrics_latency_bucket{$$__tags}[5m]))'
          serviceMap:
            datasourceUid: 'prometheus'
          nodeGraph:
            enabled: true
          search:
            hide: false
          lokiSearch:
            datasourceUid: 'loki'
          traceQuery:
            timeShiftEnabled: true
            spanStartTimeShift: '1h'
            spanEndTimeShift: '-1h'
          spanBar:
            type: 'Tag'
            tag: 'http.path'
        version: 1

The Tempo logs show the following:

level=info ts=2023-11-10T22:57:33.913148323Z caller=handler.go:135 tenant=single-tenant method=GET traceID=483964bc67157f19 url=/api/v2/search/tags duration=5.357145ms response_size=682 status=200
level=info ts=2023-11-10T22:57:34.004050804Z caller=handler.go:135 tenant=single-tenant method=GET traceID=0a18442f021cd8bf url="/api/traces/938c39c7f9314526c8d8?start=1699653153&end=1699653453" duration=4.741838ms response_size=0 status=404

level=info ts=2023-11-10T22:58:31.780169741Z caller=poller.go:218 msg="writing tenant index" tenant=single-tenant metas=1 compactedMetas=0
level=info ts=2023-11-10T22:58:31.782020228Z caller=poller.go:131 msg="blocklist poll complete" seconds=0.00240308
level=info ts=2023-11-10T22:58:31.782117782Z caller=poller.go:218 msg="writing tenant index" tenant=single-tenant metas=1 compactedMetas=0
level=info ts=2023-11-10T22:58:31.783033382Z caller=poller.go:131 msg="blocklist poll complete" seconds=0.001160989
level=info ts=2023-11-10T22:58:37.813950878Z caller=handler.go:135 tenant=single-tenant method=GET traceID=0eff4e0838007d5d url="/api/traces/938c39c7f9314526c8d8?start=1699653217&end=1699653517" duration=5.814736ms response_size=0 status=404
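
One detail stands out in the 404 lines: the ID being queried, 938c39c7f9314526c8d8, is 20 hex characters long, while trace IDs are normally 16 characters (64-bit) or 32 characters (128-bit). A quick standalone sanity check (not part of the stack, just illustrating the mismatch):

```python
# Sanity check: is the ID from the 404 log line a standard-length trace ID?
queried_id = "938c39c7f9314526c8d8"  # copied from the /api/traces/... 404 above

assert all(c in "0123456789abcdef" for c in queried_id)  # it is valid hex
print(len(queried_id))               # 20 -- neither 16 nor 32
print(len(queried_id) in (16, 32))   # False: the ID looks truncated
```

An ID of non-standard length is a strong hint that whatever extracts the trace ID from the logs is cutting it short before handing it to Tempo.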

Any idea what’s going wrong here?

Clicking on the listed traces doesn't work either.

I figured out the problem: the regular expression was not capturing the complete trace_id (the last 4 characters were missing), and hence Tempo could not find it.

Once I adjusted the trace_id regular expression, it started working :slight_smile:
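
For anyone hitting the same thing: the trace link comes from a derived field on the Loki data source, and the fix is making its matcherRegex capture the whole ID. A sketch of what the corrected Loki provisioning might look like (the field names follow Grafana's Loki provisioning schema; the service URL and the exact regex are assumptions about this particular setup):

      - name: Loki
        type: loki
        uid: loki
        url: http://loki.observability.svc.cluster.local:3100
        access: proxy
        jsonData:
          derivedFields:
            # Capture the FULL trace ID: \w+ keeps matching until the first
            # non-word character, so no trailing characters are dropped.
            - name: TraceID
              matcherRegex: 'traceID=(\w+)'
              datasourceUid: 'tempo'
              url: '$${__value.raw}'  # $$ escapes $ in provisioning files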

The key takeaway: test the regular expression against the Loki data source first and make sure it lists the complete trace ID.
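
The failure mode is easy to reproduce in isolation. A minimal sketch, where the fixed-width pattern `\w{16}` stands in for whatever the original over-restrictive regex was, and the log line uses a made-up trace ID:

```python
import re

# Hypothetical log line with a 24-character trace ID
log_line = "level=info traceID=938c39c7f9314526c8d8beef url=/api/traces"

# A fixed-width pattern silently truncates longer IDs...
truncated = re.search(r"traceID=(\w{16})", log_line).group(1)
# ...while an open-ended one captures the whole thing.
complete = re.search(r"traceID=(\w+)", log_line).group(1)

print(truncated)  # 938c39c7f9314526
print(complete)   # 938c39c7f9314526c8d8beef
```

With the truncated ID, Tempo's `/api/traces/<id>` lookup returns 404 exactly as in the logs above, even though the trace exists.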