[Tempo Search Beta] 0 Series returned

Hello, we are running Grafana 8.4 & Tempo 1.2.1 and would like to try the promising Tempo Search feature.

We enabled the feature toggles tempoSearch and tempoBackendSearch in Grafana, and the search functionality in the Helm chart for Tempo:

search:
  # -- Enable Tempo search
  enabled: true
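
For reference, here is how the Grafana side can be enabled in grafana.ini (a sketch based on Grafana's standard feature-toggle configuration; adjust the path and method for your deployment):

```ini
[feature_toggles]
# Space-separated list of feature toggles to enable
enable = tempoSearch tempoBackendSearch
```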

When I try a Tempo search, I can see that the fields are populated, but I get 0 series, even though I know we push traces for this service every minute:

I can see the requests incoming in the Query Frontend:

level=info ts=2022-03-04T16:17:41.73750736Z caller=handler.go:94 tenant=single-tenant method=GET traceID=57e22543b9145f47 url="/api/search?tags=%20service.name%3D%22dsc%3Adsc-insertion-router-aggregator%22&limit=20&start=1646407061&end=1646410661" duration=4.150722ms response_size=78 status=200
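Decoding the tags parameter from that log line (a quick sketch using Python's standard library) shows what the query frontend actually receives; note the leading space that the %20 decodes to at the start of the value:

```python
from urllib.parse import unquote

# tags value copied from the query-frontend log line above
encoded = "%20service.name%3D%22dsc%3Adsc-insertion-router-aggregator%22"
decoded = unquote(encoded)
print(repr(decoded))  # -> ' service.name="dsc:dsc-insertion-router-aggregator"'
```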

I was expecting to see some corresponding logs in the ingesters, but there are none.

Any idea of what could be missing, or what I could investigate?

Here are our Tempo Helm custom values:

queryFrontend:
  query:
    # -- Required for grafana version <7.5 for compatibility with jaeger-ui. Doesn't work on ARM arch
    enabled: false

ingester:
  replicas: 2

search:
  # -- Enable Tempo search
  enabled: true

traces:
  jaeger:
    # -- Enable Tempo to ingest Jaeger GRPC traces
    grpc: true
    # -- Enable Tempo to ingest Jaeger Thrift Binary traces
    thriftBinary: false
    # -- Enable Tempo to ingest Jaeger Thrift Compact traces
    thriftCompact: false
    # -- Enable Tempo to ingest Jaeger Thrift HTTP traces
    thriftHttp: false
  # -- Enable Tempo to ingest Zipkin traces
  zipkin: false
  otlp:
    # -- Enable Tempo to ingest OpenTelemetry HTTP traces
    http: false
    # -- Enable Tempo to ingest OpenTelemetry GRPC traces
    grpc: true
  # -- Enable Tempo to ingest OpenCensus traces
  opencensus: false

# ServiceMonitor configuration
serviceMonitor:
  # -- If enabled, ServiceMonitor resources for Prometheus Operator are created
  enabled: true
  # -- Namespace selector for ServiceMonitor resources
  namespaceSelector: { matchNames: ["monitoring"]}
  # -- Additional ServiceMonitor labels
  labels: {app.kubernetes.io/monitoring: prometheus}

A few things I would try:

  1. Remove all criteria from your search and see if it returns anything
  2. Use metrics like tempo_distributor_bytes_received_total to confirm the backend is receiving data
  3. Upgrade to the latest release, v1.3.2, from the grafana/tempo releases page on GitHub
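
For step 2, a PromQL query along these lines (assuming Tempo's default metric names are being scraped by your Prometheus) can confirm the distributors are receiving data:

```promql
sum(rate(tempo_distributor_bytes_received_total[5m]))
```

If this returns a non-zero rate, ingestion is working and the problem is downstream in search.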

Hello, the latest version helps a lot. I didn’t notice it was already released, my apologies!