Tempo Metrics Generator not writing to Prometheus

I’ve seen a fair few people trying to get this working, and it seems like the docs are missing an example for people to work from, so hopefully this thread can serve as that example.

Example prior post: How enable Service Graph with Tempo

I’m facing the same issue as others: I believe everything is configured and enabled, but no span metrics are being written into Prometheus, and I have no idea how to debug it and get it working.

Here’s how everything is set up right now:

docker-compose.yml

version: "3.8"

services:
  loki:
    image: grafana/loki:2.9.3
    command: -config.file=/etc/loki/local-config.yaml
    ports:
      - "3100:3100"

  prometheus:
    image: prom/prometheus:v2.45.0
    ports:
      - "9090:9090"
    volumes:
      - ./etc/prometheus:/workspace
    command:
      - --config.file=/workspace/prometheus.yml
      - --enable-feature=remote-write-receiver
      - --enable-feature=exemplar-storage
    depends_on:
      - loki


  tempo:
    image: grafana/tempo:2.3.1
    command: 
      - "--target=all"
      - "--storage.trace.backend=local"
      - "--storage.trace.local.path=/var/tempo"
      - "--auth.enabled=false"
      - "--config.file=/etc/tempo/tempo.yaml"
    ports:
      - "14250:14250"
    depends_on:
      - loki
    volumes:
      - ./etc/tempo:/etc/tempo


  grafana:
    image: grafana/grafana:10.2.2
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    volumes:
      - ./etc/grafana/:/etc/grafana/provisioning/datasources
      - ./etc/dashboards.yaml:/etc/grafana/provisioning/dashboards/dashboards.yaml
      - ./etc/dashboards:/etc/grafana/dashboards
    depends_on:
      - loki
      - prometheus

Traces from other services (which I’ve removed from the compose file for simplicity) are being received and show up fine in Tempo’s trace explorer. However, the service graph still shows its usual “This isn’t configured” message, and querying Prometheus for the span metrics shows that nothing is being published.

My tempo.yaml is:

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
    - url: http://prometheus:9090/api/v1/write
      send_exemplars: true

overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics]

And my grafana datasource.yaml is:

# config file version
apiVersion: 1

# list of datasources that should be deleted from the database
deleteDatasources:
  - name: Prometheus
    orgId: 1

# list of datasources to insert/update depending on
# what's available in the database
datasources:
  # <string, required> name of the datasource. Required
  - uid: prometheus
    orgId: 1
    name: Prometheus
    type: prometheus
    typeName: Prometheus
    access: proxy
    url: http://prometheus:9090
    password: ''
    user: ''
    database: ''
    basicAuth: false
    isDefault: true
    jsonData:
      exemplarTraceIdDestinations:
        - datasourceUid: tempo
          name: TraceID
      httpMethod: POST
    readOnly: false
    editable: true
  - uid: tempo
    orgId: 1
    name: Tempo
    type: tempo
    typeName: Tempo
    access: proxy
    url: http://tempo
    password: ''
    user: ''
    database: ''
    basicAuth: false
    isDefault: false
    jsonData:
      serviceMap:
        datasourceUid: 'prometheus'
      nodeGraph:
        enabled: true
      search:
        hide: false
      lokiSearch:
        datasourceUid: loki
      tracesToLogs:
        datasourceUid: loki
        filterBySpanID: false
        filterByTraceID: true
        mapTagNamesEnabled: false
        tags:
          - service
    readOnly: false
    editable: true
  - uid: loki
    orgId: 1
    name: Loki
    type: loki
    typeName: Loki
    access: proxy
    url: http://loki:3100
    password: ''
    user: ''
    database: ''
    basicAuth: false
    isDefault: false
    jsonData:
      derivedFields:
        - datasourceUid: tempo
          # enableNameMatcher: true  # swap when released: https://github.com/grafana/grafana/pull/76162
          matcherRegex: 'TraceID.*: "(\w+)"'
          name: trace_id
          url: $${__value.raw}
    readOnly: false
    editable: true

Traces coming through fine:

Service graph isn’t:

And the expected metrics in Prometheus don’t exist:

I’m unsure what else to try. The tempo.yaml explicitly enables the service-graphs and span-metrics processors in the default overrides, and Prometheus has its remote-write feature enabled. Does anyone have any idea what could be going wrong?


bump!

@samuelreay @vembry

Note the feature you enabled on Prometheus for receiving remote writes is deprecated.
You can use:

  - '--web.enable-remote-write-receiver'
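For example, the Prometheus service from the compose file above would then look roughly like this (same image and flags as before, with only the remote-write flag swapped; a sketch, not tested against your exact setup):

  prometheus:
    image: prom/prometheus:v2.45.0
    ports:
      - "9090:9090"
    volumes:
      - ./etc/prometheus:/workspace
    command:
      - --config.file=/workspace/prometheus.yml
      # replaces the deprecated --enable-feature=remote-write-receiver
      - --web.enable-remote-write-receiver
      - --enable-feature=exemplar-storage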

To provide an example with Tempo span metrics enabled, I created a sample setup.

This is the Tempo config I use, which generates the span metrics correctly:
https://github.com/cbos/observability-toolkit/blob/main/config/tempo/tempo-config.yaml

I think you are missing this under the metrics_generator config:


  processor:
    service_graphs:
    span_metrics:
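
Merged into the tempo.yaml posted above (and keeping the existing overrides block unchanged), the metrics_generator section would then look roughly like this; the processor sub-keys are left empty so Tempo picks up their defaults:

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  processor:
    service_graphs:
    span_metrics:
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true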

Note the feature you enabled on Prometheus for receiving remote writes is deprecated.
You can use:

 - '--web.enable-remote-write-receiver'

I added this to my Prometheus command in docker compose, and it accepts remote writes now!

I think you are missing this under the metrics_generator config:

 processor:
   service_graphs:
   span_metrics:

Thanks! I ended up browsing grafana/tempo’s docs and found the configs you mentioned.


Hi, can the Tempo metrics-generator remote write metrics directly to Mimir? Is that possible?

Hi @cbos, can you please help me here?

I don’t have experience with that, but I think it is possible.
This article describes how the OpenTelemetry Collector can use remote write to push metrics to Mimir.
So Tempo should be able to push to Mimir as well.
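
As a rough sketch (the mimir hostname, the port, and whether your Mimir instance needs authentication or a tenant header are assumptions here, so check them against your own deployment), the metrics_generator storage block would point its remote_write URL at Mimir’s push endpoint instead of Prometheus:

metrics_generator:
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      # Mimir's remote-write ingest endpoint; host and port are placeholders
      - url: http://mimir:9009/api/v1/push
        send_exemplars: true
        # headers:
        #   X-Scope-OrgID: your-tenant   # only if Mimir multitenancy is enabled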

Yes, we are able to send data to Mimir.
