How to enable Service Graph with Tempo

I cannot get Grafana Tempo to produce span metrics.
I am running Spring applications instrumented with the OpenTelemetry agent, and I have deployed Grafana, Prometheus, and Tempo using Docker Compose.
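
For context, each Spring application is launched with the agent attached and pointed at Tempo's OTLP receiver, roughly like this (the jar names and service name are placeholders, not my real values):

java -javaagent:./opentelemetry-javaagent.jar \
  -Dotel.service.name=my-spring-app \
  -Dotel.exporter.otlp.endpoint=http://localhost:4317 \
  -Dotel.exporter.otlp.protocol=grpc \
  -jar my-spring-app.jar
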
Here is my Docker Compose file:

  grafana:
    image: grafana/grafana:9.5.2 # this is for tempo 2
    container_name: grafana
    hostname: grafana 
    depends_on:
      tempo:
        condition: service_healthy
      prometheus:
        condition: service_healthy
    volumes:
      - ./config/grafana-bootstrap.ini:/etc/grafana/grafana.ini
      - ./config/grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    ports:
      - "3000:3000"
    healthcheck:
      interval: 5s
      retries: 10
      test: wget --no-verbose --tries=1 --spider http://localhost:3000 || exit 1

  prometheus:
    image: prom/prometheus:v2.41.0
    container_name: prometheus
    hostname: prometheus
    command:
      - --config.file=/etc/prometheus.yaml
      - --web.enable-remote-write-receiver
      - --enable-feature=exemplar-storage
    volumes:
      - ./config/prometheus.yaml:/etc/prometheus.yaml
    ports:
      - "9090:9090"
    healthcheck:
      interval: 5s
      retries: 10
      test: wget --no-verbose --tries=1 --spider http://localhost:9090/status || exit 1

  tempo:
    image: grafana/tempo:1.5.0
    command: [ "-search.enabled=true", "-config.file=/etc/tempo.yaml" ]
    container_name: tempo
    hostname: tempo
    volumes:
      - ./config/tempo-config.yaml:/etc/tempo.yaml
    ports:
      - "3200:3200"
      - "4317:4317"
      - "4318:4318"
    expose:
      - "42168"
    healthcheck:
      interval: 5s
      retries: 10
      test: wget --no-verbose --tries=1 --spider http://localhost:3200/status || exit 1

Here you can see the tempo-config.yaml file:

server:
  http_listen_port: 3200

distributor:
  search_tags_deny_list:
    - "instance"
    - "version"
  receivers:
    jaeger:
      protocols:
        thrift_http:
        grpc:
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  trace_idle_period: 10s
  max_block_bytes: 1_000_000
  max_block_duration: 5m

compactor:
  compaction:
    compaction_window: 1h
    max_block_bytes: 100_000_000
    block_retention: 1h
    compacted_block_retention: 10m

storage:
  trace:
    backend: local
    block:
      bloom_filter_false_positive: .05
      index_downsample_bytes: 1000
      encoding: zstd
    wal:
      path: /tmp/tempo/wal
      encoding: snappy
    local:
      path: /tmp/tempo/blocks
    pool:
      max_workers: 100
      queue_depth: 10000

metrics_generator:
  registry:
    collection_interval: 5s   
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

overrides:
  metrics_generator_processors:
    - service-graphs
    - span-metrics

Here you can see the prometheus.yaml file:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
- job_name: 'collector'
  scrape_interval: 15s
  static_configs:
    - targets: ['collector:6666']
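
Since Tempo pushes the generated metrics to Prometheus via remote write (enabled with --web.enable-remote-write-receiver above), no scrape job is strictly required for the span metrics. If you also want Prometheus to scrape Tempo's own internal metrics, a job along these lines should work (the job name is arbitrary):

- job_name: 'tempo'
  static_configs:
    - targets: ['tempo:3200']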

Finally, here are grafana-datasources.yaml and grafana-bootstrap.ini:

apiVersion: 1

datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  orgId: 1
  url: http://prometheus:9090
  basicAuth: false
  isDefault: false
  version: 1
  editable: true
  jsonData:
    httpMethod: GET
- name: Tempo
  type: tempo
  access: proxy
  orgId: 1
  url: http://tempo:3200
  jsonData:
    httpMethod: GET
    serviceMap:
      datasourceUid: 'Prometheus'
  basicAuth: false
  isDefault: true
  version: 1
  editable: true
  apiVersion: 1
  uid: tempo

And grafana-bootstrap.ini:

[feature_toggles]
enable = tempoSearch tempoBackendSearch tempoServiceGraph

I have also enabled Service Graph in the Tempo data source configuration in Grafana.

I can see the traces, but the Service Graph says there is no data available. I am using Tempo 1.5.x. I also tried 2.1.x (migrating the configuration) and got the same result.

I think I am missing something (maybe in the Prometheus configuration), but I do not know how to fix it.

Can you help me?
Thanks

Can you check whether you have all the span metrics in your Prometheus?

You can use Explore to check for the traces_spanmetrics_latency, traces_spanmetrics_calls_total, and traces_spanmetrics_size_total metrics.

Also see the metrics-generator configuration docs for the details of the metrics-generator config.
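
For example, you can query the Prometheus HTTP API directly from the Docker host (ports as published in the Compose file above); if the metrics-generator is running, its internal metrics should also show up on Tempo's /metrics endpoint:

# do the span metrics exist in Prometheus?
curl -s 'http://localhost:9090/api/v1/query?query=traces_spanmetrics_calls_total'

# is Tempo's metrics-generator reporting anything at all?
curl -s http://localhost:3200/metrics | grep metrics_generator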

Thanks for the response, @surajsidh.

The span metrics are missing in Prometheus. Maybe the issue is in the metrics_generator.storage config, but I have enabled remote write in Prometheus according to the documentation (see the docker-compose above):

      - --web.enable-remote-write-receiver

Tempo is configured to remote-write metrics to that endpoint (see tempo-config.yaml above):

    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

I cannot see any related messages in the logs.
(I have simplified the configuration a bit in this post.)

Do you know if there is an example of a Tempo configuration (using K8s or Docker Compose) with span metrics working?

Many thanks

Solved

I forgot this property in tempo-config.yaml:

metrics_generator_enabled: true
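
For Tempo 1.x this is a root-level property in tempo-config.yaml, at the same indentation level as server, distributor, and overrides:

# top of tempo-config.yaml
metrics_generator_enabled: true

server:
  http_listen_port: 3200

(If you are on Tempo 2.x the way this is enabled changed, so check the metrics-generator documentation for your version.)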

Hi, where do I add this property? I am having the same problem.
I added it at the root level of the YAML file, but it is still not working.

I wanted to chime in in case someone else has an issue similar to mine.

I am setting this up on VMs instead of K8s or other container platforms.

I had the following in my Tempo config.yaml file:

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: linux-microservices
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
    - url: http://x.x.x.x:9090/api/v1/write
      send_exemplars: true

I had Grafana and Prometheus set up on their own VMs as well. I was seeing trace data from Tempo in Grafana, but I wasn't seeing the service graphs.

In my case, I needed to add --enable-feature=remote-write-receiver and --enable-feature=exemplar-storage to my Prometheus systemd unit. My full Prometheus systemd unit file is as follows:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yaml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--enable-feature=remote-write-receiver \
--enable-feature=exemplar-storage


[Install]
WantedBy=multi-user.target

I did a systemctl daemon-reload && systemctl restart prometheus, and the service graphs started showing up in Grafana.
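
To confirm, you can also query Prometheus for the service-graph series directly (x.x.x.x being the Prometheus VM, as above):

curl -s 'http://x.x.x.x:9090/api/v1/query?query=traces_service_graph_request_total'

If that returns series, the Service Graph panel in Grafana should have data to work with.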

For the record, I am using the following for my Grafana data sources, as I wasn't able to figure out how to configure Tempo as a data source in the Grafana web UI:

apiVersion: 1

datasources:
- name: Prometheus
  type: prometheus
  uid: prometheus
  access: proxy
  orgId: 1
  url: http://localhost:9090
  basicAuth: false
  isDefault: false
  version: 1
  editable: false
  jsonData:
    httpMethod: GET
- name: Tempo
  type: tempo
  access: proxy
  orgId: 1
  url: http://x.x.x.x:3200
  basicAuth: false
  isDefault: true
  version: 1
  editable: false
  apiVersion: 1
  uid: tempo
  jsonData:
    httpMethod: GET
    serviceMap:
      datasourceUid: prometheus
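
One thing to double-check with this approach: the serviceMap.datasourceUid under the Tempo data source has to match the uid of the Prometheus data source (both are prometheus here), otherwise the service graph panel will not know where to look for the generated metrics.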
