MetricsGenerator in k8s not working

Hi,

I’m testing a Grafana Tempo chart (v2.4.2) on my k8s cluster, and everything works pretty well except the MetricsGenerator component:
no traces_spanmetrics_* metrics are generated.

tempo-config.yaml

multitenancy_enabled: false
cache:
  caches:
    - memcached:
        host: observability-rci-memcached
        service: memcache
        timeout: 500ms
        consistent_hash: true
      roles:
        - bloom
        - trace-id-index
compactor:
  compaction:
    block_retention: 48h
  ring:
    kvstore:
      store: memberlist
distributor:
  ring:
    kvstore:
      store: memberlist

  metric_received_spans:
    enabled: true

  receivers:
    jaeger:
      protocols:
        thrift_http:
          endpoint: 0.0.0.0:14268
        grpc:
          endpoint: 0.0.0.0:14250
    otlp:
      protocols:
        http:
          endpoint: 0.0.0.0:55681
        grpc:
          endpoint: 0.0.0.0:4317
querier:
  frontend_worker:
    frontend_address: observability-rci-grafana-tempo-query-frontend-headless:9095
ingester:
  lifecycler:
    ring:
      replication_factor: 1
      kvstore:
        store: memberlist
    tokens_file_path: /bitnami/grafana-tempo/data/tokens.json
metrics_generator:
  ring:
    kvstore:
      store: memberlist
  processor:
    service_graphs:
      # Wait is the value to wait for an edge to be completed.
      wait: 10s
    span_metrics:
      histogram_buckets: [0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512, 1.02, 2.05, 4.10]
      enable_target_info: true
    local_blocks:
      filter_server_spans: false

  registry:
    collection_interval: 15s
    external_labels:
      source: tempo
      cluster: test

  storage:
    path: /bitnami/grafana-tempo/data/wal
    remote_write:
      - send_exemplars: true
        url: http://rancher-monitoring-prometheus.cattle-monitoring-system.svc.cluster.local:9090/api/v1/write
memberlist:
  abort_if_cluster_join_fails: false
  join_members:
    - observability-rci-grafana-tempo-gossip-ring
overrides:
  per_tenant_override_config: /bitnami/grafana-tempo/conf/overrides.yaml
server:
  http_listen_port: 3200
storage:
  trace:
    backend: s3
    s3:
      bucket: ${S3_BUCKET}
      endpoint: ${S3_ENDPOINT}
      access_key: ${S3_ACCESS_KEY}
      secret_key: ${S3_SECRET_KEY}
    blocklist_poll: 5m
    local:
      path: /bitnami/grafana-tempo/data/traces
    wal:
      path: /bitnami/grafana-tempo/data/wal
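
For reference, Tempo 2.3 and newer also accept the same processor defaults inline in the main config’s overrides block, next to per_tenant_override_config. This is only a minimal sketch, assuming the Bitnami chart ships a recent enough Tempo:

overrides:
  per_tenant_override_config: /bitnami/grafana-tempo/conf/overrides.yaml
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks]
      max_active_series: 1000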

overrides.yaml

overrides:
  defaults:
    metrics_generator:
      processors: [service-graphs, span-metrics, local-blocks]
      max_active_series: 1000
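
In case the Tempo version inside the chart still expects the older flat per-tenant file format, the equivalent would look roughly like the sketch below (assuming the implicit tenant ID is "single-tenant" when multitenancy is disabled):

overrides:
  "single-tenant":
    metrics_generator_processors:
      - service-graphs
      - span-metrics
      - local-blocks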

Checking the distributor’s status page (http://distributor/metrics-generator/ring?tokens=true) shows that the metrics-generator is active and has 100% ownership.

Checking http://metrics-generator/status/overrides/defaults

Runtime overrides
Source of runtime overrides: defaults

metrics_generator:
  processors:
  - service-graphs
  - span-metrics
  - local-blocks
  ingestion_time_range_slack: 0s

Prometheus has remote-write enabled, and I tested it from Tempo’s namespace.
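
For reference, with rancher-monitoring (which wraps kube-prometheus-stack) the receiver side is usually switched on through the chart values / Prometheus CR. A minimal values sketch, assuming a prometheus-operator version that supports the field:

prometheus:
  prometheusSpec:
    enableRemoteWriteReceiver: true
    # older operator versions used enableFeatures: ["remote-write-receiver"] instead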

Checking Prometheus for metrics:

  • tempo_distributor_ingester_clients has 1 active client
  • tempo_distributor_metrics_generator_clients has 0 active clients
  • traces_spanmetrics_*: no metrics at all

I’ve probably missed or messed up some configuration, but I’m running out of clues…

Thanks

Hello Community,
Has anybody found the solution/explanation for this issue? Do we have to configure Grafana Alloy?

Is the "Tempo or GET to generate Service Graph data" option workable?
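
For context, "GET" in that option presumably means Grafana Enterprise Traces: the option describes Grafana’s Tempo data source reading service graph metrics that Tempo (or GET) has written into a Prometheus-type data source. A provisioning sketch with placeholder names, UIDs and URL, assuming the metrics-generator is remote-writing into the Prometheus mentioned above:

apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://observability-rci-grafana-tempo-query-frontend:3200
    jsonData:
      serviceMap:
        datasourceUid: prometheus  # UID of the Prometheus data source holding traces_service_graph_* / traces_spanmetrics_*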