Unable to generate service graph for Rate, Errors, and Duration (RED) signals

Traces-to-metrics is not happening; I need help to achieve this.

Below is a screenshot of what I expect to see (refer: Service graph view | Grafana Tempo documentation).

Current Situation :)

Here is the current Tempo config file:

server:
  http_listen_port: 3200

distributor:
  receivers:                           # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:                            # the receivers all come from the OpenTelemetry collector.  more configuration information can
      protocols:                       # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:                   #
        grpc:                          # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m               # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h                # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: s3                        # backend configuration to use
    wal:
      path: /tmp/tempo/wal             # where to store the wal locally
    s3:
      bucket: tempo-grafana                    # how to store data in s3
      endpoint: s3-ap-south-1.amazonaws.com
      insecure: true
      

overrides:
  metrics_generator_processors: [service-graphs, span-metrics] # enables metrics generator

Can you check whether the metrics-generator metrics (the traces_spanmetrics_* and traces_service_graph_* series) are coming into your configured Prometheus datasource?

To use service graphs, you need to enable the metrics-generator. Can you double-check and make sure that the metrics-generator is enabled? See the config steps here.

If you are using Grafana older than 9.0.4, service graphs were hidden behind the tempoServiceGraph feature toggle, so you need to enable it.
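For reference, here is a minimal sketch of the pieces that need to be in place on the Tempo side (the URLs are taken from the config you posted, so treat this as an illustration rather than a drop-in file):

metrics_generator:
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write   # Prometheus must be started with --web.enable-remote-write-receiver
        send_exemplars: true

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]   # enables the service-graphs and span-metrics processors

And, only if your Grafana is older than 9.0.4, the feature toggle can be set through the container environment, for example:

environment:
  - GF_FEATURE_TOGGLES_ENABLE=tempoServiceGraph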

Thanks for your reply @surajsidh.

Let me describe my use case here so that you can suggest a proper solution.

I am instrumenting a Node.js app with OpenTelemetry auto-instrumentation.

Here is the Node.js code :)

// Imports assumed (they are not shown in the original post); the exporter package
// variants (-grpc vs -http/-proto) are a guess based on the gRPC port 4317 used below.
const opentelemetry = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-grpc');

const sdk = new opentelemetry.NodeSDK({
  traceExporter: new OTLPTraceExporter({
    // optional - default url is http://localhost:4318/v1/traces
    url: "http://localhost:4317/v1/traces",
    // optional - collection of custom headers to be sent with each request, empty by default
    headers: {},
  }),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: 'http://localhost:4317/v1/metrics', // url is optional and can be omitted - default is http://localhost:4318/v1/metrics
      headers: {}, // an optional object containing custom headers to be sent with each request
      concurrencyLimit: 1, // an optional limit on pending requests
    }),
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();

Here is the docker-compose file:

version: "3"
services:

  #And put them in an OTEL collector pipeline...
  otel-collector:
    image: otel/opentelemetry-collector:0.25.0
    ports:
      - "6831:6831"
      - "4317:4317"
      - "4318:4318"
      - "9464:9464"
    volumes:
      - ./otel-collector.yaml:/config/otel-collector.yaml
    command:
      - --config=/config/otel-collector.yaml

  dummy-server:
    build: ./src
    ports:
      - "4000:4000"
      - "80:80"
    depends_on:
      - tempo
      - loki
  
  loki:
    image: grafana/loki
    ports:
      - "3100:3100"

  minio:
    image: minio/minio:latest
    environment:
      - MINIO_ACCESS_KEY=tempo
      - MINIO_SECRET_KEY=supersecret
    ports:
      - "9001:9001"
    entrypoint:
      - sh
      - -euc
      - mkdir -p /data/tempo && /opt/bin/minio server /data --console-address ':9001'
    

  tempo:
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./shared/tempo.yaml:/etc/tempo.yaml
      - ./tempo-data:/tmp/tempo
    ports:
      - "3200:3200"   # tempo
    depends_on:
      - minio

  prometheus:
    image: prom/prometheus:latest
    command:
      - --config.file=/etc/prometheus.yaml
      - --web.enable-remote-write-receiver
      - --enable-feature=exemplar-storage
    volumes:
      - ./shared/prometheus.yaml:/etc/prometheus.yaml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:9.3.2
    volumes:
      - ./shared/grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
      - GF_FEATURE_TOGGLES_ENABLE=traceqlEditor
    ports:
      - "3000:3000"

OTel Collector config file:

receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: tempo:4317
  prometheus:
    endpoint: "0.0.0.0:9464"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]

Tempo config file:

server:
  http_listen_port: 3200

distributor:
  receivers:                           # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:                            # the receivers all come from the OpenTelemetry collector.  more configuration information can
      protocols:                       # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:                   #
        grpc:                          # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m               # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h                # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: s3                        # backend configuration to use
    wal:
      path: /tmp/tempo/wal             # where to store the wal locally
    s3:
      bucket: tempo                    # how to store data in s3
      endpoint: minio:9000
      access_key: tempo
      secret_key: supersecret
      insecure: true

overrides:
  metrics_generator_processors: [service-graphs, span-metrics] # enables metrics generator



Prometheus config file:


global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'tempo'
    static_configs:
      - targets: ['tempo:3200']
  - job_name: 'otel-collector'
    static_configs:
      - targets: ['otel-collector:9464']

Hi @sagarmeadepu, can you see if you have traces_spanmetrics_latency, traces_spanmetrics_calls_total, and traces_spanmetrics_size_total in your Prometheus? If these metrics are not coming in, it means that they are either not being generated or not reaching Prometheus.

Unfortunately, it's hard for me to replicate this setup without investing significant time.

I would recommend checking out our demo repo: GitHub - grafana/intro-to-mlt: Introduction to Metrics, Logs and Traces session companion code.

You can run it locally with docker-compose, play around with it, and see how it's configured. Hopefully, this demo will help track down your issue :)

If the issue still persists, can you share logs from Tempo, Prometheus, and your application?
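One more thing worth double-checking: for the service graph view, the Tempo datasource in Grafana has to point at the Prometheus datasource that receives the generated metrics. A minimal provisioning sketch (the UIDs below are assumptions; adjust them to your grafana-datasources.yaml):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    uid: prometheus                 # assumed UID, must match serviceMap.datasourceUid below
    url: http://prometheus:9090
  - name: Tempo
    type: tempo
    uid: tempo
    url: http://tempo:3200
    jsonData:
      serviceMap:
        datasourceUid: prometheus   # tells Grafana where to query the service graph metrics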

Yes, these metrics are not available.

As you have suggested, let me play with the GitHub example. Thanks.

Or, since I already have my setup available, we could get on a call to check this one.


@surajsidh,

In this GitHub example "GitHub - grafana/intro-to-mlt: Introduction to Metrics, Logs, and Traces session companion code", the application exposes its metrics on a /metrics URL. In my scenario, I would like to export them to the OpenTelemetry Collector at the "http://localhost:4317/v1/metrics" URL.

I have tried changing some of the settings, but I am still getting this error:

{"stack":"Error: PeriodicExportingMetricReader: metrics export failed (error Error: 12 UNIMPLEMENTED: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService)\n at doExport (/Users/kamlakara/Documents/grafana_tempo/s3_new/src/node_modules/@opentelemetry/sdk-metrics/build/src/export/PeriodicExportingMetricReader.js:75:23)\n at process._tickCallback (internal/process/next_tick.js:68:7)","message":"PeriodicExportingMetricReader: metrics export failed (error Error: 12 UNIMPLEMENTED: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService)","name":"Error"}

It looks like this error is coming from @opentelemetry/sdk-metrics. I would recommend looking at the OTel SDK docs or asking in the OTel community :)
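One thing that might be worth ruling out (this is an assumption on my side, not a confirmed fix): the collector config earlier in the thread only enables the OTLP gRPC protocol, while URLs with a /v1/metrics path belong to OTLP over HTTP; gRPC uses port 4317 with no path, and OTLP/HTTP normally listens on 4318. A sketch of enabling both protocols on the collector's receiver:

receivers:
  otlp:
    protocols:
      grpc:                         # OTLP/gRPC on 4317, used by the -grpc exporters (no /v1/* path)
      http:
        endpoint: 0.0.0.0:4318      # OTLP/HTTP, serves /v1/traces and /v1/metrics (set explicitly, since older collector releases defaulted to a different port)

With that in place, the metric exporter could target http://localhost:4318/v1/metrics, or keep port 4317 and drop the path when using the gRPC exporter.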
