```yaml
server:
  http_listen_port: 3200

distributor:
  receivers:            # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:             # the receivers all come from the OpenTelemetry collector. more configuration information can
      protocols:        # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:
        grpc:           # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m   # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:              # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: s3            # backend configuration to use
    wal:
      path: /tmp/tempo/wal # where to store the wal locally
    s3:
      bucket: tempo-grafana  # how to store data in s3
      endpoint: s3-ap-south-1.amazonaws.com
      insecure: true

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]  # enables metrics generator
```
To use service graphs, you need to enable the metrics generator. Can you double-check and make sure that the metrics generator is enabled? See the config steps here.
If you are using Grafana older than 9.0.4, service graphs were hidden behind the tempoServiceGraph feature toggle, so you need to enable that.
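For example, on a docker-compose deployment like the one referenced in the config above, the toggle can be switched on through Grafana's GF_FEATURE_TOGGLES_ENABLE environment variable. This is only a sketch; the service name and image tag below are illustrative, not taken from this setup:

```yaml
# sketch only: service name and image tag are illustrative
services:
  grafana:
    image: grafana/grafana:9.0.3
    environment:
      # enables the hidden service graph view on Grafana < 9.0.4
      - GF_FEATURE_TOGGLES_ENABLE=tempoServiceGraph
```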
Let me put my use case here so that you can give me a proper solution.
I am instrumenting a Node.js app with OpenTelemetry auto-instrumentation.
Here is the Node.js code:
```js
// Imports assumed from the OpenTelemetry Node.js getting-started guide; the
// gRPC exporter packages are a guess based on the 4317 endpoints and the
// gRPC error reported further down.
const opentelemetry = require("@opentelemetry/sdk-node");
const { getNodeAutoInstrumentations } = require("@opentelemetry/auto-instrumentations-node");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-grpc");
const { OTLPMetricExporter } = require("@opentelemetry/exporter-metrics-otlp-grpc");
const { PeriodicExportingMetricReader } = require("@opentelemetry/sdk-metrics");

const sdk = new opentelemetry.NodeSDK({
  traceExporter: new OTLPTraceExporter({
    // optional - default url is http://localhost:4318/v1/traces
    url: "http://localhost:4317/v1/traces",
    // optional - collection of custom headers to be sent with each request, empty by default
    headers: {},
  }),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({
      url: "http://localhost:4317/v1/metrics", // url is optional and can be omitted - default is http://localhost:4318/v1/metrics
      headers: {}, // an optional object containing custom headers to be sent with each request
      concurrencyLimit: 1, // an optional limit on pending requests
    }),
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```
```yaml
server:
  http_listen_port: 3200

distributor:
  receivers:            # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:             # the receivers all come from the OpenTelemetry collector. more configuration information can
      protocols:        # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:
        grpc:           # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  max_block_duration: 5m   # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally

compactor:
  compaction:
    block_retention: 1h    # overall Tempo trace retention. set for demo purposes

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: s3            # backend configuration to use
    wal:
      path: /tmp/tempo/wal # where to store the wal locally
    s3:
      bucket: tempo        # how to store data in s3
      endpoint: minio:9000
      access_key: tempo
      secret_key: supersecret
      insecure: true

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]  # enables metrics generator
```
Prometheus config file:
Hi @sagarmeadepu, can you check whether you have traces_spanmetrics_latency, traces_spanmetrics_calls_total, and traces_spanmetrics_size_total in your Prometheus? If these metrics are not coming in, it means that they are either not being generated or not being received.
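One common reason these metrics never make it into Prometheus is that the remote-write receiver is not enabled on the Prometheus side, so Tempo's remote_write requests are rejected. Below is a minimal docker-compose sketch of a Prometheus service that can accept them; the service name, image tag, and file paths are assumptions, not taken from this setup:

```yaml
# sketch only: service name, image tag, and paths are illustrative
services:
  prometheus:
    image: prom/prometheus:latest
    command:
      - --config.file=/etc/prometheus.yaml
      - --web.enable-remote-write-receiver   # lets Tempo's metrics_generator push via remote_write
      - --enable-feature=exemplar-storage    # needed because the config sets send_exemplars: true
    volumes:
      - ./prometheus.yaml:/etc/prometheus.yaml
    ports:
      - "9090:9090"
```

On Prometheus versions older than 2.33 the equivalent flag is --enable-feature=remote-write-receiver.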
Unfortunately it's hard for me to replicate this setup without investing significant time.
The application exposes metrics at the '/metrics' URL. In my scenario I would like to export them via OpenTelemetry to the "http://localhost:4317/v1/metrics" URL.
I have tried changing some of the settings, but I am still getting this error: {"stack":"Error: PeriodicExportingMetricReader: metrics export failed (error Error: 12 UNIMPLEMENTED: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService)\n    at doExport (/Users/kamlakara/Documents/grafana_tempo/s3_new/src/node_modules/@opentelemetry/sdk-metrics/build/src/export/PeriodicExportingMetricReader.js:75:23)\n    at process._tickCallback (internal/process/next_tick.js:68:7)","message":"PeriodicExportingMetricReader: metrics export failed (error Error: 12 UNIMPLEMENTED: unknown service opentelemetry.proto.collector.metrics.v1.MetricsService)","name":"Error"}
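The "unknown service opentelemetry.proto.collector.metrics.v1.MetricsService" part of that error means the endpoint the metric exporter is calling (Tempo's OTLP port 4317) does not implement the OTLP metrics service; Tempo's OTLP receiver only accepts traces. One common arrangement is to point the app at an OpenTelemetry Collector instead and let it route traces to Tempo and metrics to Prometheus. A minimal collector config sketch follows; the hostnames tempo and prometheus are assumptions based on the docker-compose service names used above, and the prometheusremotewrite exporter requires the collector contrib distribution:

```yaml
# otel-collector sketch: hostnames are assumptions based on the
# docker-compose service names used elsewhere in this thread
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp/tempo:
    endpoint: tempo:4317        # Tempo's OTLP gRPC receiver (traces only)
    tls:
      insecure: true
  prometheusremotewrite:
    endpoint: http://prometheus:9090/api/v1/write

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```

Alternatively, since the application already exposes a /metrics endpoint, Prometheus could simply scrape that endpoint directly instead of the metrics going through OTLP at all.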