Hi!
I’m having a problem using Tempo and Grafana. The search sometimes works and sometimes it doesn’t. For example, when I search for traces in a specific time range - from 13:00 to 13:05 - Grafana doesn’t find anything. But when I searched a wider period - for example from 12:00 to 14:00 - Grafana showed me results from the 13:00 to 13:05 window.
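To check whether Tempo itself returns results for the narrow window (ruling Grafana out), its search API can be queried directly. A minimal sketch, assuming Tempo is reachable on localhost at the configured port 3200 and the times are UTC; `/api/search` takes `start`/`end` as unix-epoch seconds:

```shell
# Build a Tempo search request for the problematic 13:00-13:05 window.
START=$(date -u -d "2024-03-22T13:00:00Z" +%s)  # window start, unix seconds
END=$(date -u -d "2024-03-22T13:05:00Z" +%s)    # window end, unix seconds
URL="http://localhost:3200/api/search?start=${START}&end=${END}&limit=20"
echo "$URL"
```

Running `curl -s "$URL"` against that URL shows whether Tempo returns traces for the window; if it does, the problem is on the Grafana side, and if not, it points at the backend.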
I looked at the Tempo logs and found a few errors appearing intermittently. I’m not sure these errors affect the search issue, but I haven’t found any explanation for why they occur.
First error:
level=error ts=2024-03-22T06:36:01.37975115Z caller=poller.go:156 msg="failed to poll or create index for tenant" tenant=single-tenant err="open /tmp/tempo/blocks/single-tenant/5c1805c7-1fa1-462a-b5fb-b21c896f206b: no such file or directory"
I’m running Tempo and Grafana in Docker, and this path is a mounted volume. But I’m not sure that anything other than Tempo could be touching it.
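For reference, the volume is mounted roughly like this (a sketch of the compose layout; the service name, image tag, and volume name are approximations, not my exact file):

```yaml
# Hypothetical docker-compose excerpt showing how /tmp/tempo is backed by a volume.
services:
  tempo:
    image: grafana/tempo:latest
    command: ["-config.file=/etc/tempo.yaml"]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
      - tempo-data:/tmp/tempo   # named volume, so WAL and blocks survive restarts

volumes:
  tempo-data:
```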
Second error:
level=error ts=2024-03-22T12:42:48.49124478Z caller=frontend_processor.go:71 msg="error processing requests" address=127.0.0.1:9095 err="rpc error: code = Canceled desc = context canceled"
This error occurs every time I search for something via Grafana Explore. What is Tempo trying to do here?
And the last error:
level=error ts=2024-03-22T13:56:10.910465446Z caller=rate_limited_logger.go:27 msg="pusher failed to consume trace data" err="DoBatch: InstancesCount <= 0"
Here is my Tempo config:
server:
  http_listen_port: 3200

distributor:
  receivers:
    otlp:
      protocols:
        http:
        grpc:

ingester:
  max_block_duration: 5m

compactor:
  compaction:
    block_retention: 168h

metrics_generator:
  registry:
    external_labels:
      source: tempo
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: local
    wal:
      path: /tmp/tempo/wal
    local:
      path: /tmp/tempo/blocks

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]
Can someone help me with these errors?