Tempo sometimes has autocomplete and sometimes none

I am using Tempo. Sometimes it has autocomplete and it works well. Other times it does not work at all, or only barely works. Any suggestions on how to make it work reliably? Thanks.

Hi, all auto-complete comes from recently received data in the ingesters. This is controlled by complete_block_timeout, which defaults to 15 minutes. This means that if no data has been received in the last 15 minutes, auto-complete will no longer return any values. This is common in clusters that receive variable workloads, or in test setups.
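If you want to verify this, a rough sketch of a quick check (assuming Tempo's HTTP port 3200 is reachable at localhost:3200; adjust to your setup) is to call the search-tags endpoint that auto-complete draws from, right after sending some traces:

  # List the attribute names Tempo currently knows about (ingesters + recent blocks).
  # An empty result right after sending traces points at an ingest problem
  # rather than an editor problem.
  curl -s http://localhost:3200/api/search/tags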

If this doesn’t explain it, we can dig further and look for errors and configuration issues.

Hello. Thank you for your two replies.

I definitely use braces. The timeout is 15 minutes. I just sent traces to it and tried again. Still no autocomplete.

Here are two screenshots. The first shows the autocomplete; the second shows the traces and their attributes. Also, I tried { span. } and pressed Ctrl+Space after typing the ‘.’. Again, no useful autocomplete.

Maybe I have something wrong in my docker compose files. Does anything come to mind to look at? I will include those as well.

Thank you very much.

Kareem


docker-compose.yaml:

version: "3"
services:

  loki:
    image: grafana/loki:latest
    command: "-config.file=/etc/loki/config.yaml"
    ports:
      - 3101:3100
    volumes:
      - ./loki.yaml:/etc/loki/config.yaml
    depends_on:
      - minio

  promtail:
    image: grafana/promtail:latest
    command: -config.file=/etc/promtail/config.yaml
    volumes:
      - ./promtail.yaml:/etc/promtail/config.yaml
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - loki

  minio:
    image: minio/minio:latest
    command: server /data
    volumes:
      - ./data/minio-data:/data
    environment:
      - MINIO_ROOT_USER=grafana
      - MINIO_ROOT_PASSWORD=grafana_secret
    ports:
      - "9000:9000"
      - "9001:9001"
    entrypoint:
      - sh
      - -euc
      - mkdir -p /data/tempo /data/loki-data /data/loki-ruler && /opt/bin/minio server /data --console-address ':9001'

  # To eventually offload to Tempo...
  tempo:
    image: grafana/tempo:latest
    command: [ "-config.file=/etc/tempo.yaml" ]
    volumes:
      - ./tempo.yaml:/etc/tempo.yaml
      - ./tempo-data:/tmp/tempo
    ports:
      - "3200:3200"   # tempo
      - "4317:4317"   # otlp grpc
      - "4318:4318"   # otlp http
    depends_on:
      - minio

  # And put them in an OTEL collector pipeline...
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: [ "--config=/etc/otel-collector.yaml" ]
    volumes:
      - ./otel-collector.yaml:/etc/otel-collector.yaml
    depends_on:
      - tempo

  prometheus:
    image: prom/prometheus:latest
    command:
      - --config.file=/etc/prometheus.yaml
      - --web.enable-remote-write-receiver
      - --enable-feature=exemplar-storage
    volumes:
      - ./prometheus.yaml:/etc/prometheus.yaml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    volumes:
      - ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
      - ./grafana-plugins-bundled:/usr/share/grafana/plugins-bundled
      - ./grafana-data:/var/lib/grafana
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
      - GF_FEATURE_TOGGLES_ENABLE=traceqlEditor
      - GF_LOG_MODE=console
      - GF_LOG_LEVEL=info
    ports:
      - "3000:3000"

volumes:
  - type: bind
    source: ./minio-data
    target: /data
    bind:
      propagation: shared

tempo.yaml:

server:
  http_listen_port: 3200

distributor:
  receivers:             # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:              # the receives all come from the OpenTelemetry collector. more configuration information can
      protocols:         # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver
        thrift_http:     #
        grpc:            # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:

ingester:
  # amount of time a trace must be idle before flushing it to the wal. (default: 10s)
  trace_idle_period: 20s
  # how often to sweep all tenants and move traces from live -> wal -> completed blocks. (default: 10s)
  flush_check_period: 3s

  # Traces are written to blocks. Blocks are cut when they reach a size or length of time.
  # Traces are first stored in the Write Ahead Log (WAL).
  # After a threshold, traces are compacted to a block and written to the backend storage.

  # maximum size of a block before cutting it (default: 524288000 = 500MB)
  max_block_bytes: 524288000
  # maximum length of time before cutting a block (default: 30m)
  max_block_duration: 30m
  # duration to keep blocks in the ingester after they have been flushed (default: 15m)
  complete_block_timeout: 15m
  # Flush all traces to backend when ingester is stopped
  flush_all_on_shutdown: false
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1

# Query Frontend configuration block
query_frontend:
  search:
    # query_backend_after and query_ingesters_until together control where the query-frontend searches for traces.
    # Time ranges before query_ingesters_until will be searched in the ingesters only.
    # Time ranges after query_backend_after will be searched in the backend/object storage only.
    # Time ranges from query_backend_after through query_ingesters_until will be queried from both locations.
    # query_backend_after must be less than or equal to query_ingesters_until.
    # (default: 15m)
    query_backend_after: 15m
    # (default: 30m)
    query_ingesters_until: 30m

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:

compactor:
  compaction:
    # Optional. Duration to keep blocks. Default is 14 days (336h).
    block_retention: 336h
    # Optional. Duration to keep blocks that have been compacted elsewhere. Default is 1h.
    compacted_block_retention: 48h

storage:
  trace:
    backend: s3                   # backend configuration to use
    wal:
      path: /tmp/tempo/wal        # where to store the wal locally
    s3:
      bucket: tempo               # how to store data in s3
      endpoint: minio:9000
      access_key: grafana
      secret_key: grafana_secret
      insecure: true

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]  # enables metrics generator

Thank you for the very detailed information. I don’t see anything obviously missing from the configuration.

Here are a few more ideas to explore:

  1. Is Tempo or Grafana logging any errors?

  2. Can you check which versions of Grafana and Tempo you are running? Instead of latest, maybe try pinned versions like grafana/tempo:2.2.2 and grafana/grafana:10.1.1.

  3. Can you try calling the Tempo auto-complete API directly? We can start with /api/search/tags, which returns the list of attribute names and doesn’t require any parameters. Before calling it, first verify there is data stored in the ingester area, which is /tmp/tempo/wal based on the configuration above (see the example commands after this list).
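For points 2 and 3, a rough sketch of those checks, run from the directory containing the compose file; service names, ports, and paths are taken from the files above, and service.name in the last call is just an example attribute:

  # Which image versions are the containers actually running? (point 2)
  docker compose images tempo grafana

  # Is there any data in the ingester's WAL directory? (point 3)
  # ./tempo-data is bind-mounted to /tmp/tempo in the compose file.
  ls -l ./tempo-data/wal

  # Call the auto-complete API directly: list known attribute names...
  curl -s http://localhost:3200/api/search/tags

  # ...and, for one of the returned names, list its values.
  curl -s http://localhost:3200/api/search/tag/service.name/values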

Hello,

Thank you very much for the help.

  1. No errors.
  2. I used the suggested versions. Now autocomplete works!! So very nice.

Again, thank you very much.
