Hi Grafana team,
I have a question regarding Grafana Tempo. I have Tempo Distributed deployed on a Kubernetes cluster using the Helm chart: https://github.com/grafana/helm-charts/tree/main/charts/tempo-distributed.
I’m using a Google Cloud Storage bucket as the backend. I need to fetch tracing data for the Nginx controller and then use that data in a k6 load test. My software generates a large amount of data (around a million requests per day). I’m trying to fetch the data through the Tempo API, but I’m running into limits on what the response returns. Could you please advise me on how to retrieve data from the backend without hitting these constraints? This is what I’m trying:
curl -G -s -u $USER:$PASSWORD https://grafana-tempo.blabla.com/api/search --data-urlencode 'q={span.http.server_name="myapp.com"}' --data-urlencode 'limit=100000000' --data-urlencode 'start=1701424800' --data-urlencode 'end=1701788400' | jq
API response:
  "metrics": {
    "inspectedBytes": "1639057",
    "totalBlocks": 34,
    "completedJobs": 34,
    "totalJobs": 34,
    "totalBlockBytes": "1639057"
  }
}
But I’m only getting traces back for 2-3 days, not for the whole 01.12 - 05.12 range.
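A workaround I’m considering (not verified yet) is to split the range into one-day windows and merge the results myself. A rough sketch, assuming the limit parameter applies per request and reusing the same host and credentials as above:

#!/usr/bin/env bash
# Sketch: query Tempo one day at a time for 01.12 - 05.12 and merge the trace lists.
set -euo pipefail

TEMPO_URL="https://grafana-tempo.blabla.com"
QUERY='{span.http.server_name="myapp.com"}'
START=1701388800   # 2023-12-01 00:00:00 UTC
END=1701734400     # 2023-12-05 00:00:00 UTC
STEP=86400         # one day per request

for ((t=START; t<END; t+=STEP)); do
  curl -G -s -u "$USER:$PASSWORD" "$TEMPO_URL/api/search" \
    --data-urlencode "q=$QUERY" \
    --data-urlencode "limit=5000" \
    --data-urlencode "start=$t" \
    --data-urlencode "end=$((t+STEP))"
done | jq -s '[.[].traces[]?] | unique_by(.traceID)' > traces.json

Is something like this the recommended way to do a "full" export, or is there a better API for it?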
Also, when I search without any time range:
curl -G -s -u $USER:$PASSWORD https://grafana-tempo.blabla.com/api/search --data-urlencode 'q={span.http.server_name="myapp.com"}' | jq
I get even fewer results than with the time range set.
API response:
  "metrics": {
    "inspectedBytes": "386994",
    "completedJobs": 1,
    "totalJobs": 1
  }
}
I know about the 168-hour search window limit. But are there also limits on how many traces a search returns? Is there any way to search traces directly from the backend?
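For context on what I do with the results: once I have the trace IDs from the search, my plan is to fetch every full trace one by one and feed those into the k6 test. I’m assuming /api/traces/<traceID> is the right endpoint for that:

# Sketch: pull the full trace for every ID found by the search above.
# traces.json is the merged search output from the sketch above.
mkdir -p traces
jq -r '.[].traceID' traces.json | while read -r id; do
  curl -s -u "$USER:$PASSWORD" \
    "https://grafana-tempo.blabla.com/api/traces/$id" > "traces/$id.json"
done

With around a million requests per day this feels very heavy, so if there is a bulk/backend export path I’d much rather use that.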
Here are some values from the config file:
compactor:
  compaction:
    block_retention: 336h
    compacted_block_retention: 1h
    compaction_cycle: 30s
    compaction_window: 1h
    max_block_bytes: 107374182400
    max_compaction_objects: 6000000
    max_time_per_tenant: 5m
    retention_concurrency: 10
    v2_in_buffer_bytes: 5242880
    v2_out_buffer_bytes: 20971520
    v2_prefetch_traces_count: 1000
  ring:
    kvstore:
      store: memberlist
distributor:
  log_received_spans:
    enabled: true
    filter_by_status_error: false
    include_all_attributes: false
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  ring:
    kvstore:
      store: memberlist
ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 3
    tokens_file_path: /var/tempo/tokens.json
memberlist:
  abort_if_cluster_join_fails: false
  bind_addr: []
  bind_port: 7946
  gossip_interval: 1s
  gossip_nodes: 2
  gossip_to_dead_nodes_time: 30s
  join_members:
  - dns+grafana-tempo-gossip-ring:7946
  leave_timeout: 5s
  left_ingesters_timeout: 5m
  max_join_backoff: 1m
  max_join_retries: 10
  min_join_backoff: 1s
  node_name: ""
  packet_dial_timeout: 5s
  packet_write_timeout: 5s
  pull_push_interval: 30s
  randomize_node_name: true
  rejoin_interval: 0s
  retransmit_factor: 2
  stream_timeout: 10s
multitenancy_enabled: false
overrides:
  metrics_generator_processors: []
  per_tenant_override_config: /runtime-config/overrides.yaml
querier:
  frontend_worker:
    frontend_address: grafana-tempo-query-frontend-discovery:9095
  max_concurrent_queries: 20
  search:
    external_backend: null
    external_endpoints: []
    external_hedge_requests_at: 8s
    external_hedge_requests_up_to: 2
    prefer_self: 10
    query_timeout: 300s
  trace_by_id:
    query_timeout: 10s
query_frontend:
  max_retries: 2
  search:
    concurrent_jobs: 1000
    target_bytes_per_job: 104857600
  trace_by_id:
    hedge_requests_at: 2s
    hedge_requests_up_to: 2
    query_shards: 50
server:
  grpc_server_max_recv_msg_size: 4194304
  grpc_server_max_send_msg_size: 4194304
  http_listen_port: 3100
  http_server_read_timeout: 30s
  http_server_write_timeout: 30s
  log_format: logfmt
  log_level: info
storage:
  trace:
    backend: gcs
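To rule out a limit I’m simply not seeing, I was also going to dump the effective configuration and runtime overrides from the query-frontend. I believe Tempo exposes /status/config and /status/runtime_config for this, but please correct me if that’s wrong; the namespace and service name below are just guesses based on my release name:

# Sketch: check which search limits Tempo is actually running with.
kubectl -n tempo port-forward svc/grafana-tempo-query-frontend 3100:3100 &
sleep 2
curl -s http://localhost:3100/status/config | grep -B2 -A8 'search:'
curl -s http://localhost:3100/status/runtime_config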
I will be very grateful for your help!