Number of blob storage API calls grows with used storage?

I have had Tempo running for about a week in a dev environment and have collected about 125 MB of traces. The installation is the single-instance Helm chart with everything at defaults, configured to use Azure blob storage.
On the Azure side I see that the number of transactions grows with the storage used (last 7 days):

Graph of storage growth in last 7 days:

Is this normal? If I grow to 2 GB, will I hit rate limiting on the Azure blob storage API?

Technically, yes, the more data/blocks you have stored the more Tempo will query the backend. However, that does seem like it may be excessive. Is that 1k GET requests per second? In our largest cluster with over 100TB of traces we average ~150 GET requests per second and that’s in a cluster that is being constantly queried.

  • Are you running the vulture or any other tool that is constantly querying the backend?
  • What version of Tempo are you on?
  • How long is your blocklist? (tempodb_blocklist_length)

I had a 7-day interval set in Grafana for the Azure storage metrics, which I think skewed the big-picture graph. My mistake: the requests were not per second, it was 746K transactions in total over 7 days. 746K over 7 days works out to roughly 1.2 API calls per second. If I filter to the last 15 minutes, I get:

The numbers look a bit choppy. I am not sure how often Azure samples on their end. (I am using this dashboard: Microsoft Azure Storage | Grafana Labs.)

Our blocklist length is: tempodb_blocklist_length{tenant="single-tenant"} 158.

One detail I noticed: each GET is actually two requests, GetBlob and GetBlobProperties, so it counts as two transactions.
In the last 15 minutes there were about 3 requests per second. If I extrapolate 3 req/sec at 125 MB of storage linearly up to 10 GB, I would theoretically get 240 requests per second at 10 GB of storage, which seems way off.
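As a quick sanity check of those numbers (just back-of-the-envelope arithmetic using the figures quoted above, nothing Tempo-specific):

# Back-of-the-envelope check of the numbers above (not an official Tempo formula).

SECONDS_PER_WEEK = 7 * 24 * 3600  # 604800

# 746K Azure transactions over 7 days
total_transactions = 746_000
print(total_transactions / SECONDS_PER_WEEK)  # ~1.2 transactions per second

# Naive linear extrapolation: 3 req/s at 125 MB scaled up to 10 GB of stored traces.
# The observed 3 req/s already includes both GetBlob and GetBlobProperties calls.
observed_rate = 3.0             # requests per second at 125 MB
current_bytes = 125 * 1000**2   # 125 MB (decimal units, to match the 240 figure above)
target_bytes = 10 * 1000**3     # 10 GB
print(observed_rate * target_bytes / current_bytes)  # 240 requests per second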
Is it possible to approximate a rule-of-thumb formula for the number of GET requests against the total size of the stored trace blobs?

So in Tempo the most common GET requests are going to come from polling. One or two compactors scan the entire bucket, list every block, and create an object called index.json.gz; the other components just pull that directly. Listing every block includes pulling each block's meta.json file and, if that fails, looking for a compacted.meta.json file.

So the queries per second from listing the bucket are more closely related to your blocklist length than to your total bytes stored. To reduce them you can tune your compactors to keep the blocklist shorter, or tune polling to occur less frequently.
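If it helps, here is a very rough rule-of-thumb sketch based on the description above. It only models polling (one compactor reading every block's meta.json plus the other components each fetching index.json.gz once per poll cycle) and ignores query traffic, writes, and retries; the 5 minute poll interval is an assumed default.

# Rough estimate of the steady-state GET rate driven by polling (approximation only).
def polling_get_rate(blocklist_length, other_components, poll_interval_seconds=300):
    # One component lists every block's meta.json to build index.json.gz;
    # every other component just pulls index.json.gz once per cycle.
    gets_per_cycle = blocklist_length + other_components
    return gets_per_cycle / poll_interval_seconds

# With the numbers from this thread: 158 blocks and a handful of other components.
print(polling_get_rate(blocklist_length=158, other_components=5))  # ~0.5 GETs/second

Keep in mind each of those GETs can show up as roughly two Azure transactions (GetBlob + GetBlobProperties), as you observed.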

Adjusting polling can be a little tricky because there are a few components that depend on the polling cycles. I would recommend reading:
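If you do end up tuning it, I believe the relevant knob is the blocklist poll interval under the trace storage config, something along these lines (please verify the exact option name and default against the docs for your Tempo version):

storage:
  trace:
    blocklist_poll: 5m   # how often components refresh the blocklist; raising this reduces polling GETs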

Your blocklist is already quite small, so it may be tough to get it much smaller, but once you get into the thousands or tens of thousands of blocks there are a few tricks that can help. On top of simply scaling compactors, you can also try:

compactor:
  compaction:
    max_compaction_objects:   # increase this to make larger blocks
    compaction_cycle:         # reduce this if you are in a multitenant environment
    compaction_window: 5m     # bring this all the way down to 5m to allow more compactors to participate in newly created blocks
