Huge count of small S3 objects (a few bytes to 500 KB)

Hi Team,

I am seeing a huge number of small objects in S3 from Loki logs, ranging from a few bytes to 500 KB. We deep-archive these S3 objects, and the large object count is driving up costs significantly. We want to increase the average object size to at least 2-3 MB to bring the cost down. Our log volume is high, so chunks are being flushed once full, and Loki's metrics show chunk sizes of 2-3 MB, but the average object size in S3 does not reflect this. We are using Loki's default compression algorithm, Snappy. Please suggest configuration changes to increase the object size, and any Loki metrics worth monitoring.

Current configuration:

ingester:
  max_chunk_age: 45m
  chunk_target_size: 2621440
  flush_check_period: 45s
compactor:
  compaction_interval: 20m
  max_compaction_parallelism: 3

The target size is not precise, and the object size you see in S3 won't match the in-memory chunk size because chunks are compressed (with Snappy, in your case) before they are flushed. Chunks that are flushed early because of chunk_idle_period or max_chunk_age can also end up far smaller than the target.
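On the metrics question: assuming your Loki version exposes them, the `loki_ingester_chunks_flushed_total` counter (broken out by a `reason` label in recent versions) tells you whether chunks are being flushed because they are full or because of idle/age timeouts, and the `loki_ingester_chunk_size_bytes` histogram records the compressed size of chunks at flush time, which should track your S3 object sizes much more closely than the in-memory figures.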

If you are concerned about the number of objects, I'd recommend increasing max_chunk_age and chunk_idle_period so slower streams have more time to fill their chunks before being flushed; see the sketch below. How much that would help is hard to say, though.
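A minimal sketch of the direction I mean, with illustrative values rather than tuned recommendations (whether you can afford them depends on ingester memory and how long you can tolerate data sitting unflushed):

ingester:
  chunk_target_size: 2621440   # unchanged, ~2.5 MB compressed target
  max_chunk_age: 2h            # illustrative: was 45m; gives slow streams longer to fill a chunk
  chunk_idle_period: 1h        # illustrative: keep idle streams in memory longer before flushing
  flush_check_period: 45s      # unchanged

The trade-off is that chunks live longer in ingester memory, so memory usage and replay time after a restart go up. I'd raise the values gradually while watching the flush-reason and chunk-size metrics mentioned above.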