"fake" directory inside Loki chunks has grown to 29GB in one day

I am using Loki 3.6.4 with TSDB and filesystem storage, and I have set a retention of 7 days. What I am observing is that the Loki data storage, mainly the fake folder, is growing very fast and not shrinking: it reached almost 29GB in one day of using Loki. I am not sure if compaction is running.
27G /mnt/loki-data-disk/tmp/loki/chunks/fake

Below is my Loki configuration:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info
common:
  instance_addr: 127.0.0.1
  path_prefix: /mnt/loki-data-disk/tmp/loki
  storage:
    filesystem:
      chunks_directory: /mnt/loki-data-disk/tmp/loki/chunks
      rules_directory: /mnt/loki-data-disk/tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory


limits_config:
  max_streams_per_user: 25000
  per_stream_rate_limit: 512M         # Increased again
  per_stream_rate_limit_burst: 1G     # Doubled
  cardinality_limit: 500000           # Higher log label variety
  ingestion_burst_size_mb: 2000       # Higher ingestion burst
  ingestion_rate_mb: 15000            # Higher ingestion speed
  max_entries_limit_per_query: 300000 # Increased entry limit
  query_timeout: 900s                 # 15 minutes timeout
  max_query_parallelism: 64           # Handle more chunks simultaneously
  split_queries_by_interval: 10m      # Smaller slices = faster + less failure
  max_streams_matchers_per_query: 20000
  max_query_series: 100000            # More series allowed
  retention_period: 168h             # 7 days
  unordered_writes: true              # Handles out-of-order log timestamps
  metric_aggregation_enabled: true
  enable_multi_variant_queries: true

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h


ruler:
  alertmanager_url: http://localhost:9093

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/analytics/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
#analytics:
#  reporting_enabled: false

Please suggest what needs to be done.

  1. Try adding delete_request_store: filesystem and see if that helps.
  2. Check your Loki logs and filter for compactor, do you see any error?
  3. Check your filesystem storage, do you see chunk files older than 7 days?
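On point 1: as far as I know, `retention_period` under `limits_config` only takes effect when the compactor runs with retention enabled, which is off by default. If your config has no `compactor` block at all, retention will never delete anything. A minimal sketch (the `working_directory` path is my assumption, derived from your `path_prefix`; adjust as needed):

```yaml
compactor:
  working_directory: /mnt/loki-data-disk/tmp/loki/compactor
  compaction_interval: 10m
  retention_enabled: true            # off by default; required for retention_period to apply
  retention_delete_delay: 2h
  delete_request_store: filesystem   # required when retention_enabled is true
```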
  1. Added delete_request_store: filesystem to the config, but no result; no compactions have been triggered.
  2. I checked the Loki logs; I see only one message when Loki starts, "starting compactor", and after that I don't see anything about the compactor.
  3. There are no chunk files older than 7 days; the files shown are only 24 hours old.

Is this expected behavior from Loki when using the filesystem as storage, i.e. chunks only get compacted once retention is reached? Please let me know.

I am not sure what you mean, actually. Chunk files never get compacted; they get removed after retention is reached.
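To see whether removal is actually happening, the check from step 3 above can be sketched from the shell. This assumes the chunk path from your config; any file matched by `-mtime +7` has survived past the 7-day retention:

```shell
# Sketch: spot-check for chunk files older than the 7-day retention.
# CHUNK_DIR is the chunks path from the config above; adjust if yours differs.
CHUNK_DIR="${CHUNK_DIR:-/mnt/loki-data-disk/tmp/loki/chunks/fake}"
if [ -d "$CHUNK_DIR" ]; then
  # Any output here means retention is not deleting old chunks.
  find "$CHUNK_DIR" -type f -mtime +7
  # Total size of the tenant directory ("fake" is the tenant ID when auth_enabled: false).
  du -sh "$CHUNK_DIR"
fi
```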
