Grafana Loki is not deleting old chunks from Azure Blob Storage

Loki version: v2.8.4
AKS version: 1.23.x
Storage backend: Azure Blob Storage

From reading the compactor documentation, I get the impression that the compactor is capable of deleting old chunks from blob storage.
However, I see that old Loki chunks are not being deleted from blob storage even though the necessary configuration is in place.
Could someone be kind enough to tell me what could be wrong in my configuration? When I inspect the compactor logs, I see that the marker file is being created. I also see a lot of API calls to GET /loki/api/v1/delete.
However, I don't see any POST calls to /loki/api/v1/delete, which gives me the impression that no deletion is happening.
I also confirmed that chunks from several months ago are still sitting in my blob storage.

    auth_enabled: false

    server:
      http_listen_port: {{ .Values.loki.containerPorts.http }}
      log_level: debug
    common:
      compactor_address: http://{{ include "grafana-loki.compactor.fullname" . }}:{{ .Values.compactor.service.ports.http }}
      storage:
        azure:
          account_name: abc 
          account_key: abc
          container_name: abc
          use_managed_identity: false
          request_timeout: 0 


    distributor:
      ring:
        kvstore:
          store: memberlist

    memberlist:
      join_members:
        - {{ include "grafana-loki.gossip-ring.fullname" . }}

    ingester:
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      chunk_idle_period: 2h                # Any chunk not receiving new logs in this time will be flushed
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_retain_period: 1m
      max_chunk_age: 2h                     # All chunks will be flushed when they hit this age, default is 1h
      max_transfer_retries: 0
      autoforget_unhealthy: true
      wal:
        dir: {{ .Values.loki.dataDir }}/wal

    limits_config:
      retention_period: 48h
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      max_cache_freshness_per_query: 10m
      split_queries_by_interval: 15m
      per_stream_rate_limit: 10MB
      per_stream_rate_limit_burst: 20MB
      ingestion_rate_mb: 100
      ingestion_burst_size_mb: 30

    schema_config:
      configs:
      - from: 2020-10-24
        store: boltdb-shipper
        object_store: azure
        schema: v11
        index:
          prefix: index_
          period: 24h
        chunks:
            period: 24h

    storage_config:    
      boltdb_shipper:
        shared_store: azure
        active_index_directory: {{ .Values.loki.dataDir }}/loki/index
        cache_location: {{ .Values.loki.dataDir }}/loki/cache
        cache_ttl: 168h
        {{- if .Values.indexGateway.enabled }}
        index_gateway_client:
          server_address: {{ (printf "dns:///%s:9095" (include "grafana-loki.index-gateway.fullname" .)) }}
        {{- end }}
      filesystem:
        directory: {{ .Values.loki.dataDir }}/chunks
      index_queries_cache_config:
        {{- if .Values.memcachedindexqueries.enabled }}
        memcached:
          batch_size: 100
          parallelism: 100
        memcached_client:
          consistent_hash: true
          addresses: dns+{{ include "grafana-loki.memcached-index-queries.host" . }}
          service: http
        {{- end }}

    chunk_store_config:
      max_look_back_period: 2d
      {{- if .Values.memcachedchunks.enabled }}
      chunk_cache_config:
        memcached:
          batch_size: 100
          parallelism: 100
        memcached_client:
          consistent_hash: true
          addresses: dns+{{ include "grafana-loki.memcached-chunks.host" . }}
      {{- end }}
      {{- if .Values.memcachedindexwrites.enabled }}
      write_dedupe_cache_config:
        memcached:
          batch_size: 100
          parallelism: 100
        memcached_client:
          consistent_hash: true
          addresses: dns+{{ include "grafana-loki.memcached-index-writes.host" . }}
      {{- end }}

    table_manager:
      retention_deletes_enabled: true
      retention_period: 2d

    query_range:
      align_queries_with_step: true
      max_retries: 5
      cache_results: true
      results_cache:
        cache:
          {{- if .Values.memcachedfrontend.enabled }}
          memcached_client:
            consistent_hash: true
            addresses: dns+{{ include "grafana-loki.memcached-frontend.host" . }}
            max_idle_conns: 16
            timeout: 500ms
            update_interval: 1m
          {{- else }}
          enable_fifocache: true
          fifocache:
            max_size_items: 1024
            validity: 24h
          {{- end }}
    {{- if not .Values.queryScheduler.enabled }}
    frontend_worker:
      frontend_address: {{ include "grafana-loki.query-frontend.fullname" . }}:{{ .Values.queryFrontend.service.ports.grpc }}
    {{- end }}

    frontend:
      log_queries_longer_than: 5s
      compress_responses: true
      tail_proxy_url: http://{{ include "grafana-loki.querier.fullname" . }}:{{ .Values.querier.service.ports.http }}

    compactor:
      working_directory: {{ .Values.loki.dataDir }}/retention
      shared_store: azure
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150

    ruler:
      storage:
        type: local
        local:
          directory: {{ .Values.loki.dataDir }}/conf/rules
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      alertmanager_url: http://abc.bdc.com/alertmanager
      external_url: https://abc.bdc.com/alertmanager

Any help would be very much appreciated.

I don’t see anything wrong, except that you also have the table manager enabled. With boltdb-shipper, retention is meant to be handled by the compactor, so I’d try disabling the table manager and see if that helps.
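
In case it helps, here is a rough sketch of the retention-relevant pieces with the table_manager block dropped entirely, keeping your boltdb-shipper/azure setup as-is. The values are copied from your post, so treat it as an illustration rather than a verified fix:

    compactor:
      working_directory: {{ .Values.loki.dataDir }}/retention
      shared_store: azure                    # must match the object store that holds the chunks
      compaction_interval: 10m
      retention_enabled: true                # compactor marks expired chunks and deletes them
      retention_delete_delay: 2h             # marked chunks are removed only after this delay
      retention_delete_worker_count: 150

    limits_config:
      retention_period: 48h                  # global retention applied by the compactor

Also make sure only a single compactor replica is running, since the compactor is meant to run as a singleton.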

If not, perhaps enable debug logging on the compactor container and see if there is anything helpful in there.

Hey, can we delete old chunks using the compactor?

I have the same issue. Did you manage to fix it?

I was not able to delete old chunks using the compactor; maybe I didn’t dig deep enough into it.
Instead, I used Azure’s Lifecycle Management policy to delete the old chunks as required. I hope this helps.
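
For anyone who wants to go the same route, here is a sketch of the kind of lifecycle rule I mean. It is only an example: the 30-day cutoff, the rule name, and the abc/fake/ prefix (the container name from the post plus Loki’s default single-tenant "fake" prefix for chunk objects) are assumptions you would need to adapt to your own storage account:

    {
      "rules": [
        {
          "enabled": true,
          "name": "delete-old-loki-chunks",
          "type": "Lifecycle",
          "definition": {
            "actions": {
              "baseBlob": {
                "delete": { "daysAfterModificationGreaterThan": 30 }
              }
            },
            "filters": {
              "blobTypes": [ "blockBlob" ],
              "prefixMatch": [ "abc/fake/" ]
            }
          }
        }
      ]
    }

One thing to watch out for: keep the prefix scoped to the chunk objects only, so the policy does not also remove index files that the queriers still need.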

Please let me know if you find a solution using the compactor.

Thanks…

I would also be very interested in this, since I am trying to configure the compactor to delete older logs in my Azure Blob Storage.