What is the correct configuration for a multi-store environment?

I’m using Loki with multiple Ceph RGW-based object storage buckets. Log data is written to a different bucket every 10 days.

After I upgraded Loki from 2.8.x to 2.9.x, compaction for the periods after the upgrade has not been performed correctly. The compactor outputs the following log:

level=info ts=2024-02-07T01:53:02.885348134Z caller=compactor.go:683 msg="compacting table" table-name=index_19760
ts=2024-02-07T01:53:02.885408748Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-07T01:53:02.888781109Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=3.365728ms
ts=2024-02-07T01:53:02.888816505Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-07T01:53:02.935210294Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=46.386093ms
level=info ts=2024-02-07T01:53:02.935252653Z caller=table.go:126 table-name=index_19760 msg="no common index files and user index found"
level=info ts=2024-02-07T01:53:02.935263694Z caller=compactor.go:688 msg="finished compacting table" table-name=index_19760
(snip)

What is the correct configuration?

The configuration I currently use follows:

(snip)
    compactor:
        working_directory: /data/compactor
(snip)
    schema_config:
        configs:
            - from: "2020-10-24"
              index:
                period: 24h
                prefix: index_
              object_store: s3
              schema: v11
              store: boltdb-shipper
            - from: "2023-04-21"
              index:
                period: 24h
                prefix: index_
              object_store: bucket_202304
              schema: v11
              store: tsdb
            - from: "2023-04-27"
              index:
                period: 24h
                prefix: index_
              object_store: bucket_202305
              schema: v11
              store: tsdb
(snip)
            - from: "2024-03-21"
              index:
                period: 24h
                prefix: index_
              object_store: bucket_20240321
              schema: v11
              store: tsdb
(snip)
    storage_config:
        aws:
            s3: s3://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@${BUCKET_HOST}/${BUCKET_NAME}
            s3forcepathstyle: true
        boltdb_shipper:
            active_index_directory: /data/index
            cache_location: /data/boltdb-cache
            shared_store: s3
(snip)
        tsdb_shipper:
            active_index_directory: /data/tsdb-index
            cache_location: /data/tsdb-cache
            shared_store: s3
        named_stores:
            aws:
                bucket_202304:
                    s3: s3://${BUCKET_202304_AWS_ACCESS_KEY_ID}:${BUCKET_202304_AWS_SECRET_ACCESS_KEY}@${BUCKET_202304_BUCKET_HOST}/${BUCKET_202304_BUCKET_NAME}
                    s3forcepathstyle: true
                    signature_version: v4
                    storage_class: STANDARD
                bucket_202305:
                    s3: s3://${BUCKET_202305_AWS_ACCESS_KEY_ID}:${BUCKET_202305_AWS_SECRET_ACCESS_KEY}@${BUCKET_202305_BUCKET_HOST}/${BUCKET_202305_BUCKET_NAME}
                    s3forcepathstyle: true
                    signature_version: v4
                    storage_class: STANDARD
(snip)
                bucket_20240321:
                    s3: s3://${BUCKET_20240321_AWS_ACCESS_KEY_ID}:${BUCKET_20240321_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240321_BUCKET_HOST}/${BUCKET_20240321_BUCKET_NAME}
                    s3forcepathstyle: true
                    signature_version: v4
                    storage_class: STANDARD
    table_manager:
        creation_grace_period: 3h
        poll_interval: 10m
        retention_deletes_enabled: false
        retention_period: 0

What do you mean by compactor not performing correctly?

Regardless, a couple of things to try:

  1. Double check your limits_config and make sure retention_period is specified.

  2. If you are using simple scalable mode, try configuring common.compactor_address. Newer versions of simple scalable mode place the compactor on the backend component, so keep that in mind.
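For reference, the two suggestions above as a config fragment (the values here are taken from elsewhere in this thread and are illustrative, not recommendations):

```yaml
limits_config:
    # Suggestion 1: make sure a retention period is set
    retention_period: 750d
common:
    # Suggestion 2: point at the component running the compactor
    compactor_address: http://compactor.logging.svc.cluster.local.:3100
```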

What do you mean by compactor not performing correctly?

After the update, the index objects uploaded by ingesters are left as-is.

root@umezawa-sandbox-5cf6c4645c-czjhq:/# aws s3 ls --endpoint-url http://${BUCKET_HOST} s3://${BUCKET_NAME}/index/index_19760/
2024-02-07 00:01:11       2889 1707263111-ingester-1-1695196349218111984.tsdb.gz
2024-02-07 00:01:29       7739 1707263129-ingester-2-1695196241321173662.tsdb.gz
2024-02-07 00:16:04     137041 1707264004-ingester-0-1695196451578618046.tsdb.gz
2024-02-07 00:16:11     161568 1707264011-ingester-1-1695196349218111984.tsdb.gz
2024-02-07 00:16:29     175540 1707264029-ingester-2-1695196241321173662.tsdb.gz
2024-02-07 00:31:04     206192 1707264904-ingester-0-1695196451578618046.tsdb.gz
2024-02-07 00:31:11     189304 1707264911-ingester-1-1695196349218111984.tsdb.gz
2024-02-07 00:31:29     187950 1707264929-ingester-2-1695196241321173662.tsdb.gz
(snip)
2024-02-08 02:16:04      10572 1707357604-ingester-0-1695196451578618046.tsdb.gz
2024-02-08 02:16:11       9432 1707357611-ingester-1-1695196349218111984.tsdb.gz
2024-02-08 02:35:46       9678 1707357629-ingester-2-1695196241321173662.tsdb.gz
root@umezawa-sandbox-5cf6c4645c-czjhq:/#

Before the update, they were compacted and replaced with a single object per tenant per day.

root@umezawa-sandbox-5cf6c4645c-czjhq:/# aws s3 ls --endpoint-url http://${BUCKET_HOST} --recursive s3://${BUCKET_NAME}/index/index_19613/
2023-09-14 03:38:59     647424 index/index_19613/fake/1694662739-compactor-1694552952785-1694661429966-374edc11.tsdb.gz
2023-09-14 02:19:00    8270304 index/index_19613/foreign/1694657940-compactor-1694523901411-1694657289145-9bac740e.tsdb.gz
root@umezawa-sandbox-5cf6c4645c-czjhq:/# 
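A quick way to tell the two states apart from a listing: the object names above follow two patterns, per-ingester uploads (`<ts>-ingester-<n>-<id>.tsdb.gz`) and compactor output (`<ts>-compactor-<from>-<through>-<hex>.tsdb.gz`). A minimal sketch that classifies a table from its object names (the patterns are inferred from the listings in this thread, not from Loki source):

```python
import re

# Patterns inferred from the listings above:
INGESTER = re.compile(r"^\d+-ingester-\d+-\d+\.tsdb\.gz$")
COMPACTED = re.compile(r"^\d+-compactor-\d+-\d+-[0-9a-f]+\.tsdb\.gz$")

def table_is_compacted(object_names):
    """A table looks compacted when it holds only compactor-written
    objects and no per-ingester uploads remain."""
    names = [n.rsplit("/", 1)[-1] for n in object_names]
    has_ingester = any(INGESTER.match(n) for n in names)
    has_compacted = any(COMPACTED.match(n) for n in names)
    return has_compacted and not has_ingester

# Names taken from the listings in this thread:
print(table_is_compacted(
    ["index/index_19613/fake/1694662739-compactor-1694552952785-1694661429966-374edc11.tsdb.gz"]
))  # True
print(table_is_compacted(
    ["1707263111-ingester-1-1695196349218111984.tsdb.gz"]
))  # False
```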

Double check your limits_config and make sure retention_period is specified.

Yes, retention_period is specified.

If you are using simple scalable mode, try configuring common.compactor_address.

I’m using microservices mode.

common.compactor_address seems to be configured correctly: http://compactor.logging.svc.cluster.local.:3100

Can you share your entire Loki configuration?

Also, you wouldn’t happen to still have the /config output from before the upgrade (2.8.*), would you? Perhaps one of the default values changed, if you didn’t change the configuration during the upgrade.
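Even without the old /config dump, two config versions can be compared by flattening them to dotted paths and diffing the keys. A minimal sketch; the inline dicts mirror fragments of configurations (A) and (B) quoted below, not full configs:

```python
def flatten(d, prefix=""):
    """Flatten nested dicts into dotted-path keys."""
    out = {}
    for k, v in d.items():
        path = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, path + "."))
        else:
            out[path] = v
    return out

def diff_config(old, new):
    a, b = flatten(old), flatten(new)
    return {
        "removed": sorted(set(a) - set(b)),
        "added": sorted(set(b) - set(a)),
        "changed": sorted(k for k in set(a) & set(b) if a[k] != b[k]),
    }

# Fragments of configurations (A) and (B) from this thread:
before = {"compactor": {"shared_store": "s3", "working_directory": "/data/compactor"},
          "frontend": {"max_outstanding_per_tenant": 256}}
after = {"compactor": {"retention_enabled": True, "working_directory": "/data/compactor"},
         "frontend": {"max_outstanding_per_tenant": 2048}}

print(diff_config(before, after))
```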

Can you share your entire Loki configuration?

The configuration immediately before and after the update (A)

analytics:
    reporting_enabled: false
auth_enabled: true
chunk_store_config:
    chunk_cache_config:
        memcached:
            batch_size: 100
            parallelism: 100
        memcached_client:
            consistent_hash: true
            host: memcached.logging.svc.cluster.local
            service: memcached-client
common:
    compactor_address: http://compactor.logging.svc.cluster.local.:3100
compactor:
    shared_store: s3
    working_directory: /data/compactor
distributor:
    ring:
        kvstore:
            store: memberlist
frontend:
    compress_responses: true
    log_queries_longer_than: 5s
    max_outstanding_per_tenant: 256
    tail_proxy_url: http://querier.logging.svc:3100
frontend_worker:
    frontend_address: query-frontend-headless.logging.svc.cluster.local.:9095
    grpc_client_config:
        max_send_msg_size: 1.048576e+08
    match_max_concurrent: true
ingester:
    chunk_block_size: 262144
    chunk_idle_period: 15m
    lifecycler:
        heartbeat_period: 5s
        interface_names:
            - eth0
        join_after: 30s
        num_tokens: 512
        ring:
            heartbeat_timeout: 1m
            kvstore:
                store: memberlist
            replication_factor: 3
    max_transfer_retries: 0
    wal:
        dir: /loki/wal
        enabled: true
        flush_on_shutdown: true
        replay_memory_ceiling: 7GB
ingester_client:
    grpc_client_config:
        max_recv_msg_size: 6.7108864e+07
    remote_timeout: 1s
limits_config:
    enforce_metric_name: false
    ingestion_burst_size_mb: 20
    ingestion_rate_mb: 50
    ingestion_rate_strategy: global
    max_cache_freshness_per_query: 10m
    max_concurrent_tail_requests: 1000
    max_global_streams_per_user: 0
    max_query_length: 12000h
    max_query_parallelism: 16
    max_streams_per_user: 0
    query_timeout: 3m
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    split_queries_by_interval: 2h
memberlist:
    abort_if_cluster_join_fails: false
    bind_port: 7946
    gossip_interval: 5s
    join_members:
        - loki-gossip-ring.logging.svc:7946
    retransmit_factor: 2
    stream_timeout: 5s
querier:
    max_concurrent: 4
    query_ingesters_within: 2h
query_range:
    align_queries_with_step: true
    cache_results: true
    max_retries: 5
    results_cache:
        cache:
            memcached_client:
                consistent_hash: true
                host: memcached-frontend.logging.svc.cluster.local
                max_idle_conns: 16
                service: memcached-client
                timeout: 500ms
                update_interval: 1m
ruler: {}
schema_config:
    configs:
        - from: "2020-10-24"
          # This first section must be same as the original
          index:
            period: 24h
            prefix: index_
          object_store: s3
          schema: v11
          store: boltdb-shipper
        - from: "2023-04-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_202304
          schema: v11
          store: tsdb
        - from: "2023-04-27"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_202305
          schema: v11
          store: tsdb
        - from: "2023-07-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230701
          schema: v11
          store: tsdb
        - from: "2023-08-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230801
          schema: v11
          store: tsdb
        - from: "2023-08-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230811
          schema: v11
          store: tsdb
        - from: "2023-08-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230821
          schema: v11
          store: tsdb
        - from: "2023-09-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230901
          schema: v11
          store: tsdb
        - from: "2023-09-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230911
          schema: v11
          store: tsdb
        - from: "2023-09-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230921
          schema: v11
          store: tsdb
        - from: "2023-10-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231001
          schema: v11
          store: tsdb
        - from: "2023-10-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231011
          schema: v11
          store: tsdb
        - from: "2023-10-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231021
          schema: v11
          store: tsdb
        - from: "2023-11-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231101
          schema: v11
          store: tsdb
        - from: "2023-11-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231111
          schema: v11
          store: tsdb
        - from: "2023-11-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231121
          schema: v11
          store: tsdb
        - from: "2023-12-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231201
          schema: v11
          store: tsdb
        - from: "2023-12-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231211
          schema: v11
          store: tsdb
        - from: "2023-12-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231221
          schema: v11
          store: tsdb
server:
    graceful_shutdown_timeout: 5s
    grpc_server_max_concurrent_streams: 1000
    grpc_server_max_recv_msg_size: 1.048576e+08
    grpc_server_max_send_msg_size: 1.048576e+08
    grpc_server_min_time_between_pings: 10s
    grpc_server_ping_without_stream_allowed: true
    http_listen_port: 3100
    http_server_idle_timeout: 120s
    http_server_write_timeout: 1m
storage_config:
    aws:
        s3: s3://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@${BUCKET_HOST}/${BUCKET_NAME}
        s3forcepathstyle: true
    boltdb_shipper:
        active_index_directory: /data/index
        cache_location: /data/boltdb-cache
        shared_store: s3
    index_queries_cache_config:
        memcached:
            batch_size: 100
            parallelism: 100
        memcached_client:
            consistent_hash: true
            host: memcached-index-queries.logging.svc.cluster.local
            service: memcached-client
    tsdb_shipper:
        active_index_directory: /data/tsdb-index
        cache_location: /data/tsdb-cache
        shared_store: s3
    named_stores:
        aws:
            bucket_202304:
                s3: s3://${BUCKET_202304_AWS_ACCESS_KEY_ID}:${BUCKET_202304_AWS_SECRET_ACCESS_KEY}@${BUCKET_202304_BUCKET_HOST}/${BUCKET_202304_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_202305:
                s3: s3://${BUCKET_202305_AWS_ACCESS_KEY_ID}:${BUCKET_202305_AWS_SECRET_ACCESS_KEY}@${BUCKET_202305_BUCKET_HOST}/${BUCKET_202305_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230701:
                s3: s3://${BUCKET_20230701_AWS_ACCESS_KEY_ID}:${BUCKET_20230701_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230701_BUCKET_HOST}/${BUCKET_20230701_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230801:
                s3: s3://${BUCKET_20230801_AWS_ACCESS_KEY_ID}:${BUCKET_20230801_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230801_BUCKET_HOST}/${BUCKET_20230801_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230811:
                s3: s3://${BUCKET_20230811_AWS_ACCESS_KEY_ID}:${BUCKET_20230811_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230811_BUCKET_HOST}/${BUCKET_20230811_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230821:
                s3: s3://${BUCKET_20230821_AWS_ACCESS_KEY_ID}:${BUCKET_20230821_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230821_BUCKET_HOST}/${BUCKET_20230821_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230901:
                s3: s3://${BUCKET_20230901_AWS_ACCESS_KEY_ID}:${BUCKET_20230901_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230901_BUCKET_HOST}/${BUCKET_20230901_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230911:
                s3: s3://${BUCKET_20230911_AWS_ACCESS_KEY_ID}:${BUCKET_20230911_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230911_BUCKET_HOST}/${BUCKET_20230911_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230921:
                s3: s3://${BUCKET_20230921_AWS_ACCESS_KEY_ID}:${BUCKET_20230921_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230921_BUCKET_HOST}/${BUCKET_20230921_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231001:
                s3: s3://${BUCKET_20231001_AWS_ACCESS_KEY_ID}:${BUCKET_20231001_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231001_BUCKET_HOST}/${BUCKET_20231001_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231011:
                s3: s3://${BUCKET_20231011_AWS_ACCESS_KEY_ID}:${BUCKET_20231011_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231011_BUCKET_HOST}/${BUCKET_20231011_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231021:
                s3: s3://${BUCKET_20231021_AWS_ACCESS_KEY_ID}:${BUCKET_20231021_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231021_BUCKET_HOST}/${BUCKET_20231021_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231101:
                s3: s3://${BUCKET_20231101_AWS_ACCESS_KEY_ID}:${BUCKET_20231101_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231101_BUCKET_HOST}/${BUCKET_20231101_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231111:
                s3: s3://${BUCKET_20231111_AWS_ACCESS_KEY_ID}:${BUCKET_20231111_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231111_BUCKET_HOST}/${BUCKET_20231111_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231121:
                s3: s3://${BUCKET_20231121_AWS_ACCESS_KEY_ID}:${BUCKET_20231121_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231121_BUCKET_HOST}/${BUCKET_20231121_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231201:
                s3: s3://${BUCKET_20231201_AWS_ACCESS_KEY_ID}:${BUCKET_20231201_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231201_BUCKET_HOST}/${BUCKET_20231201_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231211:
                s3: s3://${BUCKET_20231211_AWS_ACCESS_KEY_ID}:${BUCKET_20231211_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231211_BUCKET_HOST}/${BUCKET_20231211_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231221:
                s3: s3://${BUCKET_20231221_AWS_ACCESS_KEY_ID}:${BUCKET_20231221_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231221_BUCKET_HOST}/${BUCKET_20231221_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
table_manager:
    creation_grace_period: 3h
    poll_interval: 10m
    retention_deletes_enabled: false
    retention_period: 0

The current configuration (B)

analytics:
    reporting_enabled: false
auth_enabled: true
chunk_store_config:
    chunk_cache_config:
        memcached:
            batch_size: 100
            parallelism: 100
        memcached_client:
            consistent_hash: true
            host: memcached.logging.svc.cluster.local
            service: memcached-client
common:
    compactor_address: http://compactor.logging.svc.cluster.local.:3100
compactor:
    retention_enabled: true
    working_directory: /data/compactor
distributor:
    ring:
        kvstore:
            store: memberlist
frontend:
    compress_responses: true
    log_queries_longer_than: 5s
    max_outstanding_per_tenant: 2048
    tail_proxy_url: http://querier.logging.svc:3100
frontend_worker:
    frontend_address: query-frontend-headless.logging.svc.cluster.local.:9095
    grpc_client_config:
        max_send_msg_size: 1.048576e+08
    match_max_concurrent: true
ingester:
    chunk_block_size: 262144
    chunk_idle_period: 15m
    lifecycler:
        heartbeat_period: 5s
        interface_names:
            - eth0
        join_after: 30s
        num_tokens: 512
        ring:
            heartbeat_timeout: 1m
            kvstore:
                store: memberlist
            replication_factor: 3
    max_transfer_retries: 0
    wal:
        dir: /loki/wal
        enabled: true
        flush_on_shutdown: true
        replay_memory_ceiling: 7GB
ingester_client:
    grpc_client_config:
        max_recv_msg_size: 6.7108864e+07
    remote_timeout: 1s
limits_config:
    enforce_metric_name: false
    ingestion_burst_size_mb: 20
    ingestion_rate_mb: 50
    ingestion_rate_strategy: global
    max_cache_freshness_per_query: 10m
    max_concurrent_tail_requests: 1000
    max_global_streams_per_user: 0
    max_query_length: 12000h
    max_query_parallelism: 16
    max_streams_per_user: 0
    query_timeout: 3m
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    retention_period: 750d
    split_queries_by_interval: 2h
memberlist:
    abort_if_cluster_join_fails: false
    bind_port: 7946
    gossip_interval: 5s
    join_members:
        - loki-gossip-ring.logging.svc:7946
    retransmit_factor: 2
    stream_timeout: 5s
querier:
    max_concurrent: 4
    query_ingesters_within: 2h
query_range:
    align_queries_with_step: true
    cache_results: true
    max_retries: 5
    results_cache:
        cache:
            memcached_client:
                consistent_hash: true
                host: memcached-frontend.logging.svc.cluster.local
                max_idle_conns: 16
                service: memcached-client
                timeout: 500ms
                update_interval: 1m
ruler: {}
schema_config:
    configs:
        - from: "2020-10-24"
          # This first section must be same as the original
          index:
            period: 24h
            prefix: index_
          object_store: s3
          schema: v11
          store: boltdb-shipper
        - from: "2023-04-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_202304
          schema: v11
          store: tsdb
        - from: "2023-04-27"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_202305
          schema: v11
          store: tsdb
        - from: "2023-07-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230701
          schema: v11
          store: tsdb
        - from: "2023-08-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230801
          schema: v11
          store: tsdb
        - from: "2023-08-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230811
          schema: v11
          store: tsdb
        - from: "2023-08-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230821
          schema: v11
          store: tsdb
        - from: "2023-09-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230901
          schema: v11
          store: tsdb
        - from: "2023-09-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230911
          schema: v11
          store: tsdb
        - from: "2023-09-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20230921
          schema: v11
          store: tsdb
        - from: "2023-10-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231001
          schema: v11
          store: tsdb
        - from: "2023-10-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231011
          schema: v11
          store: tsdb
        - from: "2023-10-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231021
          schema: v11
          store: tsdb
        - from: "2023-11-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231101
          schema: v11
          store: tsdb
        - from: "2023-11-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231111
          schema: v11
          store: tsdb
        - from: "2023-11-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231121
          schema: v11
          store: tsdb
        - from: "2023-12-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231201
          schema: v11
          store: tsdb
        - from: "2023-12-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231211
          schema: v11
          store: tsdb
        - from: "2023-12-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20231221
          schema: v11
          store: tsdb
        - from: "2024-01-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240101
          schema: v11
          store: tsdb
        - from: "2024-01-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240111
          schema: v11
          store: tsdb
        - from: "2024-01-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240121
          schema: v11
          store: tsdb
        - from: "2024-02-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240201
          schema: v11
          store: tsdb
        - from: "2024-02-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240211
          schema: v11
          store: tsdb
        - from: "2024-02-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240221
          schema: v11
          store: tsdb
        - from: "2024-03-01"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240301
          schema: v11
          store: tsdb
        - from: "2024-03-11"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240311
          schema: v11
          store: tsdb
        - from: "2024-03-21"
          index:
            period: 24h
            prefix: index_
          object_store: bucket_20240321
          schema: v11
          store: tsdb
server:
    graceful_shutdown_timeout: 5s
    grpc_server_max_concurrent_streams: 1000
    grpc_server_max_recv_msg_size: 1.048576e+08
    grpc_server_max_send_msg_size: 1.048576e+08
    grpc_server_min_time_between_pings: 10s
    grpc_server_ping_without_stream_allowed: true
    http_listen_port: 3100
    http_server_idle_timeout: 120s
    http_server_write_timeout: 1m
storage_config:
    aws:
        s3: s3://${AWS_ACCESS_KEY_ID}:${AWS_SECRET_ACCESS_KEY}@${BUCKET_HOST}/${BUCKET_NAME}
        s3forcepathstyle: true
    boltdb_shipper:
        active_index_directory: /data/index
        cache_location: /data/boltdb-cache
        shared_store: s3
    index_queries_cache_config:
        memcached:
            batch_size: 100
            parallelism: 100
        memcached_client:
            consistent_hash: true
            host: memcached-index-queries.logging.svc.cluster.local
            service: memcached-client
    tsdb_shipper:
        active_index_directory: /data/tsdb-index
        cache_location: /data/tsdb-cache
        shared_store: s3
    named_stores:
        aws:
            bucket_202304:
                s3: s3://${BUCKET_202304_AWS_ACCESS_KEY_ID}:${BUCKET_202304_AWS_SECRET_ACCESS_KEY}@${BUCKET_202304_BUCKET_HOST}/${BUCKET_202304_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_202305:
                s3: s3://${BUCKET_202305_AWS_ACCESS_KEY_ID}:${BUCKET_202305_AWS_SECRET_ACCESS_KEY}@${BUCKET_202305_BUCKET_HOST}/${BUCKET_202305_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230701:
                s3: s3://${BUCKET_20230701_AWS_ACCESS_KEY_ID}:${BUCKET_20230701_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230701_BUCKET_HOST}/${BUCKET_20230701_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230801:
                s3: s3://${BUCKET_20230801_AWS_ACCESS_KEY_ID}:${BUCKET_20230801_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230801_BUCKET_HOST}/${BUCKET_20230801_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230811:
                s3: s3://${BUCKET_20230811_AWS_ACCESS_KEY_ID}:${BUCKET_20230811_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230811_BUCKET_HOST}/${BUCKET_20230811_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230821:
                s3: s3://${BUCKET_20230821_AWS_ACCESS_KEY_ID}:${BUCKET_20230821_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230821_BUCKET_HOST}/${BUCKET_20230821_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230901:
                s3: s3://${BUCKET_20230901_AWS_ACCESS_KEY_ID}:${BUCKET_20230901_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230901_BUCKET_HOST}/${BUCKET_20230901_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230911:
                s3: s3://${BUCKET_20230911_AWS_ACCESS_KEY_ID}:${BUCKET_20230911_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230911_BUCKET_HOST}/${BUCKET_20230911_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20230921:
                s3: s3://${BUCKET_20230921_AWS_ACCESS_KEY_ID}:${BUCKET_20230921_AWS_SECRET_ACCESS_KEY}@${BUCKET_20230921_BUCKET_HOST}/${BUCKET_20230921_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231001:
                s3: s3://${BUCKET_20231001_AWS_ACCESS_KEY_ID}:${BUCKET_20231001_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231001_BUCKET_HOST}/${BUCKET_20231001_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231011:
                s3: s3://${BUCKET_20231011_AWS_ACCESS_KEY_ID}:${BUCKET_20231011_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231011_BUCKET_HOST}/${BUCKET_20231011_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231021:
                s3: s3://${BUCKET_20231021_AWS_ACCESS_KEY_ID}:${BUCKET_20231021_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231021_BUCKET_HOST}/${BUCKET_20231021_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231101:
                s3: s3://${BUCKET_20231101_AWS_ACCESS_KEY_ID}:${BUCKET_20231101_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231101_BUCKET_HOST}/${BUCKET_20231101_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231111:
                s3: s3://${BUCKET_20231111_AWS_ACCESS_KEY_ID}:${BUCKET_20231111_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231111_BUCKET_HOST}/${BUCKET_20231111_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231121:
                s3: s3://${BUCKET_20231121_AWS_ACCESS_KEY_ID}:${BUCKET_20231121_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231121_BUCKET_HOST}/${BUCKET_20231121_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231201:
                s3: s3://${BUCKET_20231201_AWS_ACCESS_KEY_ID}:${BUCKET_20231201_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231201_BUCKET_HOST}/${BUCKET_20231201_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231211:
                s3: s3://${BUCKET_20231211_AWS_ACCESS_KEY_ID}:${BUCKET_20231211_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231211_BUCKET_HOST}/${BUCKET_20231211_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20231221:
                s3: s3://${BUCKET_20231221_AWS_ACCESS_KEY_ID}:${BUCKET_20231221_AWS_SECRET_ACCESS_KEY}@${BUCKET_20231221_BUCKET_HOST}/${BUCKET_20231221_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240101:
                s3: s3://${BUCKET_20240101_AWS_ACCESS_KEY_ID}:${BUCKET_20240101_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240101_BUCKET_HOST}/${BUCKET_20240101_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240111:
                s3: s3://${BUCKET_20240111_AWS_ACCESS_KEY_ID}:${BUCKET_20240111_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240111_BUCKET_HOST}/${BUCKET_20240111_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240121:
                s3: s3://${BUCKET_20240121_AWS_ACCESS_KEY_ID}:${BUCKET_20240121_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240121_BUCKET_HOST}/${BUCKET_20240121_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240201:
                s3: s3://${BUCKET_20240201_AWS_ACCESS_KEY_ID}:${BUCKET_20240201_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240201_BUCKET_HOST}/${BUCKET_20240201_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240211:
                s3: s3://${BUCKET_20240211_AWS_ACCESS_KEY_ID}:${BUCKET_20240211_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240211_BUCKET_HOST}/${BUCKET_20240211_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240221:
                s3: s3://${BUCKET_20240221_AWS_ACCESS_KEY_ID}:${BUCKET_20240221_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240221_BUCKET_HOST}/${BUCKET_20240221_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240301:
                s3: s3://${BUCKET_20240301_AWS_ACCESS_KEY_ID}:${BUCKET_20240301_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240301_BUCKET_HOST}/${BUCKET_20240301_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240311:
                s3: s3://${BUCKET_20240311_AWS_ACCESS_KEY_ID}:${BUCKET_20240311_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240311_BUCKET_HOST}/${BUCKET_20240311_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
            bucket_20240321:
                s3: s3://${BUCKET_20240321_AWS_ACCESS_KEY_ID}:${BUCKET_20240321_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240321_BUCKET_HOST}/${BUCKET_20240321_BUCKET_NAME}
                s3forcepathstyle: true
                signature_version: v4
                storage_class: STANDARD
table_manager:
    creation_grace_period: 3h
    poll_interval: 10m
    retention_deletes_enabled: false
    retention_period: 0

❯ diff -u loki-old.conf loki-current.conf
--- loki-old.conf       2024-02-09 15:54:12.361656026 +0900
+++ loki-current.conf   2024-02-09 15:51:49.906591732 +0900
@@ -13,7 +13,7 @@
 common:
     compactor_address: http://compactor.logging.svc.cluster.local.:3100
 compactor:
-    shared_store: s3
+    retention_enabled: true
     working_directory: /data/compactor
 distributor:
     ring:
@@ -22,7 +22,7 @@
 frontend:
     compress_responses: true
     log_queries_longer_than: 5s
-    max_outstanding_per_tenant: 256
+    max_outstanding_per_tenant: 2048
     tail_proxy_url: http://querier.logging.svc:3100
 frontend_worker:
     frontend_address: query-frontend-headless.logging.svc.cluster.local.:9095
@@ -67,6 +67,7 @@
     query_timeout: 3m
     reject_old_samples: true
     reject_old_samples_max_age: 168h
+    retention_period: 750d
     split_queries_by_interval: 2h
 memberlist:
     abort_if_cluster_join_fails: false
@@ -229,6 +230,69 @@
           object_store: bucket_20231221
           schema: v11
           store: tsdb
+        - from: "2024-01-01"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240101
+          schema: v11
+          store: tsdb
+        - from: "2024-01-11"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240111
+          schema: v11
+          store: tsdb
+        - from: "2024-01-21"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240121
+          schema: v11
+          store: tsdb
+        - from: "2024-02-01"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240201
+          schema: v11
+          store: tsdb
+        - from: "2024-02-11"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240211
+          schema: v11
+          store: tsdb
+        - from: "2024-02-21"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240221
+          schema: v11
+          store: tsdb
+        - from: "2024-03-01"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240301
+          schema: v11
+          store: tsdb
+        - from: "2024-03-11"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240311
+          schema: v11
+          store: tsdb
+        - from: "2024-03-21"
+          index:
+            period: 24h
+            prefix: index_
+          object_store: bucket_20240321
+          schema: v11
+          store: tsdb
 server:
     graceful_shutdown_timeout: 5s
     grpc_server_max_concurrent_streams: 1000
@@ -351,6 +415,51 @@
                 s3forcepathstyle: true
                 signature_version: v4
                 storage_class: STANDARD
+            bucket_20240101:
+                s3: s3://${BUCKET_20240101_AWS_ACCESS_KEY_ID}:${BUCKET_20240101_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240101_BUCKET_HOST}/${BUCKET_20240101_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240111:
+                s3: s3://${BUCKET_20240111_AWS_ACCESS_KEY_ID}:${BUCKET_20240111_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240111_BUCKET_HOST}/${BUCKET_20240111_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240121:
+                s3: s3://${BUCKET_20240121_AWS_ACCESS_KEY_ID}:${BUCKET_20240121_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240121_BUCKET_HOST}/${BUCKET_20240121_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240201:
+                s3: s3://${BUCKET_20240201_AWS_ACCESS_KEY_ID}:${BUCKET_20240201_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240201_BUCKET_HOST}/${BUCKET_20240201_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240211:
+                s3: s3://${BUCKET_20240211_AWS_ACCESS_KEY_ID}:${BUCKET_20240211_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240211_BUCKET_HOST}/${BUCKET_20240211_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240221:
+                s3: s3://${BUCKET_20240221_AWS_ACCESS_KEY_ID}:${BUCKET_20240221_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240221_BUCKET_HOST}/${BUCKET_20240221_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240301:
+                s3: s3://${BUCKET_20240301_AWS_ACCESS_KEY_ID}:${BUCKET_20240301_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240301_BUCKET_HOST}/${BUCKET_20240301_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240311:
+                s3: s3://${BUCKET_20240311_AWS_ACCESS_KEY_ID}:${BUCKET_20240311_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240311_BUCKET_HOST}/${BUCKET_20240311_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
+            bucket_20240321:
+                s3: s3://${BUCKET_20240321_AWS_ACCESS_KEY_ID}:${BUCKET_20240321_AWS_SECRET_ACCESS_KEY}@${BUCKET_20240321_BUCKET_HOST}/${BUCKET_20240321_BUCKET_NAME}
+                s3forcepathstyle: true
+                signature_version: v4
+                storage_class: STANDARD
 table_manager:
     creation_grace_period: 3h
     poll_interval: 10m

Also, you wouldn’t happen to still have the /config output from before (2.8.*), would you?

Because the configuration is passed in via a ConfigMap, I don’t have it anymore, I think.
(Does that answer your question?)

This could be your problem then: shared_store: s3.

This could be your problem then: shared_store: s3.

Which shared_store?
Should I remove both storage_config.boltdb_shipper.shared_store and storage_config.tsdb_shipper.shared_store?

I meant adding shared_store: s3 under the compactor block. From your diff it looks like you had it before, but not anymore.
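For reference, the two keys I mean are the ones visible in your own diff; the pre-2.9 shape would be:

```yaml
compactor:
    shared_store: s3            # the object store the compactor reads/writes index files from
    working_directory: /data/compactor
```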

I meant adding shared_store: s3 under the compactor block.

I see.
However, immediately after the update (i.e. while shared_store: s3 still existed under compactor), compaction was not performed either.

Anyway, I will try it (again).

I tried it in a test environment, but compaction was not performed in either case.

When compactor.shared_store: s3 does not exist:

compactor logs
level=info ts=2024-02-14T10:56:41.510455352Z caller=compactor.go:517 msg="applying retention with compaction"
level=info ts=2024-02-14T10:56:41.510551844Z caller=expiration.go:78 msg="overall smallest retention period 1643108201.51, default smallest retention period 1643108201.51"
ts=2024-02-14T10:56:41.510667051Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.535092122Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=24.408411ms
ts=2024-02-14T10:56:41.535135744Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.55124274Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=16.100724ms
ts=2024-02-14T10:56:41.551277746Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.560374037Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=9.089157ms
ts=2024-02-14T10:56:41.560417568Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.577357632Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=16.928111ms
ts=2024-02-14T10:56:41.577402016Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.591768345Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=14.357803ms
ts=2024-02-14T10:56:41.591828438Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.61578621Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=23.932454ms
ts=2024-02-14T10:56:41.615909332Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.689920131Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=73.998496ms
ts=2024-02-14T10:56:41.689980966Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.702808219Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=12.817174ms
ts=2024-02-14T10:56:41.70289959Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.71793017Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=15.021292ms
ts=2024-02-14T10:56:41.717990403Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.736632591Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=18.63283ms
ts=2024-02-14T10:56:41.736687354Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.784427748Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=47.73245ms
ts=2024-02-14T10:56:41.784485949Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.820140435Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=35.644848ms
ts=2024-02-14T10:56:41.820198955Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.899250752Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=79.04302ms
ts=2024-02-14T10:56:41.899300877Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.913268246Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=13.959413ms
ts=2024-02-14T10:56:41.913311847Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.971944032Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=58.611105ms
ts=2024-02-14T10:56:41.971992433Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:41.993326256Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=21.3268ms
ts=2024-02-14T10:56:41.993372523Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.013117717Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=19.73767ms
ts=2024-02-14T10:56:42.013162711Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.037160689Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=23.990864ms
ts=2024-02-14T10:56:42.037207978Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.091345613Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=54.130662ms
ts=2024-02-14T10:56:42.091386139Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.109776082Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=18.382308ms
ts=2024-02-14T10:56:42.109840834Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.123264008Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=13.41551ms
ts=2024-02-14T10:56:42.123306999Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.166992271Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=43.676716ms
ts=2024-02-14T10:56:42.167024322Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.183194876Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=16.161838ms
ts=2024-02-14T10:56:42.183231135Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.208981451Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=25.740788ms
ts=2024-02-14T10:56:42.209022358Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.225808993Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=16.777709ms
ts=2024-02-14T10:56:42.22590833Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.236764553Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=10.848028ms
level=info ts=2024-02-14T10:56:42.236846548Z caller=compactor.go:683 msg="compacting table" table-name=index_19767
ts=2024-02-14T10:56:42.236965472Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.285300004Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=48.326998ms
ts=2024-02-14T10:56:42.285346652Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2024-02-14T10:56:42.333122473Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=47.765712ms
level=info ts=2024-02-14T10:56:42.333171174Z caller=table.go:126 table-name=index_19767 msg="no common index files and user index found"
level=info ts=2024-02-14T10:56:42.333189529Z caller=compactor.go:688 msg="finished compacting table" table-name=index_19767

When it does exist:

compactor logs
level=info ts=2024-02-14T11:27:08.201541482Z caller=compactor.go:508 msg="compactor startup delay completed"
level=info ts=2024-02-14T11:27:08.201671047Z caller=compactor.go:562 msg="compactor started"
level=info ts=2024-02-14T11:27:08.201686246Z caller=marker.go:177 msg="mark processor started" workers=150 delay=2h0m0s
level=info ts=2024-02-14T11:27:08.201938Z caller=compactor.go:517 msg="applying retention with compaction"
level=info ts=2024-02-14T11:27:08.20199077Z caller=expiration.go:78 msg="overall smallest retention period 1643110028.201, default smallest retention period 1643110028.201"
ts=2024-02-14T11:27:08.202108411Z caller=spanlogger.go:86 level=info msg="building table names cache"
level=info ts=2024-02-14T11:27:08.202662475Z caller=marker.go:202 msg="no marks file found"
ts=2024-02-14T11:27:08.212414073Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=10.264984ms
level=info ts=2024-02-14T11:27:08.212484886Z caller=compactor.go:683 msg="compacting table" table-name=index_19767
level=error ts=2024-02-14T11:27:08.212536744Z caller=compactor.go:523 msg="failed to run compaction" err="index store client not found for bucket_20240211"

Any ideas?

One more thing to try is setting shared_store to one of your named_stores, since that’s where your various index buckets are configured.
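Hypothetically (I haven’t verified that Loki accepts a named store here; the bucket name is just one of yours picked as an example), that would look something like:

```yaml
compactor:
    working_directory: /data/compactor
    shared_store: bucket_20240211   # hypothetical: point the compactor at one of the named stores
```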

The last thing to try would probably be looking at your various S3 buckets. To be frank, I think you have way too many, but I digress. If I remember correctly, you want to keep all indexes in the same bucket, and you can use multiple S3 buckets for chunk storage. There is no reason to have multiple buckets for indexes, because indexes are small.
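As a sketch of that layout (hedged: the key names are taken from your own config and diff; whether this resolves the compactor error is exactly what’s being tested here):

```yaml
# Sketch: keep every period's index in the single "s3" store,
# rotate buckets only for chunks.
storage_config:
    tsdb_shipper:
        shared_store: s3              # all index files go to one store
schema_config:
    configs:
        - from: "2024-02-11"
          store: tsdb
          object_store: bucket_20240211   # chunks for this period only
          schema: v11
          index:
            period: 24h
            prefix: index_
```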

I don’t use multiple S3 buckets for indexes, so I can’t say for sure. But this should be easy to test by standing up a new cluster, sending some logs, changing the index settings, and then sending some more logs. And if this turns out to be the problem, I’d recommend not doing it unless there is a good reason to.

One more thing to try is setting shared_store to one of your named_stores, since that’s where your various index buckets are configured.

I think that would not work, because index objects are stored only in the s3 store, not in bucket_202304 etc. (I re-confirmed this just now.)

If I remember correctly, you want to keep all indexes in the same bucket, and you can use multiple S3 buckets for chunk storage.

I want to use multiple buckets per period for chunk storage. I don’t care which bucket is used for index storage. When I first configured multiple buckets, Loki began storing the index in the “first” bucket (i.e. the s3 store).

The reason I use multiple buckets is that the team that maintains the Ceph cluster at my company has said the number of objects in each bucket should be limited to 20M, which is very small compared to the scale of my Loki cluster / Kubernetes cluster.

I went back and looked at your error again with shared_store configured, it says:

level=error ts=2024-02-14T11:27:08.212536744Z caller=compactor.go:523 msg="failed to run compaction" err="index store client not found for bucket_20240211"

That’s oddly specific, do you see anything in your index storage for that specific bucket?
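One way to check is to list the index prefix of the shared store directly. This is only an illustrative sketch: the `index/` prefix is where the shippers conventionally upload index tables, and the endpoint/bucket variables are placeholders for your RGW values, so adjust as needed:

```shell
# List index tables for one table in the shared "s3" store on Ceph RGW
# (path-style access, hence --endpoint-url).
aws --endpoint-url "https://${BUCKET_HOST}" s3 ls \
    "s3://${BUCKET_NAME}/index/" --recursive | grep index_19767
```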

Do you want to know whether the index storage (the s3 store) contains any index objects for the period that uses bucket_20240211 (the current period)?

Perhaps, but I don’t know if it’s easy to find. I just thought it was oddly specific.

At this point I don’t know how much more help I can be. I would still recommend using an identical configuration (except with only one S3 bucket for chunk storage) on a test cluster, trying to replicate the issue, and seeing if there is a pattern.

case 1: The original configuration (compactor.shared_store is not set, multiple buckets) → “no common index files and user index found”

case 2: compactor.shared_store: s3, multiple buckets → “index store client not found for bucket_20240301”

case 3: compactor.shared_store is not set, single bucket → OK

case 4: compactor.shared_store: s3, single bucket → OK

case 5: compactor.shared_store: s3, multiple buckets, add s3 to named_store → configuration error

(logs and the index bucket object listings are omitted because they are far too long)

hmm

The upgrade guide says “Compactor now supports index compaction on multiple buckets/object stores”.

I am wondering what configuration was used to test the support.

This seems to be the pull request that added compactor multi-store support: compactor: multi-store support by ashwanthgoli · Pull Request #7447 · grafana/loki · GitHub

According to the docs, it would seem that your original configuration was correct (NOT explicitly configuring shared_store), which should default to performing compaction on all storages.

Your original error (no common index files and user index found) seems to come from this: loki/pkg/compactor/table.go at e71964cca461c9da6515ce0b25467fa8d17b3598 · grafana/loki · GitHub

This would imply that it’s not able to list your index files for at least some of the storages. Unfortunately I don’t have a lot of experience with multi-store setups. At this point I would recommend posting in the Slack channel to see if someone more knowledgeable than I can engage, or even opening an issue on the Loki GitHub repo as well.

Lastly, I thought compactor multi-store wasn’t introduced until 2.9, so how did you get it to work in 2.8? Interesting.

As I wrote before, index objects are stored only in the “first” bucket. I’m not sure exactly what the (old) compactor does, but maybe it only knows about the “first” bucket and just compacts the indexes in it without seeing the chunk objects.
