Problem with compaction: can't seem to find the index correctly

Hi,
I am having a problem with compaction. The compactor finds an index table that does exist, but it doesn't seem able to process it properly. We're using an S3-compatible storage service.

Ingestion and querying are fine, but compaction is failing, so more index files are being kept around than I'd like at this stage.

The compactor section of my config is:

compactor:
  working_directory: /mnt/loki/boltdb-shipper-compactor

and the container logs show:

level=info ts=2021-11-04T06:25:37.522034442Z caller=module_service.go:59 msg=initialising module=compactor

level=info ts=2021-11-04T06:35:37.545794267Z caller=compactor.go:271 msg="compacting table" table-name=index_18935

level=error ts=2021-11-04T06:35:37.554468676Z caller=compactor.go:213 msg="failed to compact files" table=index_18935 err="empty db name, object key: index_18935/"

level=error ts=2021-11-04T06:35:37.55457339Z caller=compactor.go:160 msg="failed to run compaction" err="empty db name, object key: index_18935/"

There is other output in the container logs, but nothing else related to the compactor.

I went through the Loki code, and it looks like it's failing in loki/pkg/storage/stores/shipper/compactor/table.go,

which I think calls ListFiles in /Users/s64246/dataeng/lokisrc/loki/pkg/storage/stores/shipper/storage/client.go:

ListFiles(ctx context.Context, tableName string) (IndexFile, error)

func (s *indexStorageClient) ListFiles(ctx context.Context, tableName string) (IndexFile, error) {

and since we're using an S3-compatible storage system, it might not be as compatible as we'd like.

That's just a hunch and could be way off the mark, but I didn't feel like working out how to rebuild the container to debug things if I was heading in the wrong direction.
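If I'm reading it right, the per-table listing includes the zero-byte object whose key is just index_18935/ (the "directory" placeholder our store creates, visible in the listing below), and stripping the table prefix from that key leaves an empty db name, which matches the error above. Here's a minimal sketch of that reading; the helper name is made up and this is not the actual Loki code:

package main

import (
	"fmt"
	"strings"
)

// dbNameFromKey is a hypothetical helper (NOT the real Loki function) that
// mimics deriving a db name from an object key by stripping the "<table>/"
// prefix, which is roughly what I think ListFiles ends up doing.
func dbNameFromKey(tableName, objectKey string) (string, error) {
	name := strings.TrimPrefix(objectKey, tableName+"/")
	if name == "" {
		// reproduces the error text the compactor is logging
		return "", fmt.Errorf("empty db name, object key: %s", objectKey)
	}
	return name, nil
}

func main() {
	keys := []string{
		// zero-byte "directory marker" object our store keeps for the table prefix
		"index_18935/",
		// a real index db uploaded by the ingester
		"index_18935/88a286490c85-1634938808363378555-1636005600.gz",
	}
	for _, key := range keys {
		name, err := dbNameFromKey("index_18935", key)
		fmt.Printf("name=%q err=%v\n", name, err)
	}
}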


 s3cmd ls -l s3://lokidatanew/index
                          DIR                                                    s3://lokidatanew/index/delete_requests/
                          DIR                                                    s3://lokidatanew/index/index_18935/
2021-11-04 06:25            0  d41d8cd98f00b204e9800998ecf8427e     STANDARD     s3://lokidatanew/index/

 s3cmd ls -l s3://lokidatanew/index/index_18935
                          DIR                                                    s3://lokidatanew/index/index_18935/

 s3cmd ls -l s3://lokidatanew/index/index_18935/

2021-11-04 06:25            0  d41d8cd98f00b204e9800998ecf8427e     STANDARD     s3://lokidatanew/index/index_18935/
2021-11-04 06:25       303932  1f9402a8b6a1dd07f1ee5741c38106d9     STANDARD     s3://lokidatanew/index/index_18935/88a286490c85-1634938808363378555-1636005600.gz
2021-11-04 06:31       223228  7a1c2598e601c3aeba6597a956ff2339     STANDARD     s3://lokidatanew/index/index_18935/88a286490c85-1634938808363378555-1636006500.gz
2021-11-04 06:31       332056  0005054a786f996dad48acd321d4f2da     STANDARD     s3://lokidatanew/index/index_18935/88a286490c85-1634938808363378555-1636007137.gz
2021-11-04 06:46       197057  8c6107bd9870f0542a332c2d33111434     STANDARD     s3://lokidatanew/index/index_18935/88a286490c85-1634938808363378555-1636007400.gz
2021-11-04 07:01       360891  2ea580f9ccaa95937ccfa6e5028eb7c1     STANDARD     s3://lokidatanew/index/index_18935/88a286490c85-1634938808363378555-1636008300.gz

 s3cmd ls -l s3://lokidatanew/index/delete_requests/

2021-11-04 06:30            0  d41d8cd98f00b204e9800998ecf8427e     STANDARD     s3://lokidatanew/index/delete_requests/
2021-11-04 07:00          174  20b9908837ca93bc04d3d000dbee665b     STANDARD     s3://lokidatanew/index/delete_requests/delete_requests.gz
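Those 0-byte entries with md5 d41d8cd98f00b204e9800998ecf8427e are empty objects, i.e. the directory markers the store creates for each prefix, and the one at index_18935/ is exactly the key the compactor error complains about. In case it's useful for anyone else, here's a rough helper (not from Loki; the endpoint and region are placeholders for our setup) that lists any such markers under the index prefix with aws-sdk-go:

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Endpoint and region are placeholders for our HCP setup; credentials
	// come from the usual AWS environment variables.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("https://hcp.example.com"),
		S3ForcePathStyle: aws.Bool(true),
	}))
	svc := s3.New(sess)

	err := svc.ListObjectsV2Pages(&s3.ListObjectsV2Input{
		Bucket: aws.String("lokidatanew"),
		Prefix: aws.String("index/"),
	}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
		for _, obj := range page.Contents {
			// zero-byte keys ending in "/" are the directory markers
			if strings.HasSuffix(*obj.Key, "/") && *obj.Size == 0 {
				fmt.Println("directory marker:", *obj.Key)
			}
		}
		return true
	})
	if err != nil {
		log.Fatal(err)
	}
}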

My config is

auth_enabled: true

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /mnt/loki/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h
  max_chunk_age: 30m
  chunk_target_size: 1048576
  chunk_retain_period: 5m
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-05-15
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  aws:
    s3: s3://XXXX:YYYY@HCP_platform/lokidatanew
    s3forcepathstyle: true

  boltdb_shipper:
    active_index_directory: /mnt/loki/index
    shared_store: s3
    cache_location: /mnt/loki/boltdb-cache

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 72h
  retention_period: 72h
  per_tenant_override_config: /mnt/loki/override-config.yaml
  ingestion_rate_mb: 24

compactor:
  working_directory: /mnt/loki/boltdb-shipper-compactor
  shared_store: s3
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 30m
  retention_delete_worker_count: 1

chunk_store_config:
  max_look_back_period: 72h

table_manager:
  retention_deletes_enabled: true
  retention_period: 72h

ruler:
  storage:
    type: local
    local:
      directory: /mnt/loki/rules
  rule_path: /mnt/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

(The table manager is enabled, but disabling it didn't help with compaction. Hopefully there isn't a conflict.)

Hope you can help.

Regards,

Greg

Thank you, anonymous coder who worked on Loki compaction. I updated Loki to 2.4.1 and compaction doesn't barf any more.

Regards,

Greg
