How can I verify now that a storage config scheduled for the future is working?

I have two schema_config entries. The first one points to a local MinIO instance that is in use now, but I'd like to migrate to Amazon S3, so I added a second storage config that will be used in a few days.
But how do I make sure, right now, that the connection to the new bucket actually works? There is nothing in the logs indicating a failure, nor are there any files in the AWS bucket.

Is there a way to check that it will work once the '- from: …' date is reached and the new storage_config is used?

auth_enabled: false

server:
  http_listen_address: 0.0.0.0
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: consul
        prefix: service/loki/collectors/
        consul:
          host: 172.17.0.1:8500
      replication_factor: 1
    final_sleep: 0s
  wal:
    dir: "/tmp/wal"

schema_config:
  configs:
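    # Current period: the local MinIO instance (default "aws" store below)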
    - from: "2024-06-28"
      index:
        period: 24h
        prefix: index_
      object_store: aws
      schema: v13
      store: tsdb
    - from: "2024-12-15" # <---- A date in the future on wich the new db schema is used
      index:
        period: 24h
        prefix: index_
      object_store: amazon-s3
      schema: v13
      store: tsdb

compactor:
  working_directory: /loki/compactor
  delete_request_store: amazon-s3
  retention_enabled: true

storage_config:
  aws:
    s3: https://${MINIO_ACCESS_KEY}:${MINIO_SECRET_KEY}@s3api.domain.com
    s3forcepathstyle: true
    bucketnames: loki
  # New tsdb-shipper configuration
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index
    cache_location: /loki/tsdb-cache
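  # Named store "amazon-s3", referenced above as object_store in the second
  # schema_config entry and as delete_request_store in the compactor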
  named_stores:
    aws:
      amazon-s3:
        region: eu-central-1
        bucketnames: sfw-loki
        s3forcepathstyle: false
        access_key_id: ${AWS_S3_LOKI_ACCESS_KEY_ID}
        secret_access_key: ${AWS_S3_LOKI_SECRET_ACCESS_KEY}

limits_config:
  reject_old_samples: true
  retention_period: 90d
  max_query_lookback: 90d
  ingestion_rate_mb: 20
  ingestion_burst_size_mb: 30
  per_stream_rate_limit: "5MB"
  per_stream_rate_limit_burst: "20MB"
  shard_streams:
    enabled: true

query_scheduler:
  # the TSDB index dispatches many more, but each individually smaller, requests. 
  # We increase the pending request queue sizes to compensate.
  max_outstanding_requests_per_tenant: 32768

querier:
  # Each `querier` component process runs a number of parallel workers to process queries simultaneously.
  # You may want to adjust this up or down depending on your resource usage
  # (more available cpu and memory can tolerate higher values and vice versa),
  # but we find the most success running at around `16` with tsdb
  max_concurrent: 16

I approached it differently: I changed the date on which the new storage is used to today, restarted Loki, and checked the logs there. I noticed that the service account has no policy that allows writing to the bucket, so I requested one. WIP…
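For reference, the temporary change only touched the second schema_config entry; it looked roughly like this (the date is illustrative, set to whatever "today" is at the time of the test):

    - from: "2024-07-01" # example only: temporarily set to today's date
      index:
        period: 24h
        prefix: index_
      object_store: amazon-s3
      schema: v13
      store: tsdb

With that in place, any problem writing to the new bucket (like the missing write permission in my case) should show up in the Loki logs shortly after the restart.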

Since it does not work for the moment, I switched the date for the new storage back to a date in the future, and Loki uses the old bucket again. Downtime was about two minutes, which is fine for me.