Multiple clusters using the same S3 bucket to store Loki logs, but logs are still separate somehow?

I have two separate Kubernetes clusters, both running Loki with a config like this:

  config:
    auth_enabled: false          # single-tenant mode: everything falls under the default "fake" tenant
 
    storage_config:
      aws:
        s3: s3://us-east-1       # region-only URL; credentials come from the default AWS credential chain
        bucketnames: a-shared-loki-bucket
      boltdb_shipper:
        active_index_directory: /data/loki/index    # local working copy of the active index
        shared_store: s3         # index files get shipped to the same S3 bucket as the chunks
        cache_location: /data/loki/boltdb-cache     # local cache of index files downloaded from S3

(Note that auth_enabled is false, so both clusters send either the same tenant ID or none at all.)
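
For comparison, if auth_enabled were true, every request would have to name a tenant explicitly via the X-Scope-OrgID header. A hypothetical example (the host and query here are made up):

    # Hypothetical: with auth_enabled=true each cluster could be isolated by tenant ID
    curl -G -H "X-Scope-OrgID: blue" \
      "http://loki.blue.example:3100/loki/api/v1/query_range" \
      --data-urlencode 'query={namespace="kube-system"}'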

I have Loki instances running in both a blue and a green Kubernetes cluster. Both use the same config as above and point at the same S3 bucket.

However, when I query Loki in the blue cluster, I only see logs from pods in the blue cluster. When I query Loki in the green cluster, I only see logs from pods in the green cluster.
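
To see what actually lands in the bucket, it can be listed with the AWS CLI. A sketch, assuming Loki's default object layout (chunks keyed per tenant, boltdb-shipper index files under index/):

    # Chunks: with auth_enabled=false both clusters write under the default "fake" tenant prefix
    aws s3 ls s3://a-shared-loki-bucket/fake/ --recursive | head

    # Index files uploaded by boltdb-shipper from each cluster's ingesters
    aws s3 ls s3://a-shared-loki-bucket/index/ --recursive | head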

How are the clusters sharing an S3 bucket for Loki, but somehow not actually sharing the logs?

I am deploying Loki via the official Helm chart, which also deploys Grafana, Promtail, and Prometheus.
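
For reference, the deployment looks roughly like this (a sketch assuming the grafana/loki-stack chart; loki-values.yaml is a stand-in for the config shown above):

    helm repo add grafana https://grafana.github.io/helm-charts
    helm upgrade --install loki grafana/loki-stack \
      --namespace loki --create-namespace \
      --set grafana.enabled=true \
      --set prometheus.enabled=true \
      --set promtail.enabled=true \
      -f loki-values.yaml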