Multiple Loki sending to one S3

Hi! My name is Emi. I have a question about multiple Loki instances using the same S3 bucket for storage. We have multiple Kubernetes clusters which are not peered, but we need a central place to store logs from those clusters.

I could use multi-tenancy if those clusters were peered, but I can't. And sending to the same S3 bucket in single-tenant mode will be a problem because the orgID will not be unique.

Is there a good way to solve this? Can I just add a tenant ID or something unique in single-tenant mode?

Thank you!!


You could enable multi-tenant mode and just have a proxy in front of Loki add the X-Scope-OrgID header to all incoming requests, setting the org for everything that cluster sends (you would want this for both reads and writes).
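As a rough sketch of that proxy idea, an nginx reverse proxy could stamp the header on every request before it reaches Loki. The upstream address and the tenant name (`blue-cluster`) below are placeholders, not values from this thread:

```nginx
# Hypothetical nginx config: set a fixed tenant ID for all traffic
# from this cluster, reads and writes alike.
server {
    listen 3100;

    location / {
        # Every request to Loki from this cluster is tagged with this tenant
        proxy_set_header X-Scope-OrgID "blue-cluster";
        proxy_pass http://loki.monitoring.svc.cluster.local:3100;
    }
}
```

Each cluster would run its own copy of this proxy with a different `X-Scope-OrgID` value, and Grafana (or any reader) would go through the same proxy so queries carry the matching tenant.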

It also should be ok to send traffic from multiple Loki servers to the same bucket with the same tenant ID, just be aware that queries would include data from both tenants.
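If you do go the shared-tenant route, one way to keep the clusters distinguishable at query time is to attach a cluster label on the client side. A minimal Promtail sketch, where the label name `cluster` and its values are assumptions for illustration:

```yaml
# Hypothetical Promtail client config: each cluster adds its own
# external label so shared-tenant queries can still be filtered.
clients:
  - url: http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push
    external_labels:
      cluster: blue   # use e.g. "green" in the other cluster
```

Queries like `{cluster="blue"}` would then return only that cluster's logs, even though both clusters share the same bucket and tenant.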


If I understand correctly, if we had 2 separate clusters, both using Loki with a config like this:

    auth_enabled: false

    storage_config:
      aws:
        s3: s3://us-east-1
        bucketnames: a-shared-loki-bucket
      boltdb_shipper:
        active_index_directory: /data/loki/index
        shared_store: s3
        cache_location: /data/loki/boltdb-cache

This would work, but queries would just include data from both tenants?

(Notice that auth_enabled is false, and both clusters would be sending the same tenant ID, or none at all)

Thank you!

EDIT: I tried to prove this out, and for some reason, even though both clusters have a Loki instance writing to the shared bucket, running queries against Loki in Grafana only returns data from that cluster.

So for example, we have Loki running in a blue and a green EKS cluster. Both have the same config as above and are using the same S3 bucket.

However, when I query Loki in the blue cluster, I only see logs from pods in the blue cluster. When I query Loki in the green cluster, I only see logs from pods in the green cluster.

How are the clusters sharing an S3 bucket for Loki, but somehow not actually sharing the logs?