Grafana Loki implementation on multiple kubernetes clusters

I have 5 Kubernetes clusters, and I’m preparing to implement Loki to collect logs from all the clusters. I’m considering 3 architectures:

  1. Install the Loki stack in a single cluster (the observe cluster) and configure Promtail on the other clusters to ship their logs to it. However, I’m concerned that sending all logs from the 5 clusters to a single S3 bucket and through the ingesters of the observe cluster could introduce latency and errors.

  2. Install the Loki stack on each cluster, with each cluster having its own S3 storage. This architecture won’t allow me to have a global view of all clusters via a single Grafana datasource (on the observe cluster).

  3. Install a querier on the observe cluster that queries the S3 storage and queriers of the other clusters (similar to Thanos).
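
For option 1, the per-cluster wiring is mostly on the Promtail side. A minimal sketch of what each workload cluster's Promtail client section might look like — the URL and label value are placeholders, not real endpoints:

```yaml
clients:
  # Push to the central Loki in the observe cluster (hypothetical URL)
  - url: https://loki.observe.example.com/loki/api/v1/push
    # An external label per cluster lets you filter/group logs from all
    # 5 clusters inside the single central Loki
    external_labels:
      cluster: prod-eu-1
```

With a distinct `cluster` label per source cluster, one Grafana datasource pointed at the central Loki gives the global view you mention in option 2.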

Could someone help me identify the best solution? Based on your experiences, what method would you recommend?
Thanks 🙂🙂

I’d say #1. Unless your log volumes are really heavy, you should not need more than one Loki cluster (you can configure multiple S3 buckets as storage for Loki to sort of spread the load out, if necessary).
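
If it helps, Loki's S3 storage config accepts a comma-separated list of bucket names, and chunks are sharded across them — which is the "spread the load out" part. A rough sketch, with placeholder bucket names and region:

```yaml
storage_config:
  aws:
    s3: s3://eu-west-1
    # Comma-separated list: Loki distributes chunk writes across these
    # buckets, spreading S3 request-rate load (names are hypothetical)
    bucketnames: loki-chunks-1,loki-chunks-2,loki-chunks-3
```
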