Read-only cluster separated from main cluster

When using remote storage (e.g. S3) with boltdb-shipper, is it possible to deploy another, separate Loki cluster that doesn't join the main cluster's hash ring and only processes read requests?

I don't see any reason you cannot, provided you run only the reader (and disable the compactor, for example). But why would you want to do this, when you can simply separate writers and readers?
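For reference, here's a rough, trimmed sketch of what I mean, assuming the simple scalable setup where every process shares one config file and the `-target` flag picks which components it runs (hostnames and the bucket name are placeholders, and `schema_config` is omitted):

```yaml
# Shared config; the -target flag selects the components each process runs:
#   loki -config.file=loki.yaml -target=write   # ingest path
#   loki -config.file=loki.yaml -target=read    # query path
memberlist:
  join_members:
    - loki-write-1.example.internal   # placeholder hostnames
    - loki-write-2.example.internal

common:
  path_prefix: /loki
  replication_factor: 1
  ring:
    kvstore:
      store: memberlist

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    shared_store: s3
  aws:
    bucketnames: my-loki-chunks   # placeholder bucket
    region: us-east-1
```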

Hi @tonyswumac, I keep getting msg="error processing request" try=3 err="rpc error: code = Code(500) desc = empty ring\n when running with -target=read outside of the main cluster. In memberlist.join_members, I only specified the read-only instance that is outside of the main cluster.
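For context, the memberlist section on the isolated reader looks roughly like this (the hostname is a placeholder); it only lists itself, since nothing in the main cluster is reachable from its network:

```yaml
memberlist:
  join_members:
    - loki-read-isolated.example.internal   # only the isolated read instance;
                                            # no member of the main cluster's ring
```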

Our use case is that we may need to do separate read analysis outside of the main cluster, on a different network, which makes it hard to connect to the existing instances.

Judging by the error message, your other readers probably aren't able to communicate with the writers. I'd make sure the separate readers have identical settings.

Just to be clear, I believe that if you aren't using the query frontend, readers are self-contained, so you can have multiple readers with identical configuration that won't send traffic to each other, and you can divide traffic at the load balancer. If you are using the query frontend, then you'll probably want two sets of QF + queriers in order to have complete separation.
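As a sketch of that second option, each cluster's queriers would attach only to their own query frontend via `frontend_worker`; the address below is a placeholder for the secondary cluster's QF:

```yaml
# Querier config in the secondary cluster; main-cluster queriers would
# point at the main cluster's query frontend instead.
frontend_worker:
  frontend_address: query-frontend.secondary.example.internal:9095
```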

So are you saying that the read components still need to connect to the write components through memberlist? If so, then we can't really say we can deploy a separate and isolated read cluster, right? My motivation for doing this is that I can't join the memberlist (read and write) due to the isolated network between the clusters.

I use QF in both the main cluster and the secondary cluster.

Yes, I suspect that's what you need. You can try to spin up a secondary cluster with identical components, and make doubly sure that the compactor isn't running and the writers aren't taking traffic, but in my opinion that's rather dangerous.

I see. So we still need to deploy the write components, but we just have to make sure they don't receive any write requests. We also need to make sure the compactor isn't running (though I'm not sure how to do that when using the simple scalable deployment mode).
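Maybe naming the query-path components explicitly would keep the compactor (and ingesters) from ever starting; this is pure speculation on my part and assumes the `target` option accepts a comma-separated list in the version we run:

```yaml
# Speculative config for the secondary, read-only cluster.
target: query-frontend,querier   # query path only; no compactor, no ingester

# ...same schema_config / storage_config as the main cluster...
```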

By the way, what about using read-only access to the chunk store? This is trivial with S3. Writes will fail, but will that affect read operations?
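Something along these lines is what I have in mind; the bucket and keys are placeholders, and the access key is assumed to carry only read/list permissions on the bucket:

```yaml
# Secondary cluster pointing at the same bucket with read-only credentials,
# so any write attempt from this cluster fails at the S3 layer.
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    shared_store: s3
  aws:
    bucketnames: my-loki-chunks
    region: us-east-1
    access_key_id: READ_ONLY_KEY_ID
    secret_access_key: READ_ONLY_SECRET
```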

Worth a try. I haven't had the need to do this, so I can only speculate; you'll have to try the available options and see whether it works.
