Hi. It is not explicitly mentioned in the documentation. Rather, the documentation on Loki deployment modes shows “Cloud storage” in the diagrams.
Does Loki support simple scalable mode deployment with filesystem object storage? In other words, would multiple Loki writers conflict with each other when using the same filesystem? Does Loki employ locking when using filesystem object storage?
Thanks.
You can run simple scalable mode on the filesystem as long as all Loki containers have access to the same filesystem (such as NFS). It’s not supported, though, and you will probably run into limitations or performance issues rather quickly if you are considering NFS, so I would recommend doing this only if it’s your only option.
Hi, wait, you confused me. The first sentence says “you can”, but the second says “not supported”.
No NFS, it’s all on a local hard drive. I think it’s an XFS filesystem.
Just because you “can” do something doesn’t mean it’s “supported” (not supported meaning it will not be considered a bug if something in the code breaks the behavior).
If you are thinking about using just the local filesystem, then you cannot use simple scalable mode. When deploying in simple scalable mode or microservices mode, all containers must have access to the same backend storage.
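To illustrate what “same backend storage” means in practice, here is a hedged sketch of the relevant Loki config fragment for simple scalable mode: every read and write component loads the same shared object-store settings. The endpoint and bucket name below are placeholders, not values from this thread.

```yaml
# Hypothetical config fragment for simple scalable mode.
# All Loki components (read, write, backend) must point at the
# same shared object store; a per-node local path would not work.
common:
  replication_factor: 1
  storage:
    s3:
      endpoint: s3.example.com      # placeholder endpoint
      bucketnames: loki-chunks      # placeholder bucket
      access_key_id: ${AWS_ACCESS_KEY_ID}
      secret_access_key: ${AWS_SECRET_ACCESS_KEY}
      s3forcepathstyle: true
```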
Hi. Ooh, I understand. Thank you.
I want to run simple scalable mode with containers on one machine. Does such a setup make sense? Or does it not, because everything is still the resources of one machine? Thank you.
The purpose of simple scalable mode is to make Loki scalable. If you intend to run Loki on one machine only, it doesn’t make sense to use scalable mode, since you’ll never scale. I’d just go with one monolithic container.
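For reference, a monolithic single-machine deployment runs everything in one process (the default `-target=all`) and can use the local filesystem directly. This is a hedged sketch only; the paths and schema dates are placeholders, not settings from this thread.

```yaml
# Hypothetical monolithic Loki config: one process, local filesystem storage.
auth_enabled: false

server:
  http_listen_port: 3100

common:
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory          # fine for a single process
  storage:
    filesystem:
      chunks_directory: /loki/chunks   # placeholder path
      rules_directory: /loki/rules     # placeholder path

schema_config:
  configs:
    - from: 2024-01-01         # placeholder date
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
```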
Thank you. That means I’ll wait till we have S3. Thanks.
Hi @tonyswumac as always, thank you for your support.
Coming back to the issue: there are bugs in Loki, and certain queries cause Loki to blow up and die out of memory.
I do not want the whole Loki process to die out of memory, so I want to split responsibilities between processes. It makes sense. As of now, if a user executes a big enough query, Loki starts allocating over 120 GB of memory, brings down the server, and dies out of memory.
Moreover, every single out-of-memory kill and Loki restart carries the risk that the filesystem state might not be recoverable and our data might get lost. While the data are on RAID, Loki dying mid-write can cause problems.
Does this make sense?
How would I split a Loki instance between writers and readers so that they are separate processes?
Relevant github issues:
Thanks.
You would have to run in simple scalable mode to split write and read traffic. You’ll need a container platform such as EKS, and S3 storage.
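As a hedged sketch of what the split looks like: in simple scalable mode each component runs the same binary and config, differing only in the `-target` flag, and all of them must share the same object-store backend. The config path below is a placeholder.

```
# Hypothetical invocations: same binary and config, different targets.
# All three processes must share the same S3 (or other object-store) backend.
loki -config.file=/etc/loki/config.yaml -target=write    # ingest path
loki -config.file=/etc/loki/config.yaml -target=read     # query path
loki -config.file=/etc/loki/config.yaml -target=backend  # compactor, ruler, etc.
```

With this split, a query that exhausts memory takes down only the read process, while the write path keeps ingesting.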