Howdy everyone, as the title suggests: I’m currently using loki-stack, and there’s no denying that I really like this tech stack. Right now it collects console (stdout) logs; does it also support collecting log files written inside containers? If there is any established practice in this area, I’d appreciate a pointer to it. Good luck with the loki-stack project~!
Your question is not directly related to Loki, just want to point that out.
In general it’s not the best idea for an application to write logs to files inside the container. The most obvious drawback is that once the container terminates you lose all the logs. If you cannot change the application’s behavior, you have a couple of options:
- Use a sidecar container to read the logs. The sidecar can be almost anything: Alloy, fluentd, fluent-bit, it doesn’t really matter. You’ll need to configure a volume, shared by the main and sidecar containers, where the logs are written.
- Mount a Docker volume (or a directory from the host) into the container at the path where the logs are written, and configure Alloy to read that directory on the host.
- Build Alloy (or another logging agent) into the container image. This is probably the worst option.
#1 is probably your best bet.
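A minimal sketch of option 1, assuming the application writes its logs to /var/log/app; the names, image tags, and paths here are illustrative, not taken from the thread:

```yaml
# Hypothetical Pod spec: the app writes logs to a shared emptyDir,
# and a sidecar (here Alloy; fluentd/fluent-bit work the same way)
# reads them from the same volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar    # illustrative name
spec:
  volumes:
    - name: app-logs
      emptyDir: {}              # shared scratch volume; removed with the Pod
  containers:
    - name: app                 # your application container
      image: my-app:latest      # placeholder image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app   # assumed log directory
    - name: log-agent           # sidecar log shipper
      image: grafana/alloy:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true        # the sidecar only needs to read
```

The sidecar still needs its own configuration telling it to tail files under /var/log/app and push them to your Loki endpoint.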
I agree that writing logs inside the container means they are lost when the container dies. But my service is deployed on k8s, so we can create a PV dedicated to storing these logs: the directory the container writes to is mounted onto the PV, so whether the container is running or not, my logs are still retained. The drawback is that mounting the logs onto a PV consumes a certain amount of IO and disk space.

Personally, I still recommend writing logs to the console. Console output is a great default, and if you’re worried about losing logs you can use a tool like filebeat to collect them in real time. Then you don’t need to fear container downtime: as long as the container is online, its logs are collected, and once it’s down there are no logs to speak of anyway. It’s a shame my superiors can’t be persuaded by this suggestion, so we write the logs to files in the container and then collect them.
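The PV approach described above can be sketched roughly as follows; the claim name, storage size, and mount path are assumptions for illustration:

```yaml
# Hypothetical sketch: mount a PVC into the app container so log files
# survive container restarts (at the cost of extra IO and disk space).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-logs-pvc            # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi              # assumed size, tune to your log volume
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  volumes:
    - name: logs
      persistentVolumeClaim:
        claimName: app-logs-pvc
  containers:
    - name: app
      image: my-app:latest      # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # assumed log directory
```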
I have another question. Since deploying the loki-stack, I’ve noticed that the test environments have a low number of pod replicas and work without any stress. But in the production environment the difference is noticeable: queries are quite a bit slower, and sometimes it takes a while before some labels appear on the Grafana side.
Take for example the following feature:
Sometimes it loads fast, sometimes slow, and the bigger my cluster gets, the slower the label dropdown loads. Do I need to scale up the replica counts of some loki-stack components to solve this?
[root@jxwom ~]# kubectl get po -n loki
NAME                            READY   STATUS    RESTARTS      AGE
loki-0                          1/1     Running   0             33h
loki-grafana-7476b8bd9d-cr99b   2/2     Running   0             33h
loki-promtail-6rrsc             1/1     Running   1 (33h ago)   2d6h
loki-promtail-9bcsq             1/1     Running   0             33h
loki-promtail-9xhrt             1/1     Running   0             30h
As far as I know, this loki-0 pod is the single binary responsible for ingesting, storing, and querying logs (the promtail pods do the collecting).
I’m thinking I’ll have to scale it up, along with the Grafana side of the visualisation. Would doing that achieve the results I’m hoping for, i.e. faster access and label lists that load fully and quickly?
Loki’s performance comes from distribution. If you have a reasonably sizable log volume, I’d advise deploying in Simple Scalable mode.
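As a starting point, switching modes with the grafana/loki Helm chart looks roughly like the fragment below. This is a hedged sketch: the replica counts are arbitrary starting values, not tuned recommendations, and Simple Scalable mode requires object storage (S3 assumed here), so check the chart’s documentation for your version before applying it.

```yaml
# Hypothetical values.yaml fragment for the grafana/loki Helm chart,
# moving from single-binary to Simple Scalable mode.
deploymentMode: SimpleScalable
write:
  replicas: 3    # write path (distributors/ingesters)
read:
  replicas: 3    # read path (queriers/query-frontend)
backend:
  replicas: 3    # background components (compactor, ruler, ...)
loki:
  storage:
    type: s3     # assumed S3-compatible object storage backend
```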
Hmmmmm, log volume has spiked suddenly lately. I’ll try it the way you suggest, thanks for the reply bro!