I am using SSD (simple scalable deployment) with S3 as the backend. I am pushing a large volume of logs to Loki via K6 (300K logs every 2 minutes). My EKS cluster and S3 bucket are in the same region. If I search for a string from the previous day (a 24-hour window), I start seeing issues with the backend afterwards. It seems there is a problem with writing to S3 during the read operation. Is there anything that can be done about this? Thanks in advance.
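For context, the search I run looks roughly like this (the gateway address, stream selector, and search string below are placeholders, not my real values):

```bash
# Example 24-hour string search via the Loki query_range API
# (endpoint, label selector, and search term are placeholders)
START=$(date -d '24 hours ago' +%s)000000000
END=$(date +%s)000000000
curl -G 'http://loki-gateway/loki/api/v1/query_range' \
  --data-urlencode 'query={job="k6"} |= "some-string"' \
  --data-urlencode "start=${START}" \
  --data-urlencode "end=${END}" \
  --data-urlencode 'limit=100'
```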
Thanks for your response. I have 3 read and 3 write pods. The write operation happens via a CronJob that triggers a K6 job to push the logs every 2 minutes. When there is no other operation, it all works well.
If I filter for text within a 24-hour window (which takes about 2-3 minutes to complete), I eventually get the result.
However, if I then go back and search the complete log for, say, the past 15 minutes, I see a gap in the logs that should have been pushed via K6 during the time the search was running. I don't see any errors in the K6 logs or the write pods. Thanks again in advance.
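A rough sketch of how I spot the gap (again, the gateway address and stream selector are placeholders); the per-minute count drops to zero for the interval that overlaps the search:

```bash
# Count log lines per minute over the last 15 minutes to spot gaps
# (stream selector is a placeholder for the K6-generated stream)
curl -G 'http://loki-gateway/loki/api/v1/query_range' \
  --data-urlencode 'query=count_over_time({job="k6"}[1m])' \
  --data-urlencode "start=$(date -d '15 minutes ago' +%s)000000000" \
  --data-urlencode "end=$(date +%s)000000000" \
  --data-urlencode 'step=60'
```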
Thanks for your help. I am deploying Loki SSD using the related Helm chart. I didn't think the ring was mandatory. Here is the result of the curl command (curl 'http://localhost:3100/ring') against one of the write pods. Is the ring mandatory?
Ring Status
Current time: 2024-02-29 04:12:01.583764656 +0000 UTC m=+28351.302502624
That looks pretty normal. Then I would double-check your frontend (if you are using the Helm chart it should be nginx) and make sure it's working normally.
I do not think the issue you are having is related to the Loki reader or writer containers directly. If you are using simple scalable mode and the nginx frontend is configured correctly, traffic should be routed to the readers and writers accordingly, meaning they should not interfere with each other.
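A quick way to sanity-check that routing, assuming the default gateway deployment from the Helm chart (the namespace and deployment name below are assumptions; adjust for your release):

```bash
# Dump the gateway's nginx config and confirm push traffic goes to the
# write service while query traffic goes to the read service
# (namespace/deployment names are assumptions based on the default chart)
kubectl -n loki exec deploy/loki-gateway -- \
  cat /etc/nginx/nginx.conf | grep -A2 -E 'location.*(push|query)'
```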
There are also a lot of metrics exposed by Loki; I'd recommend looking at the S3-related ones and checking whether there are any latency spikes or errors.
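For example, with a port-forward to one of the read or write pods (the same access you used for /ring), something like this should surface the S3 request metrics; exact metric names vary by Loki version, so treat them as assumptions:

```bash
# Scrape the pod's metrics endpoint and filter for S3 request stats
# (e.g. request durations and per-status-code counts; names may differ by version)
curl -s 'http://localhost:3100/metrics' | grep -i 's3_request'
```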