Ingest large volume of logs using monolithic installation

Hi, I am trying to ingest a very large volume of logs into a Grafana Loki monolithic installation. I notice that the AWS Lambda extension, which uses a Promtail client, is skipping large log entries from the Lambda function. I see the following error message in CloudWatch Logs. I would greatly appreciate it if anybody can guide me on how to resolve this issue.
2024/05/01 23:20:00 promtail.ClientProto: Unexpected HTTP status code: 500, message: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (27575603 vs. 4194304)

I believe you can fix that error message by tuning grpc_server_max_recv_msg_size and grpc_server_max_send_msg_size.

BTW, how “large” is large?

Hello @tonyswumac, thanks a lot for your response. I will try this out and get back to you. It’s around 200 MB of logs per execution of the Lambda function.

200 MB to a monolithic instance per run sounds pretty heavy. You might want to consider adding some sort of rate limiting to the Promtail client in the Lambda function. I’ve never used the Lambda/Promtail extension before, so I am not sure how configurable it is. Worst case, you could write your own Lambda function and just call the Loki API instead.
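If you go the custom-Lambda route, a minimal sketch of pushing logs to Loki’s `/loki/api/v1/push` endpoint might look like the following. This is not the extension’s actual code; the label set, batch size, and `loki_url` are placeholders you’d adapt, and batching the lines keeps each request well under the server’s gRPC/HTTP message limits:

```python
import json
import time
import urllib.request


def build_loki_payload(labels, lines):
    """Build a Loki push-API payload: one stream carrying the given label set.

    Loki expects values as [<nanosecond timestamp string>, <log line>] pairs.
    """
    ts = str(time.time_ns())
    return {
        "streams": [
            {
                "stream": labels,
                "values": [[ts, line] for line in lines],
            }
        ]
    }


def push_to_loki(loki_url, payload):
    """POST the payload to Loki's push endpoint (returns 204 on success)."""
    req = urllib.request.Request(
        loki_url + "/loki/api/v1/push",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


def push_in_batches(loki_url, labels, lines, batch_size=1000):
    """Split a large log set into smaller pushes instead of one huge request."""
    for i in range(0, len(lines), batch_size):
        push_to_loki(loki_url, build_loki_payload(labels, lines[i:i + batch_size]))
```

Batching on the client side also means you may not need to raise the server-side message limits as far, since no single request carries the whole 200 MB.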

It seems to be working after adding the following to the Loki configuration. Now I am able to see the logs:
server:
  grpc_server_max_recv_msg_size: 28575603
  grpc_server_max_send_msg_size: 28575603

@tonyswumac Could you please suggest how to do a production-grade configuration of Loki on an Amazon EC2 server? Is it possible to scale a monolithic deployment?

Yes, according to the documentation, to scale a monolithic cluster you’d simply deploy more EC2 instances with the same configuration. You’d need to be using object storage as the backend, of course, so that every instance reads and writes the same chunks.
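As a rough sketch (key names are from Loki’s configuration reference; the bucket name and instance addresses below are hypothetical and would need replacing), a scaled monolithic setup typically combines an S3 backend with a memberlist ring so the instances find each other:

```yaml
target: all            # run every component in each instance (monolithic mode)
common:
  replication_factor: 3
  ring:
    kvstore:
      store: memberlist
  storage:
    s3:
      bucketnames: my-loki-chunks   # hypothetical bucket name
      region: us-east-1
memberlist:
  join_members:
    - loki-1.internal:7946          # hypothetical EC2 instance addresses
    - loki-2.internal:7946
```

Each instance would run with this same configuration, differing only in its own address.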

And if you are doing that, you might as well consider going with some sort of container platform such as ECS to make it easier to scale up and down.