Loki ingester RAM utilization increased from 12 GB to 42 GB after adding more OTel collectors

Hi Team,
Last Friday we deployed a few more OTel collectors that send logs to Loki. Since Monday, active series/chunks have increased from 1.5k to 3k, and chunks per series have increased from 6 to 12.

Previously we ran 2 ingester replicas using about 12 GB RAM each. Now, with 3 ingester replicas, RAM utilization has increased to 42 GB, and we are still hitting OOM errors and container restarts.

How can we get the RAM utilization under control, and what could be the reason for RAM increasing from 12 GB to 42 GB?

```
ts=2025-08-04T15:09:04.502739738Z caller=manager.go:50 component=distributor path=write msg="write operation failed" details="Ingestion rate limit exceeded for user fake (limit: 34952533 bytes/sec) while attempting to ingest '1136' lines totaling '1451507' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased" org_id=fake
level=warn ts=2025-08-04T15:08:21.163558087Z caller=tcp_transport.go:440 component="memberlist TCPTransport" msg="WriteTo failed" addr=10.0.5.156:7946 err="dial tcp <>:7946: i/o timeout"
```

Screenshots are attached showing the increases in loki_ingester_chunk_age_seconds, loki_ingester_memory_chunks, and loki_ingester_flush_queue_length.



  1. How much additional log volume would you say you are shipping to Loki?
  2. I see quite a bit of an increase in the number of series. I’d also look into your OTel collector configuration and make sure you are not using unbounded labels; see the collector sketch below for one way to rein those in.
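
For point 2, one common fix is to drop or normalize high-cardinality resource attributes in the collector before they ever reach Loki, since every distinct label combination becomes a separate stream held in the ingester. A rough OpenTelemetry Collector sketch (untested; `k8s.pod.uid` is just a placeholder for whatever unbounded attribute you find, and the receiver/exporter sections are assumptions about your pipeline):

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}
      http: {}

processors:
  # Drop resource attributes that would otherwise become unbounded stream labels.
  resource:
    attributes:
      - key: k8s.pod.uid          # placeholder: replace with the unbounded attribute(s) you find
        action: delete
  batch: {}

exporters:
  otlphttp:
    endpoint: http://loki:3100/otlp   # assumed Loki OTLP endpoint; adjust to your address

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [resource, batch]
      exporters: [otlphttp]
```

The relevant part is the `resource` processor; it applies the same way whichever exporter you use.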

I don’t use Loki’s OTLP endpoint, so I don’t really know how it behaves. But you could try using Alloy to transform your OTel logs and send them to Loki over Loki’s native push endpoint instead, and see whether the resource usage is different then.
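
On the ingester memory question itself: RAM there scales roughly with the number of active streams times the chunks held in memory per stream, which matches your series count and chunks-per-series both doubling. The distributor log also shows you are already hitting the per-tenant ingestion rate limit. Below is a hedged sketch of the Loki settings that usually govern this; the values are illustrative placeholders, not recommendations, and raising the rate limit admits more data (and more memory), so pair it with the chunk-flush and stream limits:

```yaml
limits_config:
  ingestion_rate_mb: 50              # the distributor error shows this per-tenant limit being hit
  ingestion_burst_size_mb: 100
  max_global_streams_per_user: 10000 # caps stream growth from the new collectors

ingester:
  chunk_idle_period: 30m             # flush idle chunks sooner so they don't sit in RAM
  max_chunk_age: 1h                  # force-flush long-lived chunks
  chunk_target_size: 1572864         # ~1.5 MB compressed target per chunk
  chunk_encoding: snappy
```

After any change, the first two metrics you listed (chunk age and memory chunks) are the ones to watch to confirm chunks are actually being flushed out of memory.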