LOKI Consumes Large Disk Space in /share/containers/storage

My Loki instance is installed on RHEL 8 as a container using Podman. I have run into an issue with Loki taking up over 120G of disk space in the /share/containers/storage/overlay-containers directory. In comparison, my Grafana container only uses 1.9G.

I have one volume attached, pointing to loki/data. It is only 9.2G in size, so I am wondering where all the extra space is coming from.

I logged into the container and ran: du -sh /* | sort -h
The largest directory returned was /loki at 9.5G, along with another six folders, all in the M or K size range.

If my local share volume is only 9.2G and the largest folder in the container is only 9.5G, then what is consuming 120G worth of disk space? I have two instances of Loki running on different servers as Podman containers, and I have the same problem on each of them.
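
One way to narrow this down from the host side (a rough sketch; the storage path below assumes a rootless Podman setup, so adjust it to match yours) is to compare what Podman reports per container against a du of the container directories themselves:

podman system df
podman ps -a --size
du -sh ~/.local/share/containers/storage/overlay-containers/*/userdata

That should at least show whether the space is sitting in image layers, in the container's writable layer, or in files Podman keeps alongside the container (such as its log file) rather than inside the data volume.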

Here is my current config file, in case I did something wrong, which is entirely possible:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /loki/data
  storage:
    filesystem:
      chunks_directory: /loki/data/chunks
      rules_directory: /loki/data/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/data/index
    cache_location: /loki/data/boltdb-cache
    shared_store: filesystem

compactor:
  working_directory: /loki/retention
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150

limits_config:
  retention_period: 72h

ruler:
  alertmanager_url: http://localhost:9093

# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
#
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
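
As a side note, with retention_enabled: true and a 72h retention_period, the compactor should be deleting old chunks on its own. A quick sanity check (a sketch, assuming the container is named loki and the data directory is /loki/data as above) is to watch the chunks directory and the compactor metrics that Loki exposes on its HTTP port:

podman exec loki du -sh /loki/data/chunks
curl -s http://localhost:3100/metrics | grep -i compactor

If the chunks directory stays around the 9.2G you are already seeing, retention is working and the extra space is coming from somewhere else.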

If you restart your Loki container, does the space get cleared up?

Check Docker logs for your Loki container.

It does not clean up on a restart. I will have to dig into the logs from the actual file location. I ran podman logs loki, and that was a mistake because it keeps printing to my screen and runs forever.
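
For what it's worth, podman logs does not have to replay the whole history; limiting it to recent output (the container name loki is assumed here) makes it much more manageable:

podman logs --tail 100 loki
podman logs --since 1h loki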

If LOKI is consuming excessive disk space in /share/containers/storage, it’s likely due to retained logs or improper cleanup. To resolve this:

  1. Check Retention Settings: Verify and reduce the log retention period in LOKI’s configuration.
  2. Enable Compression: Use log compression to save storage.
  3. Clear Unused Logs: Manually clean up unnecessary files in the storage directory.
  4. Monitor Usage: Regularly monitor disk space to avoid future issues.

Optimizing these settings can significantly reduce disk space usage.
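
For points 1 and 4 in particular, a quick way to confirm what retention Loki is actually running with, and to keep an eye on where the space is going (port and path taken from the original post), would be:

curl -s http://localhost:3100/config | grep -i retention
du -sh /share/containers/storage/overlay-containers/*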

We had to go into /.local/share/containers/storage/overlay-containers/f78d1fbdc27be707322aafec385d741915dd8749b15da047a2611c411fe0e3c6/userdata,
which is the userdata directory for the Loki container. Inside it is a log file named ctr.log. I am not 100% sure how Loki uses this log, but it was taking up all of the 120G.

Here is one line from this log:

2025-01-15T04:01:00.045146042-05:00 stderr F level=info ts=2025-01-15T09:01:00.016969642Z caller=metrics.go:159 component=querier org_id=fake traceID=7d53c8e825a416bb latency=fast que$

I am wondering exactly what this data is and what it means, and how to keep the file from getting too large without having to create some job to clean it up every so often.
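
From what I can tell, that ctr.log is not something Loki manages at all: it is where Podman's k8s-file log driver writes everything the container prints to stdout/stderr, including the per-query level=info metrics.go lines from the querier, and it is not rotated unless you tell Podman to cap it. A sketch of how to do that when recreating the container is below; the size, volume mount, port, and image tag are placeholders, so match them to however you originally ran Loki:

# Cap the log file Podman writes to userdata/ctr.log
# (10mb is only an example; keep your original volumes, ports, and config flags).
podman run -d --name loki \
  --log-opt max-size=10mb \
  -v /path/to/loki/data:/loki/data \
  -p 3100:3100 \
  docker.io/grafana/loki:latest

# Alternatively, set a global cap for all containers in containers.conf:
# [containers]
# log_size_max = 10485760

Another option is to send container output to the journal instead (--log-driver journald), so the system journal's own rotation applies, or to make Loki quieter in the first place by raising its log level (log_level: warn under the server block), since those per-query metrics.go lines are emitted at info level.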