Logs disappearing after 11 days in storage

Logs delivered by Promtail to Loki appear to be deleted after 11 days: queries no longer find them.
Here's my /etc/loki/config.yml:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: error
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 172.20.0.239
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

ingester_rf1:
  enabled: false

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

compactor:
  working_directory: /tmp/loki/retention
  compaction_interval: 15m
  retention_enabled: false
  retention_delete_delay: 1h
  retention_delete_worker_count: 50
  delete_request_store: filesystem

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

limits_config:
  ingestion_rate_mb: 4
  ingestion_burst_size_mb: 6
  reject_old_samples: true
  reject_old_samples_max_age: 3w
  retention_period: 672h

pattern_ingester:
  enabled: true
  metric_aggregation:
    enabled: true
    loki_address: localhost:3100

ruler:
  alertmanager_url: http://localhost:9093

frontend:
  encoding: protobuf

What am I doing wrong? :frowning:

Don't see anything obvious. A couple of things to try:

  1. Are you sure the logs are actually deleted? Check the filesystem Loki writes to and see whether there are files older than 11 days (commands sketched after this list).
  2. Are you running Docker? Is the storage mounted properly from the host?
  3. Do you have any sort of cron job doing cleanup on that filesystem outside of Loki?
  4. Do you see compactor log lines saying that files were deleted?
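
For (1) and (4), here's a rough sketch of the checks, assuming Loki keeps its data under /tmp/loki as in the config above and runs as a systemd unit named loki (adjust the path and unit name to your setup):

# any chunk files older than 11 days still on disk?
find /tmp/loki/chunks -type f -mtime +11 | head

# Loki logs compactor/retention activity to its normal log stream
journalctl -u loki --since "14 days ago" | grep -Ei "compactor|retention|delete"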

Thank you for the answer. Let's see…

  1. You're right! In storage I've found files 17 days old, with data, but Explore in Grafana doesn't show them. The directory /tmp/loki/chunks/fake contains 909054 directories. Every day around 12:00 (roughly 11:56-11:57) I see thousands of directories created within one minute, and they are all empty…
  2. No, not running Docker.
  3. No cron jobs, at least none that I created; /etc/crontab is empty.
  4. I don't know how to find the compactor logs :frowning:

Help still needed. Does anyone have ideas?

I've disabled the compactor ("retention_enabled: false"), but nothing changed: Explore still cannot find data older than 11 days :frowning:

I've tried:

pattern_ingester:
  enabled: true
  metric_aggregation:
    enabled: true
    loki_address: localhost:3100
  lifecycler:
    ring:
      replication_factor: 3

But the messages tell me:

Apr 14 14:41:40 hmax-loki loki[7801]: level=error ts=2025-04-14T11:41:40.666422684Z caller=main.go:70 msg="validating config" err="CONFIG ERROR: invalid pattern_ingester config: pattern ingester replication factor must be 1"

Must be 1??? Why does this setting even exist? :frowning: And how do I increase the number of ingesters? It seems Loki cannot handle the log volume: I'm getting 429 errors :frowning:

And how does that square with the docs (Grafana Loki configuration parameters | Grafana Loki documentation):

# The number of ingesters to write to and read from.
# CLI flag: -distributor.replication-factor
[replication_factor: <int> | default = 3]
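
For what it's worth: the replication_factor quoted from the docs is for the main ingester ring (how many ingesters each log line is written to), and with a single node and filesystem storage it has to stay at 1, which is exactly what common.replication_factor already sets. The pattern ingester has its own ring, and the validation error shows Loki requires that one to be 1. The 429 responses usually mean ingestion rate limiting rather than a shortage of ingesters; those limits live in limits_config. A sketch with purely illustrative values (not a recommendation for your log volume):

limits_config:
  # per-tenant ingestion limits; the distributor returns 429 when these are exceeded
  ingestion_rate_mb: 16
  ingestion_burst_size_mb: 32
  # per-stream limits can also produce 429s for very chatty streams
  per_stream_rate_limit: 5MB
  per_stream_rate_limit_burst: 20MB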

So I did a fresh install from the beginning, without the compactor at all. The logs still only go back 11 days…
New config:


auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: error
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 172.20.0.238
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

limits_config:
  metric_aggregation_enabled: true

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

pattern_ingester:
  enabled: true
  metric_aggregation:
    loki_address: localhost:3100

ruler:
  alertmanager_url: http://localhost:9093

frontend:
  encoding: protobuf

And here's the output of http://172.20.0.238:3100/config:
config.json (75.9 KB)
(attached as .json because .txt is not allowed…)
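
If the attachment is hard to dig through, the running instance can also be asked directly what retention-related settings it actually ended up with; for example (address taken from the post above):

curl -s http://172.20.0.238:3100/config | grep -niE "retention|compactor"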

What is wrong with Loki??? :(((

Here's the data directory info:


[root@hmax-loki2 ~]# find /tmp/loki/chunks/fake -type f | wc -l
745491
[root@hmax-loki2 ~]# find /tmp/loki/chunks/fake -type d | wc -l
929240
[root@hmax-loki2 ~]# df -a
proc 0 0 0 - /proc
sysfs 0 0 0 - /sys
devtmpfs 4096 0 4096 0% /dev
securityfs 0 0 0 - /sys/kernel/security
tmpfs 4065052 0 4065052 0% /dev/shm
devpts 0 0 0 - /dev/pts
tmpfs 1626024 18728 1607296 2% /run
cgroup2 0 0 0 - /sys/fs/cgroup
pstore 0 0 0 - /sys/fs/pstore
efivarfs 256 27 225 11% /sys/firmware/efi/efivars
bpf 0 0 0 - /sys/fs/bpf
/dev/mapper/cs-root 157220864 9942764 147278100 7% /
systemd-1 - - - - /proc/sys/fs/binfmt_misc
debugfs 0 0 0 - /sys/kernel/debug
mqueue 0 0 0 - /dev/mqueue
hugetlbfs 0 0 0 - /dev/hugepages
tracefs 0 0 0 - /sys/kernel/tracing
none 0 0 0 - /run/credentials/systemd-tmpfiles-setup-dev.service
fusectl 0 0 0 - /sys/fs/fuse/connections
configfs 0 0 0 - /sys/kernel/config
none 0 0 0 - /run/credentials/systemd-sysctl.service
/dev/sda2 983040 272424 710616 28% /boot
/dev/sda1 613160 7652 605508 2% /boot/efi
none 0 0 0 - /run/credentials/systemd-tmpfiles-setup.service
binfmt_misc 0 0 0 - /proc/sys/fs/binfmt_misc
tmpfs 813008 0 813008 0% /run/user/0

Lots of empty directories appeared in the chunks dir over the last 3 days, and the logs from 14 to 11 days ago are gone…
Disk usage of / is 7%.

Yesterday vs. today: (screenshots attached)

Hi @szhura,
Please try adding the following to your configuration.

table_manager:
  retention_deletes_enabled: true
  retention_period: 672h
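
For context, table_manager retention applies to the older table-based index stores; with the tsdb store from the schema_config above, retention is normally driven by the compactor. A minimal sketch, reusing the paths from the first config; note this only controls Loki's own deletions, so it may not explain data disappearing while retention was disabled:

compactor:
  working_directory: /tmp/loki/retention
  delete_request_store: filesystem
  retention_enabled: true

limits_config:
  retention_period: 672h   # 28 days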

Nothing changed :,(
Today the logs end at 17:00 on May 1, 2025.

Here's the new config:
config1.json (73.0 KB)

Maybe the problem is in CentOS 9??? Somehow…

What mechanism in Loki or Grafana can delete data from the DB/filesystem?
I mean, besides retention?
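
One deletion mechanism that lives entirely outside Loki and Grafana is worth ruling out here: on CentOS/RHEL, systemd-tmpfiles periodically cleans old files under /tmp, and this setup keeps all of Loki's data under /tmp/loki (path_prefix and chunks_directory). A quick check, assuming the stock CentOS 9 tmpfiles timer and policy:

systemctl status systemd-tmpfiles-clean.timer   # is the periodic /tmp cleanup timer active?
cat /usr/lib/tmpfiles.d/tmp.conf                # stock age policy for /tmp
ls /etc/tmpfiles.d/                             # any local overrides?

If the policy shows an age of around 10 days for /tmp, that would line up with logs vanishing after roughly 11 days regardless of any Loki retention setting; moving path_prefix and chunks_directory to a directory outside /tmp (for example /var/lib/loki, purely as an illustration) would take that mechanism out of the picture.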

Maybe there's an ESXi image somewhere with the Grafana/Loki stack already installed?