entry too far behind, entry timestamp is: [], oldest acceptable timestamp

I’ve looked at many posts about this and changed many configurations, but I’m still getting this error:

24 06:23:12.850955 +0000 UTC ignored, reason: 'entry too far behind, entry timestamp is: 2025-09-24T06:23:12Z, oldest acceptable timestamp is: 2025-09-25T06:19:16Z', user 'fake', total ignored: 1 out of 1 for stream: {module="Alarms", service_name="unknown_service", severity="DEBUG"}

I have my config:

limits_config:
  reject_old_samples: false
  reject_old_samples_max_age: 16800h
  retention_period: 16800h
  allow_structured_metadata: false
  ingestion_burst_size_mb: 16
  ingestion_rate_mb: 16
  volume_enabled: true
  unordered_writes: true

So unordered_writes and reject_old_samples are set.

The compose file has command: -config.file=/etc/loki/config.yaml
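The relevant service looks roughly like this (the image tag and mount paths here are simplified, not my exact file):

services:
  loki:
    image: grafana/loki:2.9.0                    # tag here is illustrative
    command: -config.file=/etc/loki/config.yaml
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/config.yaml
      - ./loki-data:/loki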

And if I attach to the container and run:

/ $ cat /etc/loki/config.yaml

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  wal:
    dir: /loki/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h
  max_chunk_age: 1h
  chunk_target_size: 1048576
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h
  filesystem:
    directory: /loki/chunks

compactor:
  working_directory: /loki/boltdb-shipper-compactor

limits_config:
  reject_old_samples: false
  reject_old_samples_max_age: 16800h
  retention_period: 16800h
  allow_structured_metadata: false
  ingestion_burst_size_mb: 16
  ingestion_rate_mb: 16
  volume_enabled: true
  unordered_writes: true

ruler:
  storage:
    type: local
    local:
      directory: /loki/rules
  rule_path: /loki/rules-temp
  alertmanager_url: localhost
  ring:
    kvstore:
      store: inmemory
  enable_api: true

That’s the full config, pasted from the shell inside the container.

And a little bit of context: we have thousands of IoT devices that send logs to Loki every 6 hours (they store the logs locally and send them when they can). If a device has trouble connecting, it may take days to send.

I would recommend that you:

  1. Make sure the logs from your IoT devices are sent in order, from oldest to newest. If you are using a log agent, this is most likely already the case.
  2. Make sure you have a label that uniquely identifies each IoT device.

The reason #2 is important is that in Loki you cannot send older logs to a stream if newer logs are already present (a log stream being the set of logs that share the same label set).
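For example, if the devices (or a collector in front of them) ship through Promtail, a per-device label in the scrape config is enough to split the streams. This is only a sketch: device_id, its value, and the file path are placeholders, not something from your setup.

clients:
  - url: http://loki:3100/loki/api/v1/push    # assumes Loki is reachable at this address

scrape_configs:
  - job_name: device_logs
    static_configs:
      - targets: [localhost]
        labels:
          module: Alarms
          device_id: device-0001              # placeholder: set a unique value per device
          __path__: /var/log/device/*.log     # placeholder path to the stored device logs

With a unique device_id, each device gets its own stream, so one device’s backlog never has to compete with another device’s newer entries.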

Thanks @tonyswumac, we will have a look, especially at #2, since we don’t have different labels for each device.

But I thought unordered_writes was supposed to cover that case. Isn’t that true?

It only covers it up to a point, because once a chunk file is written it is no longer mutable. If you attempt to send older logs to a stream whose newer logs have already been written to storage, they are rejected.

It is explained here: The concise guide to Loki: How to work with out-of-order and older logs | Grafana Labs

Any logs received within one hour of the most recent log received for a stream will be accepted and stored. Any log more than one hour older than the most recent log received for a stream will be rejected with an error that reads: “Entry too far behind.”
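If I remember that article correctly, the acceptance window for out-of-order entries is tied to max_chunk_age (roughly half of it), so raising it in the ingester block should widen the window somewhat. A sketch, not something I have verified against your version:

ingester:
  max_chunk_age: 8h   # assumption: entries up to roughly half of this (~4h) behind the
                      # newest entry in a stream should still be accepted

That still won’t help with logs arriving days late, though. For those, per-device streams (point #2 above) are the reliable fix, since the oldest acceptable timestamp is tracked per stream.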
