Log entries not showing in Grafana when I parse the timestamp from the logs

On a brand-new local promtail + Loki + Grafana setup, I'm trying to parse a small log file produced a week ago to get familiar with Loki and Grafana.
If I use the following pipeline, I can see the log entries in Grafana, parsed correctly (note I'm using timestamp2 as a label here, so it never feeds a timestamp stage). The entries show a "timestamp2" label, and their timestamp is the current time, not the time parsed from the log file:

    - regex:
        expression: '^(?P<level>\w+) (?P<timestamp2>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) (?P<file_path>[^\s]+):(?P<line_number>\d+): (?P<message>.*)$'
    - labels:
    - output:
        source: message

So I believe the regex, at least, is correct, since it's able to parse the log information into labels.

However, when I try using the timestamp from the log file:

    - regex:
        expression: '^(?P<level>\w+) (?P<timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) (?P<file_path>[^\s]+):(?P<line_number>\d+): (?P<message>.*)$'
    - timestamp:
        source: timestamp
        format: "2006/01/02 15:04:05"
    - labels:
    - output:
        source: message

then I'm not able to see any log entries in Grafana. I've tried searching over the last 30 days (the log entries are from last week), but I still don't see them.
Manually searching the WAL files in Loki, I can see they contain the entries I'm trying to display, but for some reason I'm still not able to display them in Grafana.
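One thing I'm noting here for completeness: the timestamp stage has options controlling the timezone it assumes and what happens when parsing fails, so a parse failure can go unnoticed. A sketch of the same stage with both made explicit (the location value below is only an example, not something from my setup):

    - timestamp:
        source: timestamp
        format: "2006/01/02 15:04:05"
        # example timezone: the log lines carry no zone info, so one is assumed
        location: "Etc/UTC"
        # on parse failure, reuse the previous entry's timestamp instead of erroring
        action_on_failure: fudge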

Between tests, I'm deleting promtail's positions file and Loki's /tmp/loki folder.
The Loki configuration I'm using is the following:

    auth_enabled: false

    server:
      http_listen_port: 3100

    common:
      ring:
        kvstore:
          store: inmemory
      replication_factor: 1
      path_prefix: /tmp/loki

    schema_config:
      configs:
      - from: 2020-05-15
        store: tsdb
        object_store: filesystem
        schema: v13
        index:
          prefix: index_
          period: 24h

    storage_config:
      filesystem:
        directory: /tmp/loki/chunks

    limits_config:
      reject_old_samples: false
      reject_old_samples_max_age: 43800h
      retention_period: 744h

Does anybody have a suggestion on how to troubleshoot this further, or a minimal example I can follow?

  1. Do you have any sample log?

  2. While Loki can accept logs with past timestamps, you cannot send older logs to the same log stream (a log stream being defined as logs with the same set of labels). For example, say you already sent some logs to Loki with a new timestamp (the current time). If you then send the same logs with the parsed timestamp (which would be older), they will be discarded. You can usually see this in the ingester logs.
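If that's what is happening, one way to get a clean stream between tests without wiping Loki is to change the label set, e.g. with a static_labels stage in the promtail pipeline (the label name and value below are just placeholders):

    - static_labels:
        test_run: "2"   # placeholder label; bump the value to start a fresh log stream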

The logs are not that special, for example:

INFO 2024/05/31 09:03:36 src/main.rs:190: server started

and they are already ordered. I'm only processing one log file, and whenever I try something new I delete the promtail positions file and Loki's entire /tmp/loki folder.
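(For reference, the positions file path comes from promtail's config; a minimal sketch with an example path, not my actual one:)

    positions:
      filename: /tmp/positions.yaml   # example path; deleting this file makes promtail re-read files from the start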
After tinkering and reading various other posts, I believe the following is happening (can you please confirm or correct?).
The log file is not big enough for its chunk to be flushed immediately. In one of my tests, the logs suddenly appeared in Grafana after some time (~30 min); I do not see them right after starting everything (with all state files deleted).
Reading about the ingester configuration, I found the following:

  # How long chunks should sit in-memory with no updates before being flushed if
  # they don't hit the max block size. This means that half-empty chunks will
  # still be flushed after a certain period as long as they receive no further
  # activity.
  # CLI flag: -ingester.chunks-idle-period
  chunk_idle_period: 1m
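In the Loki config file this sits under the top-level ingester block, i.e. roughly:

    ingester:
      chunk_idle_period: 1m   # flush idle chunks after 1 minute instead of the default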

I set the value to 1 min and, after restarting everything, I could see the logs in Grafana much more quickly. Toggling this setting on/off seems to have a direct effect, so I believe it is related.
Finally, I saw there is also an API to flush chunks: Loki HTTP API | Grafana Loki documentation.
If I use that API, I can also see the logs much more quickly.
Does this sound like what was causing my logs not to show up yet?

This page, The concise guide to Loki: How to work with out-of-order and older logs | Grafana Labs, also has some information that looks applicable here.
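Based on that page, there also seems to be a per-tenant limit controlling whether out-of-order writes are accepted at all (on by default in recent Loki versions, if I understand correctly):

    limits_config:
      unordered_writes: true   # accept out-of-order entries within the allowed ingestion window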