Loki drops duplicate log entries

When Loki receives two identical log lines with the same timestamp, it seems to keep just the first one.
To reproduce, run

curl -XPOST http://localhost:3100/loki/api/v1/push -H 'Content-Type: application/json' -d '{ "streams": [{ "stream": { "service_name": "test" }, "values": [["1740044173262043400", "TESTING"]] }] }'

twice and observe that only one log line exists in Grafana.
Querying Loki directly also returns only a single result:

curl -G -s 'http://localhost:3100/loki/api/v1/query_range' --data-urlencode 'query={service_name="test"}' --data-urlencode 'start=1740044173262043000' --data-urlencode 'end=1740044174262043000' | jq .

Is it possible to override this behaviour? I tried setting limits_config.increment_duplicate_timestamp: true, but it seemed to have no effect.

From my personal testing, I don’t believe that’s what that configuration option is for. According to the documentation:

Alter the log line timestamp during ingestion when the timestamp is the same as the previous entry for the same stream.

It is for logs arriving at the same time in the same stream; it doesn’t say it’s for identical logs. Also, looking at the code (loki/pkg/distributor/distributor.go at f6fcc1194e80935b4d6206901e89cee840cc08be · grafana/loki · GitHub), it looks like it works as described in the documentation: the timestamp is only incremented if the log lines are “different”.
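
For illustration, here is roughly what I would expect based on that reading (a sketch, not something I have verified against every Loki version): with increment_duplicate_timestamp enabled, pushing two different lines with the same timestamp to the same stream should keep both, with the second timestamp nudged forward, while two identical lines are still collapsed into one entry.

# same stream, same timestamp, DIFFERENT lines: with
# limits_config.increment_duplicate_timestamp: true, both should be kept,
# the second entry's timestamp nudged forward by one nanosecond
curl -XPOST http://localhost:3100/loki/api/v1/push -H 'Content-Type: application/json' -d '{ "streams": [{ "stream": { "service_name": "test" }, "values": [["1740044173262043400", "LINE A"], ["1740044173262043400", "LINE B"]] }] }'
# same stream, same timestamp, IDENTICAL lines: still deduplicated to one entry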

Hopefully someone with more knowledge can comment on this.

Thank you for looking into it!
Not explicitly mentioning that the messages were from the same stream was a miscommunication on my part; I apologize.
The reason for the original question was that in my use case (a system with a very coarse timer) there can be legitimate duplicate log lines. I would like to be able to log them too, though if I can’t, I can live with that.
What really surprised me was that this deduplication also happens for identical log lines in the same stream with the same timestamp but different structured metadata.
Fixing that was fairly easy, though: I put what I originally sent as structured metadata into the log line itself.
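
Roughly what that change looked like, as a sketch (the request_id field is just an invented example): instead of attaching the distinguishing value as structured metadata, I append it to the line itself, so the two entries are no longer byte-identical.

# before (assumption: structured metadata as the optional third element of each value):
# identical lines that differ only in metadata were deduplicated
curl -XPOST http://localhost:3100/loki/api/v1/push -H 'Content-Type: application/json' -d '{ "streams": [{ "stream": { "service_name": "test" }, "values": [["1740044173262043400", "TESTING", {"request_id": "1"}], ["1740044173262043400", "TESTING", {"request_id": "2"}]] }] }'
# after: the value is part of the log line, so the lines differ and both are kept
curl -XPOST http://localhost:3100/loki/api/v1/push -H 'Content-Type: application/json' -d '{ "streams": [{ "stream": { "service_name": "test" }, "values": [["1740044173262043400", "TESTING request_id=1"], ["1740044173262043400", "TESTING request_id=2"]] }] }'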

I think the only solution to that is to have your log source produce millisecond- or nanosecond-precision timestamps, unfortunately.
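
For example (a sketch with arbitrary timestamps): if the source emits nanosecond-precision timestamps, two otherwise identical lines stop being exact duplicates and both should be stored.

# the two entries differ by one nanosecond, so neither is dropped
curl -XPOST http://localhost:3100/loki/api/v1/push -H 'Content-Type: application/json' -d '{ "streams": [{ "stream": { "service_name": "test" }, "values": [["1740044173262043400", "TESTING"], ["1740044173262043401", "TESTING"]] }] }'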