Setting a timestamp that is split into 2 different JSON properties

Given the following Log4j2 log line in JsonLayout:

{
    "instant": {
        "epochSecond": 1694203862,
        "nanoOfSecond": 864285000
    },
    "thread": "http-nio-8080-exec-9",
    "level": "INFO",
    "loggerName": "com.rumpf.thyme.controller.HomeController",
    "message": "Requesting User 6",
    "endOfBatch": false,
    "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
    "threadId": 44,
    "threadPriority": 5
}

As you can see, the timestamp is split into a Unix timestamp in epochSecond and a fractional part in nanoOfSecond. I would like to combine them to get nanosecond precision. Is that possible without adding extra fields? I want to avoid having timestamps in different formats on the same line.
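For the line above, the combined value would be

1694203862 s + 864285000 ns = 1694203862864285000 ns

i.e. 2023-09-08T20:11:02.864285Z.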

Are you asking for advice on parsing your log with LogQL after writing to Loki, or parsing your log with Promtail before writing to Loki?

In general I recommend having logs written with the correct time to begin with; then you have one less thing to worry about when parsing them. You should be able to parse the log with a json stage, use the template stage to combine the epoch and nano values, then use the result for the timestamp. Something like this (not tested):

pipeline_stages:
  # First pass: extract the nested instant object as raw JSON.
  - json:
      expressions:
        instant:
  # Second pass: pull the two timestamp parts out of instant.
  - json:
      expressions:
        epochSecond:
        nanoOfSecond:
      source: instant
  # Concatenate the two parts into a single string.
  - template:
      source: timestring
      template: '{{ .epochSecond }}{{ .nanoOfSecond }}'
  # Parse the combined string as a Unix timestamp in nanoseconds.
  - timestamp:
      source: timestring
      format: UnixNs


It’s the latter: parsing the log with Promtail before writing to Loki.

I tried out your config suggestion with the template, but it seems to work only when nanoOfSecond is 9 digits long. For values smaller than 100,000,000, the concatenation produces a wrong result and messes up the log order.

Example:

Input (commas added to make the numbers easier to read):

{
    "instant": {
        "epochSecond": 1,694,203,862,
        "nanoOfSecond": 8,642,850
    }
}
Expected: 1694203862.00864285
Actual:   1694203862.864285

8,000,000 ns -> 8,000 us -> 8 ms -> 0.008 s
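For the concatenation to come out right, nanoOfSecond would have to be left-padded with zeros to 9 digits first:

1694203862 + 008642850 -> 1694203862008642850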

I guess I will add a new JSON property in the Log4j2 layout. Maybe I can find a way to get rid of instant altogether. The alternative would be to set the parameter includeTimeMillis="true", but nowadays you need at least microsecond precision, because a lot can happen within the same millisecond.
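Something like this event template for the JsonTemplateLayout (from the log4j-layout-template-json module) might work instead of JsonLayout; as far as I can tell, its timestamp resolver can emit the epoch in a chosen unit. An untested sketch, and the field name time_ns is my own choice:

{
    "time_ns": {
        "$resolver": "timestamp",
        "epoch": {
            "unit": "nanos"
        }
    },
    "level": {
        "$resolver": "level",
        "field": "name"
    },
    "message": {
        "$resolver": "message",
        "stringified": true
    }
}

That would give Promtail a single nano-precision field to use with format: UnixNs directly.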

Thank you for your response @tonyswumac

I continued the experiment, this time trying out one of the template functions to print the nanoseconds zero-padded to a fixed width.

pipeline_stages:
  - json:
      expressions:
        seconds: instant.epochSecond
        nanos: instant.nanoOfSecond
  - template:
      source: timens
      template: '{{ printf "%d%09d" .seconds .nanos }}'
  - timestamp:
      source: timens
      format: UnixNs

Promtail, however, cannot parse the template output (at least I got the leading zero for the nanos):

2023-11-25 12:54:45 level=debug ts=2023-11-25T11:54:45.117513457Z caller=timestamp.go:196 component=file_pipeline msg="failed to parse time" err="strconv.ParseInt: parsing \"%!d(string=1700910928)%!d(string=027905700)\": invalid syntax" format=UnixNs value="%!d(string=1700910928)%!d(string=027905700)"

I found a solution now. The key was to use the Sprig math functions to get an actual integer instead of a formatted string: the json stage extracts values as strings (which is why printf's %d verb choked above), while Sprig's mul and add cast their arguments to integers.

I multiply seconds by 1,000,000,000 to convert them to nanoseconds, then add nanos:

pipeline_stages:
  # Extract both timestamp parts; the json stage yields them as strings.
  - json:
      expressions:
        seconds: instant.epochSecond
        nanos: instant.nanoOfSecond
  # mul/add cast the strings to integers and combine them into one nanosecond value.
  - template:
      source: timens
      template: "{{ add (mul .seconds 1000000000) .nanos }}"
  # Parse the combined value as a Unix timestamp in nanoseconds.
  - timestamp:
      source: timens
      format: UnixNs
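As a sanity check with the values from the failed run above: 1700910928 * 1000000000 + 27905700 = 1700910928027905700, which the timestamp stage parses as UnixNs without complaint.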