Loki logfmt cannot parse its own gateway HTTP POST log message


When querying logs from containers and using logfmt, Loki's NGINX gateway log messages are flagged as erroneous with Error: LogfmtParserErr. Loki is deployed as a Compose application with separate read and write components and a gateway in front (the example deployment from GitHub).

All POST events from the Loki container are shown with a warning sign in the Grafana dashboard. Do I need to reformat the log for it to be accepted by logfmt?

	2022-12-14 13:33:50 - - [14/Dec/2022:11:33:50 +0000]  204 "POST /loki/api/v1/push HTTP/1.1" 0 "-" "promtail/2.6.1" "-"
Log labels
__error__	LogfmtParserErr
application	docker
environment	poc
filename	/var/lib/docker/containers/80a25fea1e915e83996654be6de864d7c4aaff86eb8adf6edb6a02fe54c2fcae/80a25fea1e915e83996654be6de864d7c4aaff86eb8adf6edb6a02fe54c2fcae-json.log
hostname	*****************-01
job	container
source	docker
stream	stderr
tenant	local
Detected fields
ts	2022-12-14T11:33:50.995Z
tsNs	1671017630995797208


{job=~"container"} | logfmt

Promtail configuration:

- job_name: docker
  pipeline_stages:
    - docker: {}
  static_configs:
    - targets:
        - localhost
      labels:
        environment: "poc"
        job: container
        source: docker
        application: docker
        tenant: local
        hostname: ${HOSTNAME}
        __path__: /var/lib/docker/containers/*/*log

If I restart the container, I do see some correct log messages indicating that the container has restarted, next to the erroneous NGINX HTTP access log messages. So it looks like those log messages are not parsed correctly. This is how they look in the container's JSON log file:

{"log":" - - [14/Dec/2022:13:05:30 +0000]  204 \"POST /loki/api/v1/push HTTP/1.1\" 0 \"-\" \"promtail/2.6.1\" \"-\"\n","stream":"stderr","time":"2022-12-14T13:05:30.211684758Z"}
{"log":" - - [14/Dec/2022:13:05:31 +0000]  204 \"POST /loki/api/v1/push HTTP/1.1\" 0 \"-\" \"promtail/2.6.1\" \"-\"\n","stream":"stderr","time":"2022-12-14T13:05:31.511070958Z"}
{"log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration\n","stream":"stdout","time":"2022-12-14T13:05:32.459302703Z"}
{"log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/\n","stream":"stdout","time":"2022-12-14T13:05:32.459333856Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh\n","stream":"stdout","time":"2022-12-14T13:05:32.460716519Z"}
{"log":"10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled\n","stream":"stdout","time":"2022-12-14T13:05:32.463260431Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh\n","stream":"stdout","time":"2022-12-14T13:05:32.463281285Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh\n","stream":"stdout","time":"2022-12-14T13:05:32.469205859Z"}
{"log":"/docker-entrypoint.sh: Configuration complete; ready for start up\n","stream":"stdout","time":"2022-12-14T13:05:32.469892124Z"}
{"log":" - - [14/Dec/2022:13:05:32 +0000]  204 \"POST /loki/api/v1/push HTTP/1.1\" 0 \"-\" \"promtail/2.6.1\" \"-\"\n","stream":"stderr","time":"2022-12-14T13:05:32.621083529Z"}
{"log":" - - [14/Dec/2022:13:05:33 +0000]  204 \"POST /loki/api/v1/push HTTP/1.1\" 0 \"-\" \"promtail/2.6.1\" \"-\"\n","stream":"stderr","time":"2022-12-14T13:05:33.7111807Z"}
{"log":" - - [14/Dec/2022:13:05:35 +0000]  204 \"POST /loki/api/v1/push HTTP/1.1\" 0 \"-\" \"promtail/2.6.1\" \"-\"\n","stream":"stderr","time":"2022-12-14T13:05:35.010451448Z"}
  1. The logfmt format is key=value pairs separated by spaces, which is not what your NGINX log looks like. For a custom log format, write your own pattern.

  2. Since you are sending all container logs, there will be some that won't parse (for example, your stdout and stderr logs are likely in different formats). You can include __error__ = "" in your filter to remove the log lines that fail to parse.
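For example, with the query from earlier in this thread, dropping the lines that fail to parse looks like:

    {job="container"} | logfmt | __error__ = ""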

How is it that the NGINX logs are not key=value? Or does logfmt explicitly expect the log line to be in key=value format?

This is the log excerpt from the same container, running the Loki gateway component. So if I understand the situation, I have to update the Promtail config for job_name: docker and add an additional match: stage to look for NGINX output and reformat it into key=value form. Something like:

{"log":" host= time=[14/Dec/2022:13:05:32 +0000] http_code=204 request=\"POST /loki/api/v1/push HTTP/1.1\" something=0 agent=\"promtail/2.6.1\" ","stream":"stderr","time":"2022-12-14T13:05:32.621083529Z"}

Please refer to the LogQL documentation: Log queries | Grafana Loki documentation

You do not need to change your Promtail configuration to reformat your logs, nor do I recommend it. You can write your own pattern to parse the logs after they land in Loki.
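A sketch of such a query, assuming the gateway log layout shown in the samples above (the field names are illustrative, and the literal spacing in the pattern has to match your log lines exactly):

    {job="container", stream="stderr"}
      | pattern `<remote_addr> - <remote_user> [<time_local>]  <status> "<request>" <body_bytes_sent> "<http_referer>" "<http_user_agent>" "<http_x_forwarded_for>"`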

Not sure I would agree. This is a log dashboard given to end users. Therefore it needs to: 1. adhere to KISS, and 2. be universal enough to show all logs.

In order to have that, I need my logs to be uniform. And for the most part they really are: Windows and IIS, systemd, and rsyslog sources. It is just the Loki NGINX container in the container stream that stands out. So I would rather fix this one in the docker: {} pipeline stage than write complex LogQL to handle exceptions in the data stream.

Hm… I have found another issue, and I think this is why the logs are not parsed. I have updated the pipeline to restructure the logs, and I still get an error if the message contains double quotes.

To test, I have inserted a replacement string. If this test string contains double quotes, then logfmt fails to parse and shows only "Test string " as the detected msg field. If the string does not contain quotes, then the log restructuring works and logfmt accepts the log stream.

Question: how do I quote inside a template pipeline stage?

An escaped single quote does not work, and Promtail fails to start:

'{{ if eq .tmp_msg "" }}{{ .Value }}{{ else }}level=info ts=2022-12-16T07:15:37.503910058Z caller=liauliau msg=\'{{ .tmp_msg }}\'{{ end }}'
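One option (assuming your Promtail version ships the Replace function listed in the template stage documentation) would be to backslash-escape the double quotes inside the captured value instead of switching to single quotes. Untested here, a sketch only:

    - template:
        source: output
        template: '{{ if eq .tmp_msg "" }}{{ .Value }}{{ else }}level=info msg="{{ Replace .tmp_msg "\"" "\\\"" -1 }}"{{ end }}'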

Docker log processing Pipeline:

    - docker: {}
    - match:
        selector: '{stream="stderr"}'
        stages:
          - regex:
              # {"log":" - - [14/Dec/2022:13:05:32 +0000]  204 \"POST /loki/api/v1/push HTTP/1.1\" 0 \"-\" \"promtail/2.6.1\" \"-\"\n","stream":"stderr","time":"2022-12-14T13:05:32.621083529Z"}
              # $remote_addr - $remote_user [$time_local]  $status "$request" $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"
              # Because we run Promtail with --config.expand-env=true
              expression: '^(?P<tmp_msg>[\w\.]+ +-[^ ]* +- +\[.*\] +[\d]+.+(?:POST|GET|PUT|PATCH|DELETE)[^ ]* +[^ ]* +[^ ]*.+[\d]+ +"[^"]*" +"[^"]*" +"[^"]*")?'
              source: output
          - replace:
              source: tmp_msg
              expression: '^(.*)$'
              replace: 'Test string "Test string in quotes" continuation'
          - template:
              source: output
              template: '{{ if eq .tmp_msg "" }}{{ .Value }}{{ else }}level=info ts=2022-12-16T07:15:37.503910058Z caller=liauliau msg="{{ .tmp_msg }}"{{ end }}'
          - output:
              source: output

So the issue can be closed :smiley:

I have updated the NGINX logs to NOT contain double quotes (") and reformatted the log messages using single quotes ('). This allows logfmt to properly parse the container output with the container pipeline containing just this:

    - docker: {}
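Something along these lines in the NGINX config produces that kind of output (a sketch only; the field layout is illustrative, not my exact directive):

    log_format logfmt_like "remote_addr=$remote_addr remote_user=$remote_user "
                           "time='$time_local' status=$status request='$request' "
                           "bytes=$body_bytes_sent referer='$http_referer' "
                           "agent='$http_user_agent' forwarded='$http_x_forwarded_for'";
    access_log /dev/stderr logfmt_like;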

I did not expect that double quotes in a string would make the string parser stumble; however, apparently this is how logfmt-compatible logs need to be structured.
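That behaviour follows from the logfmt grammar: inside a double-quoted value, embedded quotes must be backslash-escaped, otherwise the first unescaped quote terminates the value. A minimal illustrative parser (a sketch of the rule, not Loki's actual implementation; parse_logfmt is a made-up helper):

```python
import re

def parse_logfmt(line):
    """Minimal logfmt sketch: key=value pairs separated by spaces.

    Values may be double-quoted to contain spaces; an embedded quote
    must be written as \\" or it ends the value early.
    """
    pairs = {}
    # key= followed by either a "quoted value" (with \" escapes) or a bare token
    for m in re.finditer(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S*))', line):
        if m.group(2) is not None:                       # quoted value
            pairs[m.group(1)] = m.group(2).replace('\\"', '"')
        else:                                            # bare value
            pairs[m.group(1)] = m.group(3)
    return pairs

# Properly escaped inner quotes survive parsing:
ok = parse_logfmt(r'level=info msg="Test \"quoted\" string"')
# An unescaped inner quote closes the value at the first quote it meets,
# so only "Test " is recovered -- the truncation seen in Grafana:
bad = parse_logfmt('level=info msg="Test "quoted" string"')
```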

Just tested with syslog: a quoted string also breaks the logfmt parser.

for i in auth authpriv cron daemon kern lpr mail mark news syslog user uucp local0 local1 local2 local3 local4 local5 local6 local7; do
    for k in debug info notice warning err crit alert emerg; do
        logger -p $i.$k "Test daemon message \"quoted string\" facility $i priority $k"
    done
done

Logs are retrieved with

- job_name: system
  static_configs:
    - targets:
        - localhost
      labels:
        environment: "poc"
        job: system
        source: varlog
        __path__: /var/log/*.log
        hostname: ${HOSTNAME}
        __default_severity: info
  relabel_configs:
    # level and __default_severity are joined with ";"; the regex ;(.*)
    # only matches when level is empty, so the default fills it in.
    - source_labels: [level, __default_severity]
      action: replace
      separator: ";"
      regex: ;(.*)
      replacement: $1
      target_label: level