Getting rid of {log, time} envelope around JSON messages

Hi folks – I have a Promtail agent set up in my Kubernetes cluster correctly forwarding logs to my Grafana Cloud Loki instance. I switched from an unmodified Grafana Cloud Agent deployed via Helm to an unmodified Promtail instance deployed via Helm, and now my log entries are being wrapped in a {"log": "{\"my\": \"log\"}", "time": ...} wrapper.

The wrapper makes the logs kind of hard to read in Grafana, and the inner JSON seems to have been string-escaped. Is there a way to turn off this wrapper at the Promtail level, so that the raw JSON is the log entry in Grafana, without needing a LogQL query every time?

For a bit of added information:

  • I haven’t modified the default config generated for promtail by helm
  • I am not trying to parse the JSON or add more labels from the JSON, I just want it to display nicely in Grafana
  • I can use the json filter in LogQL to get one step of JSON parsing, but because my log entry is escaped I have to apply it twice, which seems unnecessary and wasn’t needed before
  • My log entries are 98% valid json, but there are a few that are just plain strings
  • I have verified that it is not my system adding the {log: ...} wrapper and that the logs on disk don’t have it
  • I have diffed the configs the Grafana Cloud Agent was running vs the config the promtail instance is running and don’t spot anything, but I am not an expert
  • I would prefer not to stick with the Grafana Cloud Agent, as I soon want to use some of the Promtail-specific features that running plain old Promtail enables, and to keep things decoupled

Thanks for any help you can give me!

See an example here

Let me get back to your question when I’m at my desk. What is that beautiful background image you have there?

@melrose it is a jar of mayonnaise to remind me that there are perfect things in this world… I accept full responsibility for being a weirdo :grinning:

Any advice on getting rid of that wrapper?

Ah, you like the mayonnaise. More the classical mayonnaise or the delicacy mayonnaise?

Hey @harry-gadget - so as far as I understand, you would prefer to just see {"my": "log"} in Loki instead of this wrapped version.

I’m not sure about how to turn off the wrapping, but here’s an alternative way to accomplish this…
You can use a combination of the json and replace pipeline stages:

scrape_configs:
- job_name: your_job
  ...
  pipeline_stages:
  # extract the value of the top-level "log" key into the extracted map as "log_line"
  - json:
      expressions:
        log_line: log
  # match the entire line and replace it with the extracted value
  - replace:
      expression: "(.+)"
      replace: "{{.log_line}}"

The json stage uses a JMESPath expression to extract your log content from the line (log) and assigns it to a field called log_line. This field can then be used in the subsequent replace stage to replace the whole entry.
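Since you mentioned a few of your lines are plain strings rather than JSON, one possible refinement (a sketch I haven’t tested; the {job="your_job"} selector and the “starts with {” line filter are assumptions you’d adapt to your own labels) is to wrap those stages in a match stage so they only run on lines that look like JSON:

scrape_configs:
- job_name: your_job
  ...
  pipeline_stages:
  # only run the json/replace stages on lines starting with "{",
  # so plain-string lines pass through untouched
  - match:
      selector: '{job="your_job"} |~ "^\\{"'
      stages:
      - json:
          expressions:
            log_line: log
      - replace:
          expression: "(.+)"
          replace: "{{.log_line}}"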

Hope that helps

Thanks for the pointer @dannykopping! I was hoping to avoid adding service-specific pipeline stages to my Promtail config because I am running in Kubernetes. I can’t always predict what kind of log entries the various services I run will emit, and the config is already super complicated and generated by the Promtail Helm chart. I am a bit afraid of modifying it to treat some services in some namespaces differently and selectively parse JSON.

Worse, a couple of them are heterogeneous: they log mostly JSON lines, but every so often an error comes along and gets printed in plaintext to stdout.

When I was running the older Grafana Cloud Agent’s Promtail bits, it seemed to be fine with this setup: it automatically parsed lines that were JSON and sent them along as such, and left non-JSON lines alone, never adding a wrapper. Is there any way to get that behaviour back?

Sorry @harry-gadget - I’m not sure. I’ll need to defer to my more experienced colleagues.
@ewelch could you weigh in here pls?

Hey @harry-gadget!

That wrapper you are seeing isn’t added by the Agent; rather, it’s the normal log output created by Docker.

The good news is that this wrapper will be applied to every log line in your Kubernetes cluster, since your container runtime must be Docker.

The best way to handle this is to add the following pipeline stage:

- docker: {}

documentation here

This is basically some syntactic sugar to unwrap that json and then output your original log line.
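In context, that looks something like this (a sketch reusing the your_job placeholder from the example above; the rest of your helm-generated scrape config stays as it is):

scrape_configs:
- job_name: your_job
  ...
  pipeline_stages:
  # unwrap docker's json-file format ({"log": ..., "stream": ..., "time": ...}):
  # set the timestamp and stream label, and output the inner "log" value as the line
  - docker: {}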

We also have a

- cri: {}

stage which can handle containerd runtimes if your Kubernetes cluster isn’t running Docker.

cri documentation here
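Used the same way (again just a sketch):

pipeline_stages:
# parse the CRI log format (timestamp stream flags content)
# used by containerd and other non-docker runtimes
- cri: {}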


Ah ha, that makes all the sense in the world! My guess is the Grafana Cloud Agent config I was running had that docker pipeline stage in it, and the new Helm chart one doesn’t include it by default. I will add that. Thanks so much for the tip!

If your logs are not Docker/CRI, is the best way to handle them the replace stage as shown above? I have WebSphere logs emitted as JSON that are full of \n and \t characters, which makes them difficult to read.
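Something like this is what I have in mind (untested; it assumes the literal two-character escape sequences \n and \t appear in the line, and collapses each of them to a space):

pipeline_stages:
# replace each captured literal "\n" or "\t" sequence with a space
- replace:
    expression: '(\\n|\\t)'
    replace: ' '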
