How do you add the log level as a label?

This variant doesn’t work:

      loki.source.kubernetes "pod_logs" {
        targets    = discovery.relabel.pod_logs.output
        forward_to = [loki.process.add_level_label.receiver]
        clustering {
            enabled = true
        }
      }
      loki.process "add_level_label" {
        stage.labels {
            values = {
                level = "level",
            }
        }
        forward_to = [loki.write.default.receiver]
      }

This variant adds the label, but produces an error in the logs:

      loki.source.kubernetes "pod_logs" {
        targets    = discovery.relabel.pod_logs.output
        forward_to = [loki.process.add_level_label.receiver]
        clustering {
            enabled = true
        }
      }
      loki.process "add_level_label" {
        stage.logfmt {
            mapping = {
                extracted_level = "level",
            }
        }
        stage.labels {
            values = {
                level = "extracted_level",
            }
        }
        forward_to = [loki.write.default.receiver]
      }

ts=2024-09-03T16:20:13.58712616Z level=error msg="failed to decode logfmt" component_path=/ component_id=loki.process.add_level_label component=stage type=logfmt err="logfmt syntax error at pos 51 on line 1: unexpected '\"'"

The error itself is pretty clear: unexpected '"'

But how do I deal with this error? I can't understand what's wrong.

ts=2024-09-03T19:54:03.827772834Z level=info component_path=/ component_id=loki.echo.echo receiver=loki.echo.echo entry="10.10.0.8 - - [03/Sep/2024:07:21:32 +0000] \"GET /api/live/ws HTTP/1.1\" 401 105 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 OPR/112.0.0.0\" 701 0.006 [loki-grafana-80] [] 10.188.0.67:3000 105 0.006 401 89532b0e1f349ff927f34b6d6809047f\n" labels="{container=\"controller\", container_image=\"registry.k8s.io/ingress-nginx/controller:v1.11.2@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce\", container_port=\"443\", container_port_name=\"https\", instance=\"default/ingress-nginx-controller-5bdc4f464b-cd72s:controller\", job=\"loki.source.kubernetes.pod_logs\", namespace=\"default\", pod=\"ingress-nginx-controller-5bdc4f464b-cd72s\", pod_controller=\"ingress-nginx-controller-5bdc4f464b\", pod_controller_kind=\"ReplicaSet\", pod_host_ip=\"10.10.0.10\", pod_phase=\"Running\", pod_ready=\"true\"}"
ts=2024-09-03T19:54:03.828265963Z level=debug msg="extracted data debug in logfmt stage" component_path=/ component_id=loki.process.add_level_label component=stage type=logfmt "extracted data"="map[container:server container_image:quay.io/argoproj/argocd:v2.12.1 container_port:8080 container_port_name:server extracted_level:info instance:default/argocd-server-5f855545f7-7ktb2:server job:loki.source.kubernetes.pod_logs namespace:default pod:argocd-server-5f855545f7-7ktb2 pod_controller:argocd-server-5f855545f7 pod_controller_kind:ReplicaSet pod_host_ip:10.10.0.10 pod_phase:Running pod_ready:true]"
ts=2024-09-03T19:54:03.828679322Z level=error msg="failed to decode logfmt" component_path=/ component_id=loki.process.add_level_label component=stage type=logfmt err="logfmt syntax error at pos 45 on line 1: unexpected '\"'"
ts=2024-09-03T19:54:03.829039845Z level=debug msg="extracted data debug in logfmt stage" component_path=/ component_id=loki.process.add_level_label component=stage type=logfmt "extracted data"="map[container:repo-server container_image:quay.io/argoproj/argocd:v2.12.1 container_port:8084 container_port_name:metrics extracted_level:info instance:default/argocd-repo-server-7467d5898c-tc8q9:repo-server job:loki.source.kubernetes.pod_logs namespace:default pod:argocd-repo-server-7467d5898c-tc8q9 pod_controller:argocd-repo-server-7467d5898c pod_controller_kind:ReplicaSet pod_host_ip:10.10.0.11 pod_phase:Running pod_ready:true]"

Can you share some samples of your logs, please?

It's just a copy-paste from Alloy's stdout. I'm collecting the logs with loki.source.kubernetes and don't know yet how to pull out samples of the original log lines.
The logs come from all the pods, and this error doesn't say which pod it came from.
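The only idea I have so far is to wrap the parsing stages in a stage.match block, so that stage.logfmt only runs on streams I know are logfmt and everything else passes through untouched. This is just a rough sketch, and the selector below is only an example, not my real config:

      loki.process "add_level_label" {
        // Only the streams matched by the selector go through the nested
        // stages; all other streams are forwarded unchanged.
        stage.match {
          selector = "{container=\"server\"}"

          stage.logfmt {
            mapping = {
              extracted_level = "level",
            }
          }

          stage.labels {
            values = {
              level = "extracted_level",
            }
          }
        }

        forward_to = [loki.write.default.receiver]
      }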

If you are not sure where the problem comes from, it might be a better idea to just forward everything without any processing and see if you can find out more.
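For example, something like this (a minimal sketch reusing your targets list; loki.echo simply prints every entry together with its labels to Alloy's stdout):

      loki.source.kubernetes "pod_logs" {
        targets    = discovery.relabel.pod_logs.output
        forward_to = [loki.echo.debug.receiver]
      }

      // Debug sink: writes each received log line and its label set to stdout.
      loki.echo "debug" { }

Once you can see the raw lines and which labels they carry, it should be easier to tell which streams are not logfmt and are breaking the stage.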