Break logs by '\n' using Grafana Alloy

Hi!

I have log entries like:

2025-05-07 11:20:37.960 {\"unix\":1746612931, \"msg\":\"*** Starting uWSGI 2.0.28 (64bit) on [Wed May  7 10:15:29 2025] ***\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"compiled with version: 12.2.0 on 05 May 2025 06:05:35\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"os: Linux-6.1.132-147.221.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Apr  8 13:14:54 UTC 2025\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"nodename: social-backend-5759ff54bc-gbb9f\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"machine: x86_64\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"clock source: unix\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"detected number of CPU cores: 2\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"current working directory: /app\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"writing pidfile to /tmp/uwsgi-master.pid\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"detected binary path: /app/.venv/bin/uwsgi\n\", \"date\":\"07/05/2025 06:15:31\"}\n{\"unix\":1746612931, \"msg\":\"!!! no internal routing support, rebuild with pcre support !!!\n\", \"date\":\"07/05/2025 06:15:31\"}\n

This is how it appears in Grafana.
How can I split these lines on '\n'? One entry is actually multiple joined lines, which makes it huge.

Also, errors are not treated as errors (they are shown as green; see Imgur screenshot).

How can I mark them as errors?

Here are the contents of my Alloy Helm values file:

alloy:
  mounts:
    varlog: true
  configMap:
    content: |
      logging {
        level  = "info"
        format = "json"
      }

      discovery.kubernetes "pods" {
        role = "pod"
      }

      local.file_match "node_logs" {
        path_targets = [{
            // Monitor syslog to scrape node-logs
            __path__  = "/var/log/syslog",
            job       = "node/syslog",
            node_name = sys.env("HOSTNAME"),
            cluster   = "dev-eks",
        }]
      }

      loki.source.file "node_logs" {
        targets    = local.file_match.node_logs.targets
        forward_to = [loki.write.endpoint.receiver]
      }

      // discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.
      // If no rules are defined, then the input targets are exported as-is.
      discovery.relabel "pod_logs" {
        targets = discovery.kubernetes.pods.targets

        // Label creation - "namespace" field from "__meta_kubernetes_namespace"
        rule {
          source_labels = ["__meta_kubernetes_namespace"]
          action = "replace"
          target_label = "namespace"
        }

        // Label creation - "pod" field from "__meta_kubernetes_pod_name"
        rule {
          source_labels = ["__meta_kubernetes_pod_name"]
          action = "replace"
          target_label = "pod"
        }

        // Label creation - "container" field from "__meta_kubernetes_pod_container_name"
        rule {
          source_labels = ["__meta_kubernetes_pod_container_name"]
          action = "replace"
          target_label = "container"
        }

        // Label creation -  "app" field from "__meta_kubernetes_pod_label_app_kubernetes_io_name"
        rule {
          source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
          action = "replace"
          target_label = "app"
        }

        // Label creation -  "job" field from "__meta_kubernetes_namespace" and "__meta_kubernetes_pod_container_name"
        // Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name
        rule {
          source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
          action = "replace"
          target_label = "job"
          separator = "/"
          replacement = "$1"
        }

        // Label creation - "container" field from "__meta_kubernetes_pod_uid" and "__meta_kubernetes_pod_container_name"
        // Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log
        rule {
          source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
          action = "replace"
          target_label = "__path__"
          separator = "/"
          replacement = "/var/log/pods/*$1/*.log"
        }

        // Label creation -  "container_runtime" field from "__meta_kubernetes_pod_container_id"
        rule {
          source_labels = ["__meta_kubernetes_pod_container_id"]
          action = "replace"
          target_label = "container_runtime"
          regex = "^(\\S+):\\/\\/.+$"
          replacement = "$1"
        }
      }

      loki.source.kubernetes "pod_logs" {
        targets    = discovery.relabel.pod_logs.output
        forward_to = [loki.process.pod_logs.receiver]
      }

      loki.process "pod_logs" {
        stage.static_labels {
            values = {
              cluster = "dev-eks",
            }
        }
        forward_to = [loki.write.endpoint.receiver]
      }

      loki.source.kubernetes_events "cluster_events" {
        job_name   = "integrations/kubernetes/eventhandler"
        log_format = "logfmt"
        forward_to = [loki.process.cluster_events.receiver]
      }

      loki.process "cluster_events" {
        forward_to = [loki.write.endpoint.receiver]

        stage.static_labels {
          values = {
            cluster = "dev-eks",
          }
        }

        stage.labels {
          values = {
            kubernetes_cluster_events = "job",
          }
        }
      }

      loki.write "endpoint" {
        endpoint {
            url = "http://loki-distributor.logging.svc.cluster.local:3100/loki/api/v1/push"
            tenant_id = "fake"
        }
      }

You are essentially looking to turn one log line into many (separated by \n), and I don’t think this is possible.

Is it possible to have the source respect the newline characters? I think that would be your best bet. If not, I’d probably write a simple script and do some manual pre-processing.
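To illustrate the pre-processing idea, here is a minimal Python sketch. It assumes the joined line looks like your sample: JSON entries glued together with a literal two-character `\n` sequence between `}` and `{`, with quotes escaped as `\"`, and an optional timestamp prefix before the first entry. The `split_entries` name and the exact escaping rules are assumptions based on your sample, not anything standard.

```python
import json
import re

# Entries are separated by a literal backslash-n that sits between a
# closing and an opening brace; the \n sequences *inside* "msg" values
# are not matched because they are not preceded by '}'.
ENTRY_SEP = re.compile(r'(?<=\})\\n(?=\{)')

def split_entries(joined: str):
    """Yield one dict per log entry from a line joined with literal \\n."""
    for part in ENTRY_SEP.split(joined.strip()):
        # Drop a trailing literal \n left after the last entry.
        if part.endswith('\\n'):
            part = part[:-2]
        # Skip any non-JSON prefix, e.g. the leading timestamp.
        brace = part.find('{')
        if brace == -1:
            continue
        # Un-escape the quotes so each part becomes valid JSON; the
        # remaining \n inside "msg" is a legal JSON escape.
        candidate = part[brace:].replace('\\"', '"')
        try:
            yield json.loads(candidate)
        except json.JSONDecodeError:
            # Not parseable as JSON; pass the raw text through instead.
            yield {"msg": part}

# Usage: feed each joined line through split_entries() and re-emit one
# JSON object per line, e.g. print(json.dumps(e)) for e in split_entries(line).
```

This is only a sketch of the manual pre-processing route; you would still need to decide where to run it (a sidecar, a wrapper around the app, or a step before Alloy picks the file up).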