I am using Grafana/Loki/Alloy to aggregate logs in my Kubernetes cluster. My Alloy configuration is pretty basic, and a lot of it comes from copy/pasting examples in the documentation. It currently looks like this:
// Discover pods in the target namespace.
discovery.kubernetes "pod" {
  role = "pod"
  selectors {
    role = "pod"
  }
  namespaces {
    names = ["mynamespace"]
  }
}

// Map Kubernetes metadata onto the namespace and pod labels.
discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pod.targets
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    action        = "replace"
    target_label  = "namespace"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    action        = "replace"
    target_label  = "pod"
  }
}

// Tail the discovered pods and hand the lines to the level-extraction process.
loki.source.kubernetes "pod_logs" {
  targets    = discovery.relabel.pod_logs.output
  forward_to = [loki.process.add_level_label.receiver]
}

// Process the logs to extract the level.
loki.process "add_level_label" {
  stage.regex {
    // Matches the Spring Boot/Logback pattern shown further down and captures
    // 'INFO', 'ERROR', 'DEBUG', etc. in the 'log_level' named group.
    expression = `^(?P<timestamp>.*?) (?P<log_level>.*?) (?P<thread>.*?) --- (?P<exec>.*?).*`
  }
  stage.labels {
    values = {
      log_level = "log_level",
      thread    = "thread",
    }
  }
  forward_to = [loki.process.pod_logs.receiver]
}

// Add a static cluster label and ship everything to Loki.
loki.process "pod_logs" {
  stage.static_labels {
    values = {
      cluster = "mycluster",
    }
  }
  forward_to = [loki.write.default.receiver]
}
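For completeness, since it is referenced above: my loki.write "default" component is just the stock example from the docs. The endpoint URL below is a placeholder, not my real gateway address.

loki.write "default" {
  endpoint {
    // Placeholder URL; the real config points at my Loki gateway service.
    url = "http://loki-gateway.mynamespace.svc.cluster.local/loki/api/v1/push"
  }
}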
In summary, the configuration should collect all Kubernetes pod logs from mynamespace. Using the add_level_label process, it should also run a regex over my Spring Boot application logs to parse out the log level as a label. Yes, I am familiar with query-time filtering, and I am still determined to add the log level as an indexed label for the current state of my application (very early).
However, my problem is that the labels from my add_level_label process DO NOT get added at all when I explore these logs in Grafana: log_level and thread are not present as labels anywhere. I see no errors in my Loki or Alloy pods related to this regex, and I have also tested and verified the regex on its own: ^(?P<timestamp>.*?) (?P<log_level>.*?) (?P<thread>.*?) --- (?P<exec>.*?).*
My logs are standard Spring Boot/Logback output and look like:
2026-01-12T19:56:53.356Z INFO 1 --- [ main] c.test.test.ImporterApplication : Started ImporterApplication in 7.565 seconds (process running for 8.14)
Things I have tried: I had doubts that my process was getting executed at all, so I added some static labels to it and confirmed those static labels DID get added, so I know the process is getting hit (a rough sketch of that test is below). I have also checked the Loki and Alloy pods and don't see any errors anywhere.
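For reference, the static-label test looked roughly like this (the debug_marker label name and value are just illustrative, not what I actually used): I dropped a stage.static_labels into the same add_level_label process, and that label did show up in Grafana.

loki.process "add_level_label" {
  // Temporary test stage: this label DID appear on the logs in Grafana,
  // so the process is definitely wired into the pipeline.
  stage.static_labels {
    values = {
      debug_marker = "added_by_add_level_label",
    }
  }

  // ... the stage.regex and stage.labels stages shown above are unchanged ...

  forward_to = [loki.process.pod_logs.receiver]
}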
Is there perhaps some inaccurate assumption I'm making about the source of these logs and what actually gets exposed to stage.regex?
Does anyone have an example of Alloy-aggregated logs in K8S getting regex’d in a stage to add labels? Thank you!