I have two microservices running in dedicated Docker containers, and I want to process their logs and send them to Loki. I tried several solutions (Fluent Bit, Promtail, and others) but never managed to achieve my goal, and I hope Alloy will let me do what I want.
The first container produces logs that look like this:
{"log":"{\"asctime\": \"2025-03-03T13:09:50.055\", \"name\": \"test\", \"levelname\": \"INFO\", \"trace_id\": 4, \"message\": \"msg number 4\"}\n","stream":"stdout","attrs":{"com.docker.compose.config-hash":"uuid","com.docker.compose.container-number":"1","com.docker.compose.depends_on":"","com.docker.compose.image":"sha256:image_sha","com.docker.compose.oneoff":"False","com.docker.compose.project":"projet_name","com.docker.compose.project.config_files":"configfile","com.docker.compose.project.working_dir":"workdir","com.docker.compose.service":"compose_service","com.docker.compose.version":"2.29.7"},"time":"2025-03-03T13:09:50.055605682Z"}
For each log line, I want the following labels in Loki:
- container_name
- trace_id
- level
and the following fields in the body (I am not sure of the correct terminology; output?):
- the asctime from the log, renamed to timestamp
- that timestamp used as the entry's time key
- levelname renamed to level
- name renamed to logger_name
- trace_id
- message
- attrs
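To make this concrete, here is roughly what I would like each entry to look like (my own sketch; the container name is a placeholder and I have abbreviated attrs):
labels: {container_name="my_first_service", trace_id="4", level="INFO"}
body:   {"timestamp": "2025-03-03T13:09:50.055", "level": "INFO", "logger_name": "test", "trace_id": 4, "message": "msg number 4", "attrs": {"com.docker.compose.service": "compose_service", …}}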
I tried to do the parsing like this, but Alloy seems to ignore some of my requirements:
// -------------------------------------------- Sources
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

loki.source.docker "docker_logs" {
  host          = "unix:///var/run/docker.sock"
  targets       = discovery.docker.containers.targets
  relabel_rules = loki.relabel.docker_labels.rules
  forward_to    = [loki.process.process_logs.receiver]
}

loki.relabel "docker_labels" {
  forward_to = []

  rule {
    action        = "replace"
    source_labels = ["__meta_docker_container_name"]
    regex         = "/(.*)"
    target_label  = "container_name"
  }
}

// -------------------------------------------- Processors
// Loki process for filtering logs
loki.process "process_logs" {
  stage.json {
    expressions = {
      attrs  = "attrs",
      log    = "log",
      stream = "stream",
      time   = "time",
    }
  }

  stage.json {
    source = "log"
    expressions = {
      level       = "levelname",
      logger_name = "name",
      message     = "message",
      timestamp   = "asctime",
      trace_id    = "trace_id",
    }
  }

  stage.timestamp {
    source = "timestamp"
    format = "RFC3339"
  }

  stage.labels {
    values = {
      container_name = "container_name",
      trace_id       = "trace_id",
    }
  }

  forward_to = [loki.write.grafana_loki.receiver]
}

loki.write "grafana_loki" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }

  external_labels = {
    cluster = "docker-compose",
  }
}
I was expecting something like the example above, with keys like “asctime” and “name” correctly renamed, and the extracted “timestamp” key used as the entry's timestamp.
The trace_id is not set as a label either.
Can anyone tell me what is wrong with my Alloy configuration?
Since I was struggling to extract even these simple logs, I have not yet tried to parse the ones from my other container, which look like this:
2025-03-04T17:21:00.170Z INFO 1 --- [http-nio-8080-exec-2] c.p.c.back.services.BouncerService : [a7b79f2c79fad4b77ad4ca18adab9e42-b16c9fccb5d68247] removed from inactive users, userID : xxxx
I plan to process these logs with a regular expression, but I was wondering how I can create two different pipelines for the two log formats. I was thinking of using the container name as a filter rule, but I don't see any filter stage in the docs; a sketch of what I have in mind is below.
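This is a rough, untested sketch of what I imagine: I am guessing that stage.match can route on the container_name label, the service names are placeholders, and the regex is my first attempt at the second format:

// Route each container's logs to a dedicated sub-pipeline based on
// the container_name label (service names below are placeholders).
loki.process "process_logs" {
  // JSON pipeline for the first service
  stage.match {
    selector = "{container_name=\"python_service\"}"
    // ... the stage.json / stage.timestamp / stage.labels stages from above ...
  }

  // Regex pipeline for the second service
  stage.match {
    selector = "{container_name=\"spring_service\"}"
    stage.regex {
      expression = "^(?P<timestamp>\\S+)\\s+(?P<level>\\S+)\\s+\\d+\\s+---\\s+\\[(?P<thread>[^\\]]+)\\]\\s+(?P<logger_name>\\S+)\\s+:\\s+(?P<message>.*)$"
    }
  }

  forward_to = [loki.write.grafana_loki.receiver]
}

Is this the right approach, or is there a dedicated filter stage I am missing?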
Any pointers or help would be very much appreciated.
Thank you.