Hello,
I have set up Grafana Alloy and Loki, and they successfully collect the logs from my application. However, there is one problem I couldn't find a solution for anywhere.
The problem is that my Grafana Alloy container uses a lot of memory (around 742MiB), while the other containers, such as Grafana (77MiB), Loki (64MiB), and Prometheus (125MiB), use far less. Is there a way to reduce the amount of memory Alloy consumes?
Does it have anything to do with the log format, the way the container is started, or the way the configuration file is written?
Alloy Service
alloy:
  image: grafana/alloy:latest
  container_name: alloy
  volumes:
    - ./config.alloy:/etc/alloy/config.alloy
    - /home/abhishek/Documents/logs:/var/log/app-logs:ro
  command:
    - run
    - --server.http.listen-addr=0.0.0.0:8083
    - --storage.path=/var/lib/alloy/data
    - /etc/alloy/config.alloy
  depends_on:
    - loki
  networks:
    - grafana-network
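Would capping the container's memory in the compose file be a reasonable workaround, or would that just get Alloy OOM-killed instead of actually making it use less? A rough sketch of what I mean (the values here are just guesses on my part):

alloy:
  # ... image, volumes, command, depends_on, networks as above ...
  mem_limit: 256m          # hard limit for the container (guessed value)
  environment:
    - GOMEMLIMIT=230MiB    # soft limit for the Go runtime; Alloy is a Go binary, so I assume it applies

Ideally, though, I'd like to understand why Alloy needs this much memory in the first place rather than just capping it.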
My config.alloy
local.file_match "log_files" {
  path_targets = [{"__path__" = "/var/log/app-logs/*/*.log"}]
  sync_period  = "5s"
}

local.file_match "log_gz_files" {
  path_targets = [{"__path__" = "/var/log/app-logs/*/*.log.gz"}]
  sync_period  = "5s"
}
loki.source.file "file_source" {
  targets       = local.file_match.log_files.targets
  tail_from_end = true
  forward_to    = [loki.process.log_processing.receiver]
}

loki.source.file "file_gz_source" {
  targets       = local.file_match.log_gz_files.targets
  tail_from_end = true

  decompression {
    enabled       = true
    initial_delay = "10s"
    format        = "gz"
  }

  forward_to = [loki.process.log_processing.receiver]
}
loki.process "log_processing" {
  stage.json {
    expressions = { "level" = "level", "time" = "timestamp" }
  }

  stage.regex {
    expression = "/var/log/app-logs/(?P<service>[^/]+)/.*"
    source     = "filename"
  }

  stage.labels {
    values = { "loglbl" = "level", "service_name" = "service" }
  }

  stage.timestamp {
    source            = "time"
    format            = "RFC3339"
    action_on_failure = "skip"
  }

  forward_to = [loki.write.loki_receiver.receiver]
}
loki.write "loki_receiver" {
  endpoint {
    url = "http://loki:8082/loki/api/v1/push"
  }
}
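For context, each log line is JSON with at least the level and timestamp fields that the stage.json block parses; an illustrative (made-up) line looks roughly like this:

{"level": "info", "timestamp": "2024-05-01T10:15:30Z", "message": "example log line"}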