I have configured Alloy on Amazon Linux 2, with Loki and Grafana running on the same server. Alloy is configured to capture the log files and forward them to Loki, and Grafana queries Loki to show the logs in a dashboard. The Alloy service and the other services are running fine; the only issue is that any new log file does not show up in Grafana. After many tries, I observed that the new file only appears in Grafana's Explore feature after I restart the Alloy service.
I see no issue with CPU/RAM/disk, and all services, including Alloy, are fine. I also checked the Alloy web UI, and it shows the new file under "Targets" for local.file_match as healthy.
I am not sure what is happening, but the only workaround is to restart the Alloy service, and I know this isn't the right approach.
Any suggestions are appreciated.
Are you using the local.file_match component?
It discovers new files based on a glob pattern and re-checks for new files periodically. If using this component doesn't help, please share your config.
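For reference, here is a minimal sketch of a glob-based setup (the directory, labels, and Loki URL below are placeholders, not taken from this thread):

local.file_match "app_logs" {
  path_targets = [{
    __path__ = "/var/log/myapp/*.log", // glob, so files created later are matched too
    job      = "myapp-logs",
  }]
  sync_period = "10s" // how often the glob is re-scanned for new files (10s is the default)
}

loki.source.file "app_logs" {
  targets    = local.file_match.app_logs.targets
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://localhost:3100/loki/api/v1/push" // placeholder Loki endpoint
  }
}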
I'm also having the same issue. Alloy is configured on an AWS Ubuntu 24.04 machine.
I have an application server (Tomcat), and Alloy is configured to scrape the log file catalina.out
at /home/ubuntu/apache-tomcat/logs.
I'm sharing my Alloy config for the logs-to-Loki pipeline:
local.file_match "logs_default_tomcat_logs" {
  path_targets = [{
    __address__ = "localhost",
    __path__    = "/home/ubuntu/apache-tomcat/logs/catalina.out",
    application = "my-application",
    instance    = "new-tomcat-server",
    job         = "new-tomcat-logs",
  }]
}

loki.source.file "logs_default_tomcat_logs" {
  targets               = local.file_match.logs_default_tomcat_logs.targets
  forward_to            = [loki.process.logs_format_process.receiver]
  legacy_positions_file = "/tmp/positions.yaml"
}

loki.process "logs_format_process" {
  forward_to = [loki.write.logs_default.receiver]

  stage.multiline {
    firstline     = "^\\d{4}-\\d{2}-\\d{2}\\s+\\d{2}:\\d{2}:\\d{2}\\.\\d{3}\\s+(INFO|WARN|ERROR|DEBUG|TRACE)"
    max_wait_time = "30s"
  }
}

loki.write "logs_default" {
  endpoint {
    url       = "http://loki/loki/api/v1/push"
    tenant_id = "my-application"
  }

  external_labels = {}
}
Note: the issue is only with the logs. I also have metrics and traces configured for the application, and I'm getting them without any hiccups.
Is there a specific scenario when the logs stop? File rotation? A certain time of day?
There isn’t always one clear reason why logs might stop, but a few things can cause it. In some cases, VMs are scheduled to shut down in the night, which would naturally stop any logging. Even for VMs that run 24/7, if there’s very little activity at night, you might only see a few log entries—or none at all—during those hours.
Also, logs are usually rotated daily or when they reach a certain size, so it might just be that the logging has continued in a new file.
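Related to the rotation point: the Tomcat config above points __path__ at a single fixed file. If Tomcat or logrotate keeps writing to a newly named file (for example catalina.2025-05-01.log or catalina.out.1, depending on your rotation setup), that new file is never matched. A sketch, assuming such a rotation naming scheme, that widens the match to a glob while keeping the rest of the pipeline unchanged:

local.file_match "logs_default_tomcat_logs" {
  path_targets = [{
    __address__ = "localhost",
    __path__    = "/home/ubuntu/apache-tomcat/logs/catalina*", // matches catalina.out plus rotated variants
    application = "my-application",
    instance    = "new-tomcat-server",
    job         = "new-tomcat-logs",
  }]
  sync_period = "15s" // re-scan interval for files newly matching the glob
}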
Hello, we also experience this issue on our Kubernetes cluster. Alloy runs as a DaemonSet with a hostPath volume for /var/log. We use loki.source.file with Kubernetes pod discovery and a relabel rule like:
rule {
  source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
  target_label  = "__path__"
  separator     = "/"
  replacement   = "/var/log/pods/*$1/*.log"
}
The scraping is configured as:
loki.source.file "pods" {
  targets       = local.file_match.pods.targets
  forward_to    = [loki.process.pipeline_stages.receiver]
  tail_from_end = true
}
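For context, a minimal end-to-end sketch of that discovery pipeline (component names, the sync_period value, and the comments are illustrative, not copied from our setup):

discovery.kubernetes "pods" {
  role = "pod"
}

discovery.relabel "pods" {
  targets = discovery.kubernetes.pods.targets

  // Same rule as above: build the host path to each container's log files
  // from the pod UID and container name.
  rule {
    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
    target_label  = "__path__"
    separator     = "/"
    replacement   = "/var/log/pods/*$1/*.log"
  }
}

local.file_match "pods" {
  path_targets = discovery.relabel.pods.output
  sync_period  = "10s" // how often the globs are re-evaluated for new files
}

loki.source.file "pods" {
  targets    = local.file_match.pods.targets
  forward_to = [loki.process.pipeline_stages.receiver] // existing processing pipeline

  // tail_from_end applies when no stored position exists for a file, so newly
  // discovered files are read from the end and their earlier lines are skipped.
  tail_from_end = true
}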
alloy, version v1.8.1 (branch: HEAD, revision: dc3b14bd8)
build user: root@buildkitsandbox
build date: 2025-04-10T16:30:37Z
go version: go1.24.1
platform: linux/amd64
tags: netgo,builtinassets,promtail_journal_enabled