I’m running Grafana Alloy on OpenShift (deployed via Helm) and trying to collect pod logs with loki.source.file.
Note: I applied the node-exporter SCC to the service account.
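For reference, I granted the SCC roughly like this (service account and namespace match the manifests below):

oc adm policy add-scc-to-user node-exporter -z alloy-sa -n test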
alloy.yaml
apiVersion: collectors.grafana.com/v1alpha1
kind: Alloy
metadata:
  name: alloy-data
  namespace: test
spec:
  controller:
    type: daemonset
    tolerations:
      - effect: NoSchedule
        operator: Exists
    volumes:
      extra:
        - name: alloy-client-cert
          secret:
            secretName: alloy-client-cert
  serviceAccount:
    create: false
    name: alloy-sa
    automountServiceAccountToken: true
  rbac:
    create: true
  alloy:
    mounts:
      varlog: true
    securityContext:
      runAsUser: 0
      runAsGroup: 0
      fsGroup: 0
    configMap:
      create: false
      name: alloy-data
      key: alloy-data
    storagePath: /var/alloy
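My understanding is that mounts.varlog: true makes the chart mount the host's /var/log into the Alloy container, so the rendered DaemonSet should contain roughly this (my approximation, not the exact chart output):

volumeMounts:
  - name: varlog
    mountPath: /var/log
    readOnly: true
volumes:
  - name: varlog
    hostPath:
      path: /var/log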
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: alloy-data
  namespace: test
data:
  alloy-data: |
    logging {
      level = "debug"
    }

    local.file_match "pod_logs" {
      path_targets = [
        { __path__ = "/var/log/pods/*/*/*.log" },
      ]
    }

    loki.source.file "pod_logs" {
      targets    = local.file_match.pod_logs.targets
      forward_to = [loki.write.victorialogs.receiver]
    }

    loki.write "victorialogs" {
      endpoint {
        url       = "<url>"
        tenant_id = "0:0"
      }
      external_labels = {}
    }
But in the Alloy logs I see:
ts=2025-09-22T23:19:50.679821972Z level=debug msg="no files targets were passed, nothing will be tailed" component_id=loki.source.file.pod_logs
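That message suggests local.file_match resolved zero targets, i.e. Alloy sees no files under /var/log/pods. To confirm whether the path is even visible inside the container, I figure it can be checked like this (the pod name is illustrative):

oc exec -n test <alloy-pod> -- ls /var/log/pods

and the targets that local.file_match actually produced should show up in the Alloy debugging UI (port 12345 is the default HTTP port, as far as I know):

oc port-forward -n test <alloy-pod> 12345:12345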
What’s the recommended way to collect pod logs with Alloy on OpenShift, and why is no log data being collected here?