Hi, my question is as follows:
Promtail version: 2.9.3, chart version: 6.15.5
After I customized the configuration and deployed it, the promtail logs continuously report errors like these:
level=error ts=2024-05-17T08:08:59.854229558Z caller=filetarget.go:342 msg="failed to tail file, stat failed" error="stat /var/log/pods/xxx/7241.log: no such file or directory" filename=/var/log/pods/xxx/7241.log
level=error ts=2024-05-17T08:08:59.854282873Z caller=filetarget.go:342 msg="failed to tail file, stat failed" error="stat /var/log/pods/xxx0.log: no such file or directory" filename=/var/log/pods/xxxt/0.log
level=error ts=2024-05-17T08:08:59.854342495Z caller=filetarget.go:342 msg="failed to tail file, stat failed" error="stat /var/log/pods/xxx/0.log: no such file or directory" filename=/var/log/pods/xxx/0.log
But when I logged into the corresponding host and checked one of the files from the error messages, I found that it does exist, which confuses me:
[root@xxx xxx]# ls -l
total 4
lrwxrwxrwx 1 root root 167 Apr 12 11:02 0.log -> /data/docker-data/containers/e29bc4a8f819ab54a3137011a258d4e413937052b8ab626b1b291d661d59c194/e29bc4a8f819ab54a3137011a258d4e413937052b8ab626b1b291d661d59c194-json.log
Of course, some of the reported files really do not exist anymore, but there are newer log files in the same directory, for example:
ls
10.log 11.log
Yet promtail is still trying to tail 0.log instead of picking up the newer files.
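If it is useful, I can run checks like the following from the node and from inside the promtail pod to compare what each of them sees; the namespace and pod name below are placeholders for my actual ones, and /data/docker-data is the docker root that the symlink above points into:

# on the node: the symlink and its target
stat /var/log/pods/xxx/0.log
readlink -f /var/log/pods/xxx/0.log

# from inside the promtail pod running on that node
kubectl -n <promtail-namespace> exec <promtail-pod> -- stat /var/log/pods/xxx/0.log
kubectl -n <promtail-namespace> exec <promtail-pod> -- ls /data/docker-data/containers/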
This is my main configuration:
snippets:
  pipelineStages:
    - cri: {}
    - match:
        selector: '{container="filebeat"}'
        action: drop
    - multiline:
        firstline: '^\d+\-\d+\-\d+\s+\d+\:\d+\:\d+\.\d+'
        max_wait_time: 3s
        max_lines: 128
  common:
    - source_labels: [__meta_kubernetes_namespace]
      action: keep
      regex: ^(xxx-ns.*)
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_node_name
      target_label: node_name
    - action: replace
      source_labels:
        - __meta_kubernetes_namespace
      target_label: namespace
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_name
      target_label: pod
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_container_name
      target_label: container
    - action: replace
      replacement: /var/log/pods/*$1/*.log
      separator: /
      source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
      target_label: __path__
    - action: replace
      replacement: /var/log/pods/*$1/*.log
      regex: true/(.*)
      separator: /
      source_labels:
        - __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
        - __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
        - __meta_kubernetes_pod_container_name
      target_label: __path__
    - action: replace
      source_labels: []
      target_label: type
      replacement: svclog
  scrapeConfigs: |
    # See also https://github.com/grafana/loki/blob/master/production/ksonnet/promtail/scrape_config.libsonnet for reference
    - job_name: kubernetes-pods
      pipeline_stages:
        {{- toYaml .Values.config.snippets.pipelineStages | nindent 4 }}
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        - source_labels:
            - __meta_kubernetes_pod_controller_name
          regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
          action: replace
          target_label: __tmp_controller_name
        - source_labels:
            - __meta_kubernetes_pod_label_app_kubernetes_io_name
            - __meta_kubernetes_pod_label_app
            - __tmp_controller_name
            - __meta_kubernetes_pod_name
          regex: ^;*([^;]+)(;.*)?$
          action: replace
          target_label: app
        - source_labels:
            - __meta_kubernetes_pod_label_app_kubernetes_io_instance
            - __meta_kubernetes_pod_label_instance
          regex: ^;*([^;]+)(;.*)?$
          action: replace
          target_label: instance
        - source_labels:
            - __meta_kubernetes_pod_label_app_kubernetes_io_component
            - __meta_kubernetes_pod_label_component
          regex: ^;*([^;]+)(;.*)?$
          action: replace
          target_label: component
        {{- if .Values.config.snippets.addScrapeJobLabel }}
        - replacement: kubernetes-pods
          target_label: scrape_job
        {{- end }}
        {{- toYaml .Values.config.snippets.common | nindent 4 }}
        {{- with .Values.config.snippets.extraRelabelConfigs }}
        {{- toYaml . | nindent 4 }}
        {{- end }}
    - job_name: system-logs
      static_configs:
        - targets:
            - localhost
          labels:
            node_name: ${HOSTNAME}
            type: syslog
            __path__: /var/log/messages
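As I read the default relabel rules above, the first __path__ rule should produce a glob like /var/log/pods/*<pod-uid>/<container>/*.log, so I would expect newer files such as 10.log and 11.log to be matched as well. If it helps, I can also share what promtail itself has resolved for these targets; I would fetch its targets page roughly like this (3101 is the chart's default http-metrics port, adjust if it was changed):

kubectl -n <promtail-namespace> port-forward <promtail-pod> 3101:3101
# in another terminal
curl -s http://localhost:3101/targets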
Can you help me look into this issue? It is currently blocking my progress. Thank you very much!