In my fleet of personal servers, I'm formatting nginx access logs as JSON and shipping them to Loki with Promtail. Around October 25th, my two ARM-based servers (one running RHEL 9, one running Fedora 38) quit shipping these files, but they are still successfully shipping the journal. I did routine patching that day, and I see Promtail 2.9.2 came out October 16th.
My x86_64 Debian 12 servers on the same version of promtail are not having this issue.
The playbook I wrote for this adds promtail to the applicable user groups and also sets ACLs on the nginx log folder. Using sudo -u promtail, I can ls the nginx log folder and cat the files in it.
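For reference, this is roughly the check I ran (the specific log file name is just illustrative of my layout):

# what the promtail user can see under the nginx log directory
sudo -u promtail ls -l /var/log/nginx/
sudo -u promtail cat /var/log/nginx/access.log | head -n 1

# show the ACLs the playbook applied (getfacl comes from the acl package)
getfacl /var/log/nginx/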
Checking positions.yml on the Red Hat servers vs. the Debian ones, all the Debian systems show the nginx logs, but the Red Hat servers only show the journal.
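On a Debian host the positions file has entries roughly like this (file names and byte offsets are made up for illustration); on the Red Hat hosts the /var/log/nginx entries are simply absent:

positions:
  # present on the Debian hosts, missing on the Red Hat ones
  /var/log/nginx/access.log: "482113"
  /var/log/nginx/example.com_access.log: "91245"
  # the journal cursor entry is present on every host (value omitted here)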
I tried downgrading both to 2.9.1 today, but the problem persists.
My scrape_configs looks like this across all hosts:
scrape_configs:
  - job_name: nginx
    pipeline_stages:
      - match:
          selector: '{job="nginx"}'
          stages:
            - json:
                expressions:
                  timestamp: time
            - timestamp:
                source: timestamp
                format: RFC3339
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          hostname: REDACTED
          __path__: /var/log/nginx/*access.log
  - job_name: journal
    journal:
      path: /var/log/journal
      labels:
        job: systemd-journal
        hostname: REDACTED
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
As far as parsing the timestamp goes, the only difference between the logs on the two server types is that my Debian hosts are on UTC and my Red Hat ones are on EST. An example from each:
"time":"2023-11-24T15:14:43+00:00"
"time":"2023-11-24T10:44:22-05:00"
Something must have changed here, but I'm not sure if it's related to the different OS or the different architecture.