I have a server running a Loki + Promtail + syslog-ng stack under Docker Compose, with the following configuration:
promtail-config
server:
  http_listen_port: 9080
  grpc_listen_port: 9095

positions:
  filename: /etc/promtail/promtail-positions.yaml

clients:

scrape_configs:
  - job_name: nginx-log-collector
    syslog:
      listen_address: 0.0.0.0:601
      label_structured_data: true
      max_message_length: 65536
    relabel_configs:
      - source_labels: ['__syslog_message_hostname']
        target_label: 'server'
    pipeline_stages:
      - json:
          expressions:
            request_type: request_type
            nginx_service: service
            vhost: server_name
            code: status
            request_time: request_time
            timestamp: time_iso8601
      - timestamp:
          source: timestamp
          format: RFC3339
          location: "Europe/Moscow"
      - labels:
          request_type:
          nginx_service:
          vhost:
          code:
      - metrics:
          requests_total:
            type: Counter
            description: HTTP requests count
            prefix: http_
            config:
              match_all: true
              action: inc
          request_duration_seconds:
            type: Histogram
            description: HTTP request duration seconds
            prefix: http_
            source: request_time
            config:
              buckets: [0.1, 0.2, 0.5, 0.7, 1, 2, 10, 30]
      - match:
          selector: '{server=~".+"} |= "[error]"'
          stages:
            - regex:
                expression: '.*request: "(?P<request_type>[A-Z]+) /.*upstream: "\w+://(?P<upstream_addr>\d+\.\d+\.\d+\.\d+:\d+).*host: "(?P<vhost>[a-zA-Z.-]+)".*'
            - labels:
                vhost:
                request_type:
                upstream_addr:
syslog-ng-config
@version: 4.7
@include "scl.conf"

options {
  create-dirs(yes);
  log_fifo_size(1000000);
  keep-hostname(yes);
};

source s_network {
  network(ip("0.0.0.0") transport("udp") port(514));
};

destination d_loki {
  syslog("promtail" transport("tcp") port(601));
};

log {
  source(s_network);
  destination(d_loki);
};
syslog-ng's own monitoring reports that outgoing messages to Promtail are being dropped.
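The drops show up in the counters exposed by syslog-ng's control socket, e.g.:

```shell
# Dump syslog-ng's internal statistics and keep only the drop counters;
# a non-zero "dropped" counter on the d_loki destination means messages
# are discarded on the syslog-ng side (e.g. the log fifo overflowing),
# rather than inside Promtail.
syslog-ng-ctl stats | grep -i dropped
```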
No messages are lost at night, when traffic is low, so the loss must come from hitting some limit under load, but which one?
How can I work out exactly where the messages are being dropped, and what can be tuned in the Promtail configuration?
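For what it's worth, the only Promtail-side knob I have found so far is its read-line rate limiter. A sketch of what I would try, assuming the `limits_config` block described in the Promtail documentation (the values below are guesses, not a verified fix):

```yaml
# Hypothetical tuning sketch: raise / disable Promtail's read rate limiting.
# Parameter names are from the Promtail limits_config docs; values are guesses.
limits_config:
  readline_rate_enabled: true   # turn the limiter on so its behavior is explicit
  readline_rate: 10000          # lines per second Promtail will accept
  readline_burst: 20000         # short-term burst allowance above the rate
  readline_rate_drop: false     # back-pressure the TCP sender instead of dropping
```

With `readline_rate_drop: false`, Promtail blocks the syslog TCP connection instead of silently discarding lines, which would push the backlog back into syslog-ng's fifo where it is at least visible in the stats counters.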