How to set a custom timestamp in Promtail and explore it in Grafana?

Hi, I am using the Promtail component for log collection. An example log line looks like this:

2023-08-30 10:14:56,274 INFO  datanode.DataNode (BlockReceiver.java:run(1506)) - PacketResponder: BP-1986026358-172.18.0.29-1625995456331:blk_1102843442_29109253, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=2:[172.18.2.29:1019, 172.18.0.31:1019] terminating

My promtail yaml config file is:

server:
  http_listen_port: 9080
  grpc_listen_port: 0
  grpc_server_max_send_msg_size: 6291456
  grpc_server_max_recv_msg_size: 6291456
positions:
  filename: /opt/promtail/positions.yaml
clients: 
  - url: http://172.18.0.25:3100/loki/api/v1/push
    batchsize: 4194304
scrape_configs:
  - job_name: hdfs
    static_configs:
    - labels:
        service: namenode
        host: hdp001.datasw
        __path__: /var/log/hadoop/*/*hdfs-namenode-*.log
    - labels:
        service: datanode
        host: hdp001.datasw
        __path__: /var/log/hadoop/*/*hdfs-*datanode-*.log
    pipeline_stages:
    - match:
        selector: '{service=~"namenode|datanode"}'
        stages:
        - regex:
            expression: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\S+ (?P<level>\S+)  (?P<class>[\w.]+)'
        - labels:
            time:
            level:
            class:  
        - timestamp:
            source: time
            format: "2006-01-02 15:04:05 UTC"
        - drop:
            source: "level"
            value: "DEBUG"

But when I first started Promtail, all of the logs were given the same timestamp. Is this normal, or is there something wrong with my timestamp configuration?

Well, I think I figured out why the timestamp wasn't used: the format "2006-01-02 15:04:05 UTC" never matched the extracted value, since my regex captured the time without the comma-separated milliseconds and the log line contains no literal "UTC". When the timestamp stage fails to parse, Promtail ends up using the time the line was read, which is why every line ingested at startup got the same timestamp. I changed the configuration to this:

server:
  http_listen_port: 9080
  grpc_listen_port: 0
  grpc_server_max_send_msg_size: 6291456
  grpc_server_max_recv_msg_size: 6291456
positions:
  # This file is where Promtail records its log read offsets; it is updated on every scrape
  # Even if the service goes down, the next restart resumes from the offsets recorded here
  filename: /opt/promtail/positions.yaml
clients: # note: this is where logs are pushed to Loki
  - url: http://172.18.0.25:3100/loki/api/v1/push
    # maximum batch size (in bytes) to accumulate before sending to Loki; here 4 MB
    batchsize: 4194304
limits_config:
  # whether to rate-limit log line reads
  readline_rate_enabled: true
  readline_rate: 5000
  readline_burst: 10000
scrape_configs:
# the section below is very similar to a Prometheus scrape config
  - job_name: hdfs
    static_configs:
    - labels:
        service: namenode
        host: hdp001.datasw
        __path__: /var/log/hadoop/*/*hdfs-namenode-*.log
    - labels:
        service: datanode
        host: hdp001.datasw
        __path__: /var/log/hadoop/*/*hdfs-*datanode-*.log
    pipeline_stages:
    - match:
        selector: '{service=~"namenode|datanode"}'
        stages:
        - regex:
            expression: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\S+) (?P<level>\S+)  (?P<class>[\w.]+)'
        - labels:
            time:
            level:
            class:
        - timestamp:
            source: time
            format: "2006-01-02 15:04:05,999"
            location: "Asia/Shanghai"

        - drop:
            source: "level"
            value: "DEBUG"

Now the timestamps are correct.
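For anyone who wants to verify a layout before restarting Promtail: the timestamp stage's format string is a Go reference-time layout, so it can be sanity-checked with a few lines of Go. This is just a minimal sketch, assuming Go 1.17 or newer (comma-separated fractional seconds are only accepted from that version) and reusing the sample value captured by the regex above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Value the fixed regex captures from the sample log line.
	value := "2023-08-30 10:14:56,274"

	// Original layout: expects a literal " UTC" suffix and no milliseconds,
	// so it cannot parse what the first regex captured, and the stage fails.
	if _, err := time.Parse("2006-01-02 15:04:05 UTC", "2023-08-30 10:14:56"); err != nil {
		fmt.Println("old layout fails:", err)
	}

	// Corrected layout: ",999" consumes the comma-separated milliseconds,
	// and the location supplies the zone, mirroring the pipeline config.
	loc, err := time.LoadLocation("Asia/Shanghai")
	if err != nil {
		panic(err)
	}
	ts, err := time.ParseInLocation("2006-01-02 15:04:05,999", value, loc)
	if err != nil {
		panic(err)
	}
	fmt.Println("new layout parses:", ts) // 2023-08-30 10:14:56.274 +0800 CST
}

With the timestamps parsed, a selector like {service="datanode", host="hdp001.datasw"} in Grafana Explore should show the lines at their original log times instead of the scrape time.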
