Using the logstash-output-loki plugin, data is not sent to Loki properly

Hi, guys:
I’ve encountered the following problem with logstash-output-loki, and I hope you can help me with it~
Description of the problem:

I’m using loki-stack to build a log collection and visualisation solution. I deployed it with the following command:

helm upgrade loki .  --set grafana.enabled=true --set grafana.service.type=NodePort --set filebeat.enabled=true,logstash.enabled=true,promtail.enabled=false     --set loki.fullnameOverride=loki,logstash.fullnameOverride=logstash-loki -n loki  

I want to use filebeat as the client to collect logs (instead of Promtail), send them to logstash, have logstash forward the log data to loki, and finally have grafana read the data from loki and visualise it.
Because of constraints in my business environment we can’t use Promtail, so we use filebeat as the log collection tool; please understand.

Reproducing the environment:

I configured logstash, filebeat, loki, and grafana in the helm values.yaml:

test_pod:
  enabled: true
  image: bats/bats:v1.1.0
  pullPolicy: IfNotPresent

loki:
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  datasource:
    jsonData: "{}"
    uid: ""


promtail:
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push

fluent-bit:
  enabled: false

grafana:
  enabled: false
  adminPassword: XV5AAuh7kfL0RfwiMdHTkzO0QLYTI2CSGgjiiwJc
  sidecar:
    datasources:
      label: ""
      labelValue: ""
      enabled: true
      maxLines: 1000
  image:
    tag: 9.5.1

prometheus:
  enabled: false
  isDefault: false
  url: http://{{ include "prometheus.fullname" .}}:{{ .Values.prometheus.server.service.servicePort }}{{ .Values.prometheus.server.prefixURL }}
  datasource:
    jsonData: "{}"

filebeat:
  enabled: false
  filebeatConfig:
    filebeat.yml: |
      # logging.level: debug
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
      output.logstash:
        hosts: ["logstash-loki:5044"]

logstash:
  enabled: false
  image: grafana/logstash-output-loki
  imageTag: 2.9.9
  filters:
    main: |-
        input {
          beats {
            port => 5044
          }
        }
        
        filter {
          if [kubernetes] {
            mutate {
              add_field => {
                "container_name" => "%{[kubernetes][container][name]}"
                "namespace" => "%{[kubernetes][namespace]}"
                "pod" => "%{[kubernetes][pod][name]}"
              }
              replace => { "host" => "%{[kubernetes][node][name]}"}
            }
          }
          mutate {
            remove_field => ["tags"]  # Note: with include_fields defined below this wouldn't be necessary
          }
        }
        
        output {
          loki {
            url => "https://loki:3100/loki/api/v1/push"
            username => "YWRtaW4="
            password => "WFY1QUF1aDdrZkwwUmZ3aU1kSFRrek8wUUxZVEkyQ1NHZ2ppaXdKYw=="
            batch_size => 112640 #112.64 kilobytes
            retries => 5
            min_delay => 3
            max_delay => 500
            message_field => "message"
            include_fields => ["container_name","namespace","pod","host"]
            metadata_fields => ["pod"]
          }
          # stdout { codec => rubydebug }
        }

# proxy is currently only used by loki test pod
# Note: If http_proxy/https_proxy are set, then no_proxy should include the
# loki service name, so that tests are able to communicate with the loki
# service.
proxy:
  http_proxy: ""
  https_proxy: ""
  no_proxy: ""
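For reference, the username and password in the loki output block above are the base64-encoded forms of admin and the grafana adminPassword; decoding them locally confirms this (I’m not sure whether the plugin wants the plain or the encoded form — as far as I know it expects the plain basic-auth values and encodes them itself):

```shell
# Decode the credential strings used in the logstash loki output block.
printf 'YWRtaW4=' | base64 -d; echo
# prints: admin
printf 'WFY1QUF1aDdrZkwwUmZ3aU1kSFRrek8wUUxZVEkyQ1NHZ2ppaXdKYw==' | base64 -d; echo
# prints: XV5AAuh7kfL0RfwiMdHTkzO0QLYTI2CSGgjiiwJc  (the grafana adminPassword)
```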

Also, since filebeat is deployed as a daemonset and, per the original configuration in this loki-stack package, its output points at an elasticsearch endpoint, I changed it to the logstash address of my deployment:

output.logstash:
  host: '${NODE_NAME}'
  hosts: ["logstash-loki-headless:9600"]

Based on the configuration above, I executed this command to deploy:

helm upgrade loki . --set grafana.enabled=true --set grafana.service.type=NodePort --set filebeat.enabled=true,logstash.enabled=true,promtail.enabled=false --set loki.fullnameOverride=loki,logstash.fullnameOverride=logstash-loki -n loki

I viewed the following log for one of the filebeat pods:

2024-07-30T03:29:34.466Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":208,"throttled":{"ns":7525002324,"periods":64}}},"cpuacct":{"total":{"ns":8616812551}},"memory":{"mem":{"usage":{"bytes":757760}}}},"cpu":{"system":{"ticks":18780,"time":{"ms":466}},"total":{"ticks":318410,"time":{"ms":7518},"value":318410},"user":{"ticks":299630,"time":{"ms":7052}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":18},"info":{"ephemeral_id":"dd5d6560-acfa-4d21-8069-db9bfd547096","uptime":{"ms":1260093},"version":"7.17.3"},"memstats":{"gc_next":82122784,"memory_alloc":41801088,"memory_total":51848263032,"rss":205713408},"runtime":{"goroutines":98}},"filebeat":{"events":{"active":-1,"added":65546,"done":65547},"harvester":{"closed":3,"open_files":6,"running":6,"started":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":65536,"active":4096,"batches":32,"total":65536},"read":{"bytes":192},"write":{"bytes":7030929}},"pipeline":{"clients":1,"events":{"active":4117,"filtered":11,"published":65536,"total":65547},"queue":{"acked":65536}}},"registrar":{"states":{"cleanup":3,"current":63,"update":65547},"writes":{"success":32,"total":32}},"system":{"load":{"1":2.5,"15":2.71,"5":2.74,"norm":{"1":0.3125,"15":0.3388,"5":0.3425}}}}}}
2024-07-30T03:29:35.904Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348647-64769", "finished": false, "os_id": "51348647-64769", "harvester_id": "e4c5f437-7dd6-4afa-a53b-c8c7570a3e14"}
2024-07-30T03:29:36.862Z        INFO    [input.harvester]       log/harvester.go:332    File was removed. Closing because close_removed is enabled.     {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348645-64769", "finished": false, "os_id": "51348645-64769", "harvester_id": "8d2dfd82-2077-45c7-914d-7592d35bd142"}
2024-07-30T03:29:46.104Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348642-64769", "finished": false, "os_id": "51348642-64769", "harvester_id": "99b3d337-2eb5-474b-956a-11c7c98ecff9"}
2024-07-30T03:29:47.074Z        INFO    [input.harvester]       log/harvester.go:332    File was removed. Closing because close_removed is enabled.     {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348647-64769", "finished": false, "os_id": "51348647-64769", "harvester_id": "e4c5f437-7dd6-4afa-a53b-c8c7570a3e14"}
2024-07-30T03:29:56.483Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348645-64769", "finished": false, "os_id": "51348645-64769", "harvester_id": "3509583c-e7e1-4ee7-8eb0-d2ce76c3397c"}
2024-07-30T03:29:57.439Z        INFO    [input.harvester]       log/harvester.go:332    File was removed. Closing because close_removed is enabled.     {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348642-64769", "finished": false, "os_id": "51348642-64769", "harvester_id": "99b3d337-2eb5-474b-956a-11c7c98ecff9"}
2024-07-30T03:30:04.466Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":211,"throttled":{"ns":7129068231,"periods":61}}},"cpuacct":{"total":{"ns":8611479515}},"memory":{"mem":{"usage":{"bytes":2195456}}}},"cpu":{"system":{"ticks":19270,"time":{"ms":489}},"total":{"ticks":326020,"time":{"ms":7604},"value":326020},"user":{"ticks":306750,"time":{"ms":7115}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":18},"info":{"ephemeral_id":"dd5d6560-acfa-4d21-8069-db9bfd547096","uptime":{"ms":1290093},"version":"7.17.3"},"memstats":{"gc_next":82265040,"memory_alloc":41845736,"memory_total":53071083256,"rss":205713408},"runtime":{"goroutines":98}},"filebeat":{"events":{"added":65545,"done":65545},"harvester":{"closed":3,"open_files":6,"running":6,"started":3}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":65536,"active":4096,"batches":32,"total":65536},"read":{"bytes":192},"write":{"bytes":7021848}},"pipeline":{"clients":1,"events":{"active":4117,"filtered":9,"published":65536,"total":65545},"queue":{"acked":65536}}},"registrar":{"states":{"cleanup":3,"current":63,"update":65545},"writes":{"success":32,"total":32}},"system":{"load":{"1":2.32,"15":2.69,"5":2.67,"norm":{"1":0.29,"15":0.3363,"5":0.3338}}}}}}
2024-07-30T03:30:06.742Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/device-api-5785647df5-xs2xk_gray_device-api-23763eeb308560c6a9c837f427f7f2d48fc1fc59c4987834d4b746a7ea9c8532.log", "state_id": "native::17810198-64769", "finished": false, "os_id": "17810198-64769", "old_source": "/var/log/containers/device-api-5785647df5-xs2xk_gray_device-api-23763eeb308560c6a9c837f427f7f2d48fc1fc59c4987834d4b746a7ea9c8532.log", "old_finished": true, "old_os_id": "17810198-64769", "harvester_id": "e5d281cd-1f76-4574-81ba-ffe4bb83b875"}
2024-07-30T03:30:06.742Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/device-api-6f796b7cc5-ts4gt_prod_device-api-ea35c6b2f1dbaf8ad94ad53b03124dcd192475d850b70bc5b5120ba74ea85c9c.log", "state_id": "native::67785539-64769", "finished": false, "os_id": "67785539-64769", "old_source": "/var/log/containers/device-api-6f796b7cc5-ts4gt_prod_device-api-ea35c6b2f1dbaf8ad94ad53b03124dcd192475d850b70bc5b5120ba74ea85c9c.log", "old_finished": true, "old_os_id": "67785539-64769", "harvester_id": "8b1fc8de-ad77-4e5d-a6f3-819c0a9a0f9d"}
2024-07-30T03:30:06.743Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/rms-76cc7b7cd6-tvxfb_prod_rms-3d80e87e467c2f2e3f571a85bbd963e044b83ae93e0e1541221793892b5403d7.log", "state_id": "native::219223217-64769", "finished": false, "os_id": "219223217-64769", "old_source": "/var/log/containers/rms-76cc7b7cd6-tvxfb_prod_rms-3d80e87e467c2f2e3f571a85bbd963e044b83ae93e0e1541221793892b5403d7.log", "old_finished": true, "old_os_id": "219223217-64769", "harvester_id": "fe45c1dd-322a-4ec3-ba40-3c2b52ede528"}
2024-07-30T03:30:06.743Z        ERROR   [input] log/input.go:557        Harvester could not be started on new file: /var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log, Err: error setting up harvester: Harvester setup failed. Unexpected file opening error: file info is not identical with opened file. Aborting harvesting and retrying file later again {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348647-64769", "finished": false, "os_id": "51348647-64769"}
2024-07-30T03:30:06.744Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/cds-79b8688cd8-pwjgz_gray_cds-be21806947eea07e7ff062fc0e8a02fb30249f1e61f9dfdb94a065ece0fd95fb.log", "state_id": "native::342774-64769", "finished": false, "os_id": "342774-64769", "old_source": "/var/log/containers/cds-79b8688cd8-pwjgz_gray_cds-be21806947eea07e7ff062fc0e8a02fb30249f1e61f9dfdb94a065ece0fd95fb.log", "old_finished": true, "old_os_id": "342774-64769", "harvester_id": "4627b62b-11b2-42e4-8dfd-e8dfda381a50"}
2024-07-30T03:30:06.748Z        INFO    [input.harvester]       log/harvester.go:332    File was removed. Closing because close_removed is enabled.     {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348645-64769", "finished": false, "os_id": "51348645-64769", "harvester_id": "3509583c-e7e1-4ee7-8eb0-d2ce76c3397c"}
2024-07-30T03:30:16.746Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348645-64769", "finished": false, "os_id": "51348645-64769", "old_source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "old_finished": true, "old_os_id": "51348645-64769", "harvester_id": "f4dda560-0125-4638-8373-c6b3ded8acde"}
2024-07-30T03:30:27.190Z        INFO    [input.harvester]       log/harvester.go:309    Harvester started for paths: [/var/log/containers/*.log]        {"input_id": "62f12307-0ce7-40e0-adfb-30e258567b59", "source": "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log", "state_id": "native::51348642-64769", "finished": false, "os_id": "51348642-64769", "harvester_id": "e0c4aa6a-ebd1-4825-9b94-c13d22bcabc7"}
2024-07-30T03:30:34.467Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"stats":{"periods":211,"throttled":{"ns":6948112988,"periods":45}}},"cpuacct":{"total":{"ns":6234524324}},"memory":{"mem":{"usage":{"bytes":2691072}}}},"cpu":{"system":{"ticks":19640,"time":{"ms":371}},"total":{"ticks":331200,"time":{"ms":5179},"value":331200},"user":{"ticks":311560,"time":{"ms":4808}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":23},"info":{"ephemeral_id":"dd5d6560-acfa-4d21-8069-db9bfd547096","uptime":{"ms":1320093},"version":"7.17.3"},"memstats":{"gc_next":82406224,"memory_alloc":41764432,"memory_sys":4456448,"memory_total":53892444424,"rss":202915840},"runtime":{"goroutines":123}},"filebeat":{"events":{"active":1,"added":43571,"done":43570},"harvester":{"closed":1,"open_files":11,"running":11,"started":6},"input":{"log":{"files":{"truncated":1}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":43562,"active":4096,"batches":27,"total":43562},"read":{"bytes":162},"write":{"bytes":4724642}},"pipeline":{"clients":1,"events":{"active":4117,"filtered":8,"published":43562,"total":43570},"queue":{"acked":43562}}},"registrar":{"states":{"cleanup":1,"current":63,"update":43570},"writes":{"success":27,"total":27}},"system":{"load":{"1":2.05,"15":2.65,"5":2.56,"norm":{"1":0.2563,"15":0.3313,"5":0.32}}}}}}

I viewed the following logs for the loki-0 pod:

level=info ts=2024-07-30T03:16:36.561228282Z caller=table_manager.go:252 msg="query readiness setup completed" duration=1.733µs distinct_users_len=0
level=info ts=2024-07-30T03:16:36.562260166Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:16:36.568955925Z caller=checkpoint.go:615 msg="starting checkpoint"
level=info ts=2024-07-30T03:16:36.569104712Z caller=checkpoint.go:340 msg="attempting checkpoint for" dir=/data/loki/wal/checkpoint.001143
level=info ts=2024-07-30T03:16:36.570625603Z caller=checkpoint.go:502 msg="atomic checkpoint finished" old=/data/loki/wal/checkpoint.001143.tmp new=/data/loki/wal/checkpoint.001143
ts=2024-07-30T03:16:42.678759138Z caller=spanlogger.go:80 level=info msg="building index list cache"
ts=2024-07-30T03:16:42.67885817Z caller=spanlogger.go:80 level=info msg="index list cache built" duration=17.723µs
level=info ts=2024-07-30T03:17:36.561999631Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:17:36.563078547Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:18:36.561580763Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:18:36.562648646Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:19:36.561478044Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:19:36.562573197Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:20:36.561470901Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:20:36.562557099Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:21:36.561584576Z caller=table_manager.go:213 msg="syncing tables"
level=info ts=2024-07-30T03:21:36.561635781Z caller=table_manager.go:252 msg="query readiness setup completed" duration=2.034µs distinct_users_len=0
level=info ts=2024-07-30T03:21:36.561562207Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:21:36.562652547Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:21:36.568327669Z caller=checkpoint.go:615 msg="starting checkpoint"
level=info ts=2024-07-30T03:21:36.568462573Z caller=checkpoint.go:340 msg="attempting checkpoint for" dir=/data/loki/wal/checkpoint.001144
level=info ts=2024-07-30T03:21:36.570295399Z caller=checkpoint.go:502 msg="atomic checkpoint finished" old=/data/loki/wal/checkpoint.001144.tmp new=/data/loki/wal/checkpoint.001144
level=info ts=2024-07-30T03:22:36.561131931Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:22:36.562203378Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:23:36.5620418Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:23:36.562099803Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:24:36.561547441Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:24:36.562619543Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:25:36.562031501Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:25:36.562093597Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:26:36.561208335Z caller=table_manager.go:213 msg="syncing tables"
level=info ts=2024-07-30T03:26:36.561219869Z caller=table_manager.go:134 msg="uploading tables"
level=info ts=2024-07-30T03:26:36.561256942Z caller=table_manager.go:252 msg="query readiness setup completed" duration=1.757µs distinct_users_len=0
level=info ts=2024-07-30T03:26:36.56229159Z caller=table_manager.go:167 msg="handing over indexes to shipper"
level=info ts=2024-07-30T03:26:36.56890683Z caller=checkpoint.go:615 msg="starting checkpoint"
level=info ts=2024-07-30T03:26:36.569045219Z caller=checkpoint.go:340 msg="attempting checkpoint for" dir=/data/loki/wal/checkpoint.001145
level=info ts=2024-07-30T03:26:36.571393554Z caller=checkpoint.go:502 msg="atomic checkpoint finished" old=/data/loki/wal/checkpoint.001145.tmp new=/data/loki/wal/checkpoint.001145

I see the following logs for the logstash-loki-0 pod:

{
    "@timestamp" => 2024-07-30T03:34:16.910Z,
       "message" => "                        \"kubernetes_io/hostname\" => \"k8s-master001\",",
         "input" => {
        "type" => "container"
    },
          "host" => {
        "name" => "loki-filebeat-8gsjk"
    },
           "log" => {
          "file" => {
            "path" => "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log"
        },
        "offset" => 1678464
    },
    "kubernetes" => {
        "namespace_labels" => {
            "kubernetes_io/metadata_name" => "loki"
        },
                     "pod" => {
            "name" => "logstash-loki-0",
             "uid" => "5559c14a-3e96-4471-8429-f439b47cacc0",
              "ip" => "10.244.10.28"
        },
                  "labels" => {
            "statefulset_kubernetes_io/pod-name" => "logstash-loki-0",
                  "apps_kubernetes_io/pod-index" => "0",
                                           "app" => "logstash-loki",
                                       "release" => "loki",
                      "controller-revision-hash" => "logstash-loki-7d8c6f65ff",
                                         "chart" => "logstash",
                                      "heritage" => "Helm"
        },
                    "node" => {
            "hostname" => "k8s-master001",
                "name" => "k8s-master001",
                 "uid" => "52568c2c-daab-400d-8f0a-a7909110aab6",
              "labels" => {
                                  "IngressProxy" => "true",
                       "beta_kubernetes_io/arch" => "amd64",
                            "kubernetes_io/arch" => "amd64",
                        "kubernetes_io/hostname" => "k8s-master001",
                                       "ingress" => "true",
                       "node_kubernetes_io/node" => "",
                                 "whatisyouname" => "dageigei",
                              "kubernetes_io/os" => "linux",
                "node-role_kubernetes_io/master" => "",
                         "beta_kubernetes_io/os" => "linux"
            }
        },
               "namespace" => "loki",
               "container" => {
            "name" => "logstash"
        },
           "namespace_uid" => "26368a02-d52e-482a-9aac-51bfe3b3ff92",
             "statefulset" => {
            "name" => "logstash-loki"
        }
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
           "ecs" => {
        "version" => "1.12.0"
    },
        "stream" => "stdout",
         "agent" => {
                  "id" => "4151c412-a0f3-408a-93f1-7f0b98d8af0e",
                "type" => "filebeat",
            "hostname" => "loki-filebeat-8gsjk",
                "name" => "loki-filebeat-8gsjk",
        "ephemeral_id" => "dd5d6560-acfa-4d21-8069-db9bfd547096",
             "version" => "7.17.3"
    },
      "@version" => "1",
     "container" => {
             "id" => "b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107",
        "runtime" => "containerd",
          "image" => {
            "name" => "docker.io/grafana/logstash-output-loki:2.9.9"
        }
    }
}
{
    "@timestamp" => 2024-07-30T03:34:16.910Z,
       "message" => "                                       \"ingress\" => \"true\",",
          "host" => {
        "name" => "loki-filebeat-8gsjk"
    },
         "input" => {
        "type" => "container"
    },
           "log" => {
          "file" => {
            "path" => "/var/log/containers/logstash-loki-0_loki_logstash-b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107.log"
        },
        "offset" => 1678578
    },
    "kubernetes" => {
        "namespace_labels" => {
            "kubernetes_io/metadata_name" => "loki"
        },
           "namespace_uid" => "26368a02-d52e-482a-9aac-51bfe3b3ff92",
                     "pod" => {
            "name" => "logstash-loki-0",
             "uid" => "5559c14a-3e96-4471-8429-f439b47cacc0",
              "ip" => "10.244.10.28"
        },
                    "node" => {
            "hostname" => "k8s-master001",
                "name" => "k8s-master001",
                 "uid" => "52568c2c-daab-400d-8f0a-a7909110aab6",
              "labels" => {
                        "kubernetes_io/hostname" => "k8s-master001",
                            "kubernetes_io/arch" => "amd64",
                       "beta_kubernetes_io/arch" => "amd64",
                         "beta_kubernetes_io/os" => "linux",
                                       "ingress" => "true",
                       "node_kubernetes_io/node" => "",
                                 "whatisyouname" => "dageigei",
                              "kubernetes_io/os" => "linux",
                "node-role_kubernetes_io/master" => "",
                                  "IngressProxy" => "true"
            }
        },
               "namespace" => "loki",
               "container" => {
            "name" => "logstash"
        },
             "statefulset" => {
            "name" => "logstash-loki"
        },
                  "labels" => {
            "statefulset_kubernetes_io/pod-name" => "logstash-loki-0",
                  "apps_kubernetes_io/pod-index" => "0",
                                           "app" => "logstash-loki",
                      "controller-revision-hash" => "logstash-loki-7d8c6f65ff",
                                       "release" => "loki",
                                         "chart" => "logstash",
                                      "heritage" => "Helm"
        }
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
        "stream" => "stdout",
           "ecs" => {
        "version" => "1.12.0"
    },
         "agent" => {
                  "id" => "4151c412-a0f3-408a-93f1-7f0b98d8af0e",
                "type" => "filebeat",
            "hostname" => "loki-filebeat-8gsjk",
                "name" => "loki-filebeat-8gsjk",
        "ephemeral_id" => "dd5d6560-acfa-4d21-8069-db9bfd547096",
             "version" => "7.17.3"
    },
      "@version" => "1",
     "container" => {
             "id" => "b5db22fade9b5e591f8556a1dfe103ee9ac4f670b2f5ddecc4227642b2fdb107",
        "runtime" => "containerd",
          "image" => {
            "name" => "docker.io/grafana/logstash-output-loki:2.9.9"
        }
    }
}

I don’t know why, but the loki side just isn’t picking up the custom log fields I add on the logstash filter side. And when I open grafana to view this datasource and click on “test”, it reports an error:

I have a question: why, when I am using filebeat as the log collection tool, does it still report an error about promtail? Is it because this loki-stack package binds loki to promtail? Or are loki and promtail natural partners that are meant to work together?

When I go into Explore to look at the fields, none of my custom fields show up.


Thank you very much for reading this far. Have you ever encountered this? Or is there something wrong with my configuration? Please advise! :gift_heart:

  1. Your Loki configuration has a password in it; you probably want to change / remove that.
  2. Your Loki URL should probably be http, not https, if you are hitting port 3100.

If that doesn’t fix it, I would recommend simplifying your troubleshooting: hit each component individually / manually, see which link is not working, and go from there.
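For the manual check against Loki, something like this sketch works (it assumes you first port-forward the Loki service to localhost, e.g. `kubectl -n loki port-forward svc/loki 3100:3100`; the `job="manual-test"` label is just an example):

```shell
LOKI_URL="http://127.0.0.1:3100"   # assumes an active port-forward to the loki service

# Loki's push API takes nanosecond-epoch timestamps as strings.
NOW_NS="$(date +%s)000000000"
PAYLOAD='{"streams":[{"stream":{"job":"manual-test"},"values":[["'"${NOW_NS}"'","hello from curl"]]}]}'

# Push one log line; a 204 here means ingestion works.
curl -s -o /dev/null -w "push status: %{http_code}\n" \
  -H "Content-Type: application/json" \
  -X POST -d "${PAYLOAD}" "${LOKI_URL}/loki/api/v1/push" || true

# List the label names Loki knows about -- the container_name / namespace /
# pod / host labels from the logstash filter should appear here if they arrive.
curl -s "${LOKI_URL}/loki/api/v1/labels" || true
```

If the push succeeds but your labels never show up under `/loki/api/v1/labels`, the problem is between logstash and Loki rather than between filebeat and logstash.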

Okay, I’ll try. Thanks for the answer.