Promtail not sending logs to Loki

Hello everyone,

I am trying to set up Promtail to send Kubernetes logs to Loki.

When Promtail processes the logs I don’t get any errors, only messages like this:

level=debug ts=2023-11-27T20:14:35.685643073Z caller=output.go:75 component=file_pipeline msg="extracted data did not contain output source"
level=debug ts=2023-11-27T20:14:35.685644266Z caller=regex.go:132 component=file_pipeline component=stage type=regex msg="extracted data debug in regex stage" extracteddata="map[app_kubernetes_io_instance:directus-ict app_kubernetes_io_name:directus container:directus content:2023-11-27T20:13:16: PM2 log: App [directus:0] online filename:/var/log/pods/directus_directus-ict-********/directus/0.log flags:F job:directus/directus-ict-******** namespace:directus pod:directus-ict-******** pod_template_hash:******** stream:stdout time:2023-11-27T20:13:16.997636899Z]"

Will this “extracted data did not contain output source” prevent the logs from being added to Loki?

Dry running my logs doesn’t throw any error and outputs the above line as:

2023-11-28T12:43:05.36610777+0000       {stream="stdout"}       2023-11-28T12:43:05: PM2 log: App [directus:0] online

What is this output source? What am I doing wrong here?

Thank you!

Both Grafana Loki and Promtail are running version 2.9.2

Please share a sample log and your promtail configuration.

Hello Tony, sure!

Config:

server:
  http_listen_port: 9080
  grpc_listen_port: 0
  log_level: debug

clients:
- url: https://my-loki-server-url/loki/api/v1/push
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: "${username}"
    password: "${password}"
positions:
  filename: /tmp/positions.yaml
target_config:
  sync_period: 10s
scrape_configs:
- job_name: pod-logs
  kubernetes_sd_configs:
    - namespaces:
        names:
          - directus
      role: pod
  pipeline_stages:
    - cri: {}
  relabel_configs:
    - source_labels:
        - __meta_kubernetes_pod_node_name
      target_label: __host__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - action: replace
      replacement: $1
      separator: /
      source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_pod_name
      target_label: job
    - action: replace
      source_labels:
        - __meta_kubernetes_namespace
      target_label: namespace
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_name
      target_label: pod
    - action: replace
      source_labels:
        - __meta_kubernetes_pod_container_name
      target_label: container
    - replacement: /var/log/pods/*$1/*.log
      separator: /
      source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
      target_label: __path__
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true

Logs:

2023-11-28T12:43:01.589559352Z stdout F {"level":30,"time":1701175381588,"pid":8,"hostname":"directus-ict-7fa84654sd-154fa","msg":"Initializing bootstrap..."}
2023-11-28T12:43:01.789190358Z stdout F {"level":30,"time":1701175381788,"pid":8,"hostname":"directus-ict-7fa84654sd-154fa","msg":"Database already initialized, skipping install"}
2023-11-28T12:43:01.789460698Z stdout F {"level":30,"time":1701175381789,"pid":8,"hostname":"directus-ict-7fa84654sd-154fa","msg":"Running migrations..."}
2023-11-28T12:43:01.911139285Z stdout F {"level":30,"time":1701175381910,"pid":8,"hostname":"directus-ict-7fa84654sd-154fa","msg":"Done"}
2023-11-28T12:43:02.292976739Z stdout F 2023-11-28T12:43:02: PM2 log: Launching in no daemon mode
2023-11-28T12:43:02.324715731Z stdout F 2023-11-28T12:43:02: PM2 log: App [directus:0] starting in -cluster mode-
2023-11-28T12:43:05.36610777Z stdout F 2023-11-28T12:43:05: PM2 log: App [directus:0] online
2023-11-28T12:43:07.550486396Z stdout F {"level":30,"time":1701175387547,"pid":29,"hostname":"directus-ict-7fa84654sd-154fa","msg":"Server started at http://0.0.0.0:8055"}
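As a side note, those lines are in CRI container log format, which the cri pipeline stage splits into time, stream, flags, and content. A rough Python sketch of that split (the regex approximates what the stage does; it is not Promtail’s code):

```python
import re

# CRI container log format: "<timestamp> <stream> <flags> <content>"
# These are the same fields the cri pipeline stage extracts
# (regex approximated for illustration).
CRI_RE = re.compile(
    r"^(?P<time>\S+) (?P<stream>stdout|stderr) (?P<flags>[FP]) (?P<content>.*)$"
)

line = ("2023-11-28T12:43:05.36610777Z stdout F "
        "2023-11-28T12:43:05: PM2 log: App [directus:0] online")
m = CRI_RE.match(line)
print(m.group("stream"))   # stdout
print(m.group("content"))  # 2023-11-28T12:43:05: PM2 log: App [directus:0] online
```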

Thank you!

What port is Loki running on?

Hello Jason,

3100… But it is also available on port 80 through our domain.

But your URL says https

Yes, you are right. 443, sorry.

HTTPS config was already added to Loki service.

Do you think it could be something related to the Loki address? Shouldn’t it throw some exception when calling the /loki/api/v1/push API?

I mean you can do a telnet test against the port. I’m not sure if that’s the issue. I remember I left the s in my address once and I was going full Happy Gilmore yelling at my log files to go home.
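That telnet idea can also be scripted. A small Python sketch of the same TCP reachability check (host and port are placeholders):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds --
    the scripted equivalent of `telnet host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoint; substitute your own Loki host and port:
# tcp_reachable("my-loki-server-url", 443)
```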


Changing Loki URL to a wrong value throws this error:

level=warn ts=2023-11-28T16:48:48.885377814Z caller=client.go:419 component=client host=loki.wrong-address.com msg="error sending batch, will retry" status=-1 tenant= error="Post \"https://loki.wrong-address.com/loki/api/v1/push\": dial tcp: lookup loki.wrong-address.com on N.N.N.N:N: no such host"

OK, I made a new test changing the Loki address to the internal cluster address over HTTP, port 3100, and I still get these:

level=debug ts=2023-11-28T16:50:52.74734108Z caller=output.go:75 component=file_pipeline msg="extracted data did not contain output source"

Have you tried without any regex and then add your regex one by one?

I tested your logs against a promtail agent with inspection turned on, and everything looks good, so I am not quite sure what the problem is. I would recommend:

  1. Remove both the cri and relabel configs. This should forward the logs to Loki as they come in, with no parsing at all.

  2. If #1 is successful, add the cri stage back.

  3. Add the relabel configs afterwards.

This should hopefully tell you where the issue is.
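If it helps, step 1 boils down to a scrape config like this (job and namespace names taken from the config above; a sketch, not a drop-in file):

```yaml
scrape_configs:
- job_name: pod-logs
  kubernetes_sd_configs:
    - namespaces:
        names:
          - directus
      role: pod
```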


Thank you Tony and Jason,

Turns out Jason’s suspicion was right. The HTTPS config was the problem, even though it doesn’t throw any exception.

Changing the config to use the Loki internal address, over HTTP, made it work.
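For anyone landing here later, the working clients block ended up looking roughly like this (the internal service DNS name is an assumption; adjust it to your cluster):

```yaml
clients:
- url: http://loki.loki.svc.cluster.local:3100/loki/api/v1/push
  basic_auth:
    username: "${username}"
    password: "${password}"
```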

It still outputs those “extracted data did not contain output source” messages, but that doesn’t affect log ingestion.

That error comes from this:

        - replacement: /var/log/pods/*$1/*.log
          separator: /
          source_labels:
            - __meta_kubernetes_pod_uid
            - __meta_kubernetes_pod_container_name
          target_label: __path__

I am not exactly sure how Promtail would find the logs to push to Loki without it, but when I remove it I can’t see any targets in Promtail.

Problem solved!

Thank you very much for your help! :smiley:


Great, glad you were able to figure it out.

I probably should’ve looked at it a bit more closely. With kubernetes_sd_configs you don’t need to specify a path. My understanding is that it reads from the Kubernetes API, finds the pods that match the filter (in your case you were filtering by namespace), and scrapes logs directly from the API.

You would only need to specify a path if you were configuring promtail to read docker logs directly (such as /var/lib/docker/containers/*/*.log or pods/*/*.log).
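For completeness, reading files directly would look something like this static_configs sketch (job name, target, and path are illustrative):

```yaml
scrape_configs:
- job_name: container-files
  static_configs:
    - targets:
        - localhost
      labels:
        job: container-files
        __path__: /var/lib/docker/containers/*/*.log
```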