Grafana Agent for Loki has duplicated scrape config

Hello everyone,

I have a problem with the Grafana Agent for logs. Every time I make a single request to our internal Grafana dashboard, I get two log entries from our ingress-nginx. They contain the same data, but the “job” label is set differently: one is “scraper/grafana-agent” and the other is “scraper/grafana-agent-logs”.

This worked properly before I integrated Grafana Tempo with a second Grafana Agent running in the Kubernetes cluster. The Grafana Agent for logs is managed by the Grafana Agent Operator.

What I’ve done so far is upgrade the Grafana Agent from version 0.19.0 to 0.20.0. I can see that the Grafana Agent’s config file now has duplicated entries and even entirely new scrape_configs entries:

/var/lib/grafana-agent/config-in/agent.yml:

logs:
  configs:
  - clients:
    - external_labels:
        cluster: scraper/grafana-agent-logs
      url: http://loki-distributed-gateway.loki.svc.cluster.local/loki/api/v1/push
    name: scraper/grafana-agent
    scrape_configs:
    - job_name: podLogs/scraper/grafana-agent-logs
      kubernetes_sd_configs:
      - role: pod
      pipeline_stages:
      - docker: {}
      relabel_configs:
      - source_labels:
        - job
        target_label: __tmp_prometheus_job_name
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: service
      - source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: scraper/grafana-agent-logs
        target_label: job
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: podLogs/scraper/grafana-agent
      kubernetes_sd_configs:
      - role: pod
      pipeline_stages:
      - docker: {}
      relabel_configs:
      - source_labels:
        - job
        target_label: __tmp_prometheus_job_name
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: service
      - source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: scraper/grafana-agent
        target_label: job
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
  - clients:
    - external_labels:
        cluster: scraper/grafana-agent-logs
      url: http://loki-distributed-gateway.loki.svc.cluster.local/loki/api/v1/push
    name: scraper/grafana-agent-logs
    scrape_configs:
    - job_name: podLogs/scraper/grafana-agent-logs
      kubernetes_sd_configs:
      - role: pod
      pipeline_stages:
      - docker: {}
      relabel_configs:
      - source_labels:
        - job
        target_label: __tmp_prometheus_job_name
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: service
      - source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: scraper/grafana-agent-logs
        target_label: job
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
    - job_name: podLogs/scraper/grafana-agent
      kubernetes_sd_configs:
      - role: pod
      pipeline_stages:
      - docker: {}
      relabel_configs:
      - source_labels:
        - job
        target_label: __tmp_prometheus_job_name
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: service
      - source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: scraper/grafana-agent
        target_label: job
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
  positions_directory: /var/lib/grafana-agent/data
server:
  http_listen_port: 8080

Before it looked like this:

logs:
  configs:
  - clients:
    - external_labels:
        cluster: scraper/grafana-agent-logs
      url: http://loki-distributed-gateway.loki.svc.cluster.local/loki/api/v1/push
    name: scraper/grafana-agent
    scrape_configs:
    - job_name: podLogs/scraper/grafana-agent
      kubernetes_sd_configs:
      - role: pod
      pipeline_stages:
      - docker: {}
      relabel_configs:
      - source_labels:
        - job
        target_label: __tmp_prometheus_job_name
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - source_labels:
        - __meta_kubernetes_service_name
        target_label: service
      - source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - replacement: scraper/grafana-agent
        target_label: job
      - replacement: /var/log/pods/*$1/*.log
        separator: /
        source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        target_label: __path__
  positions_directory: /var/lib/grafana-agent/data
server:
  http_listen_port: 8080

Currently, I’m not sure how to fix this. I’ve rolled back to the old versions, but that doesn’t resolve the problem.

Any advice in this matter would be very much appreciated.

Thanks
Tom

Hi @tzaspelfinatix ,

I started to look at the operator for Grafana Agent but quickly backed away from that. At least for now it is much clearer for me to just write everything out as Kubernetes manifests.

I also run two sets of Grafana Agent: one is a DaemonSet that collects logs, and the other handles traces and runs as a Deployment. The Deployment for traces has no scrape_configs, so there is no log discovery (and no duplication of logs).
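For reference, the traces agent config in my cluster is roughly like this (a minimal sketch only; the instance name and Tempo endpoint are placeholders, not values from your setup). The point is that there is no logs block at all, so this agent never discovers or tails pod logs:

server:
  http_listen_port: 8080
traces:
  configs:
  - name: traces
    receivers:
      otlp:
        protocols:
          grpc: {}
    remote_write:
    # placeholder endpoint; point this at your Tempo distributor
    - endpoint: tempo-distributor.tempo.svc.cluster.local:4317
      insecure: true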

I’m not entirely sure how to fix your situation since you use the operator. In my case I have the configs in ConfigMaps like this:

$ kubectl get configmap -n grafana-agent
NAME                       DATA   AGE
grafana-agent-logs         1      60d
grafana-agent-traces       2      60d

I update the ConfigMaps manually for now to have 100% control. I’ll plug this into Argo CD, which we use for deployments, once my initial testing is done.

Somehow you have to remove the duplicated logs.scrape_configs from the scraper/grafana-agent instance (I think).
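If I understand the operator correctly, those scrape_configs are generated from PodLogs and LogsInstance custom resources, so listing them should show where the extra entries come from. Something like this might help (just a sketch, assuming the standard Grafana Agent Operator CRDs; I haven’t tried it against your cluster):

$ kubectl get grafanaagents.monitoring.grafana.com -A
$ kubectl get logsinstances.monitoring.grafana.com -A
$ kubectl get podlogs.monitoring.grafana.com -A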


Hi @b0b,

Thanks again for your answer. I don’t like the operator; it’s like a black box for me.
I noticed that there were two PodLogs and two LogsInstance resources for the Grafana Agent responsible for logs. My solution was to delete the wrong PodLogs and LogsInstance. The only thing I can’t understand is how the second PodLogs and LogsInstance were created.
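Roughly, this boiled down to something like the following (the resource names and namespace below are placeholders, not the actual values from my cluster; list first, then delete the duplicates):

# list the operator-managed log resources and spot the duplicates
$ kubectl get podlogs.monitoring.grafana.com -A
$ kubectl get logsinstances.monitoring.grafana.com -A

# delete the wrong ones (names and namespace are placeholders)
$ kubectl delete podlogs.monitoring.grafana.com <duplicate-podlogs> -n <namespace>
$ kubectl delete logsinstances.monitoring.grafana.com <duplicate-logsinstance> -n <namespace>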
