Hi,

We have the Promtail config below for scraping Kubernetes pod logs:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
  namespace: promtail-test
data:
  promtail.yaml: |
    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    clients:
      - url: http://loki-test-gateway.loki-test.svc.cluster.local/loki/api/v1/push
    positions:
      filename: /tmp/positions.yaml
    target_config:
      sync_period: 10s
    scrape_configs:
      - job_name: pod-logs
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels:
              - __meta_kubernetes_pod_node_name
            target_label: __host__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - action: replace
            replacement: $1
            separator: /
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_pod_name
            target_label: job
          - action: replace
            source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_container_name
            target_label: container
          - replacement: /var/log/pods/*$1/*.log
            separator: /
            source_labels:
              - __meta_kubernetes_pod_uid
              - __meta_kubernetes_pod_container_name
            target_label: __path__
          - action: labeldrop
            regex: "app_kubernetes_io_.instance"
        pipeline_stages:
          - docker: {}
          - labeldrop:
              - app_kubernetes_io_instance
              - app_kubernetes_io_managed_by
          - labelallow:
              - app_kubernetes_io_component
              - pod
```
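As we understand it, relabel regexes are fully anchored, and the `labelmap` above turns every `app.kubernetes.io/...` pod label into an `app_kubernetes_io_...` stream label. So a broader pattern at the relabel stage might be needed; is something like this the right approach? (A sketch only; the regex is our own guess.)

```yaml
# sketch: drop every label the labelmap produces from app.kubernetes.io/* pod labels;
# relabel regexes must match the whole label name, hence the trailing .*
- action: labeldrop
  regex: app_kubernetes_io_.*
```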
However, I see many unnecessary labels still coming through to the Loki server, and when I try labeldrop/labelallow it does not work as expected: we can still see those labels in Loki, which might cause high cardinality.

Please suggest how we can remove them.
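Ideally each stream would end up with only a small, fixed label set, something like this (illustrative only; the pod and container names here are made up):

```yaml
# hypothetical target label set per stream (names invented for illustration)
namespace: promtail-test
pod: example-app-5d9c7b
container: example-app
job: promtail-test/example-app-5d9c7b
```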
Many thanks,