Hi all, I have Loki running on a VM, and I'm using Promtail on another VM to collect Kubernetes logs and send them to Loki.
Hitting Loki's /ready endpoint returns ready.
My Promtail configuration is pretty basic:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
  namespace: monitoring
data:
  config.yml: |
    server:
      http_listen_port: 0
      grpc_listen_port: 0
      log_level: info
    positions:
      filename: /var/log/positions.yaml
    clients:
      - url: http://<external-url>:3100
        timeout: 10s
    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - namespaces:
              names:
                - default
            role: pod
        relabel_configs:
          - source_labels:
              - __meta_kubernetes_pod_node_name
            target_label: __host__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - action: replace
            replacement: $1
            separator: /
            source_labels:
              - __meta_kubernetes_namespace
              - __meta_kubernetes_pod_name
            target_label: job
          - action: replace
            source_labels:
              - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_name
            target_label: pod
          - action: replace
            source_labels:
              - __meta_kubernetes_pod_container_name
            target_label: container
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail-service-account
  namespace: monitoring
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail
  labels:
    app: promtail
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs:
      - get
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: promtail
  labels:
    app: promtail
subjects:
  - kind: ServiceAccount
    name: promtail-service-account
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: promtail
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: promtail
  template:
    metadata:
      labels:
        app: promtail
    spec:
      serviceAccountName: promtail-service-account
      containers:
        - name: promtail
          image: grafana/promtail:latest
          args:
          volumeMounts:
            - name: promtail-config
              mountPath: /etc/promtail
          env:
            - name: PROMTAIL_PORT
              valueFrom:
                fieldRef:
                  fieldPath: metadata.annotations['promtail-port']
      volumes:
        - name: promtail-config
          configMap:
            name: promtail-config
```
When I run `kubectl -n monitoring logs promtail-12345`, I get:
```
level=info ts=2024-11-20T07:52:38.896694933Z caller=promtail.go:133 msg="Reloading configuration file" md5sum=81ebd3e2403438f9662b6999fb01e125
level=info ts=2024-11-20T07:52:38.897123412Z caller=kubernetes.go:331 component=discovery discovery=kubernetes config=kubernetes-pods msg="Using pod service account via in-cluster config"
level=info ts=2024-11-20T07:52:38.898463061Z caller=server.go:352 msg="server listening on addresses" http=[::]:42595 grpc=[::]:43979
level=info ts=2024-11-20T07:52:38.89855898Z caller=main.go:173 msg="Starting Promtail" version="(version=3.1.2, branch=HEAD, revision=41a2ee77e8)"
level=warn ts=2024-11-20T07:52:38.898624153Z caller=promtail.go:263 msg="enable watchConfig"
```
But no logs are sent to Loki (checked via Explore in Grafana).
Can someone help out?
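For reference, this is the shape of the JSON body I'd expect to land on Loki's push endpoint (default path `/loki/api/v1/push` on port 3100). This is just a minimal sketch for illustration; `build_push_payload` is my own hypothetical helper, not part of Promtail, and it assumes Loki's standard JSON push format:

```python
import json
import time

# Sketch of the JSON body POSTed to Loki's push API
# (default endpoint: http://<external-url>:3100/loki/api/v1/push).
# build_push_payload is a made-up helper for illustration only.
def build_push_payload(labels: dict, lines: list) -> dict:
    # Loki expects timestamps as strings of nanoseconds since the epoch.
    ts_ns = str(time.time_ns())
    return {
        "streams": [
            {
                "stream": labels,  # the label set, e.g. {"job": "manual-test"}
                "values": [[ts_ns, line] for line in lines],
            }
        ]
    }

payload = build_push_payload({"job": "manual-test"}, ["hello loki"])
print(json.dumps(payload))
```

Posting that JSON directly to the push endpoint with curl and then checking Explore in Grafana would at least tell me whether the Loki side is ingesting at all.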