Promtail fails to scrape log files

I would like to use Loki to collect server logs for my AWS EKS Fargate application. As I am new to Loki, I tried to follow the simplest example when setting up the config in promtail.yaml, but it failed no matter which approach I tried…

I tried kubectl logs to find some hints in the pod's output, but nothing looks unusual…

level=info ts=2022-09-15T10:10:38.558067919Z caller=server.go:288 http=[::]:3101 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2022-09-15T10:10:38.558268213Z caller=main.go:121 msg="Starting Promtail" version="(version=2.6.1, branch=HEAD, revision=6bd05c9a4)"
level=info ts=2022-09-15T10:10:43.559457087Z caller=filetargetmanager.go:338 msg="Adding target" key="/usr/local/jboss_api/standalone/log*.log:{host=\"co2\", job=\"apilogs\"}"

When I browsed to the Promtail UI on port 3101, it showed that my target is not ready: api-log (0/1 ready).
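To check whether any log files actually exist where Promtail is looking, the directory can be listed from inside the Promtail container (namespace, deployment, and container names taken from the manifests below):

```shell
# List the directory Promtail's __path__ glob points into,
# from inside the promtail-container of the k8s-dev deployment.
kubectl exec -n fargate-api-dev deploy/k8s-dev -c promtail-container -- \
  ls -l /usr/local/jboss_api/standalone/log
```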

My config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: api-dev-log-config
  namespace: fargate-api-dev
data:
  promtail.yaml: |
    server:
      log_level: info
      http_listen_port: 3101

    clients:
      - url: http://loki.logging:3100/loki/api/v1/push

    positions:
      filename: /run/promtail/positions.yaml

    scrape_configs:
      - job_name: api-log
        pipeline_stages:
        - cri: {}
        static_configs:
        - targets:
          - localhost
          labels:
            job: apilogs
            host: co2 
            __path__: /usr/local/jboss_api/standalone/log*.log
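One thing I am not sure about: the containers mount the shared log volume at /usr/local/jboss_api/standalone/log (a directory), while my glob log*.log matches files named log*.log directly under standalone/. If the app writes its *.log files into that directory, the label would presumably need to be:

```yaml
labels:
  job: apilogs
  host: co2
  __path__: /usr/local/jboss_api/standalone/log/*.log
```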

rbac.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail-clusterrole
  namespace: fargate-api-dev
rules:
  - apiGroups: [""]
    resources:
    - nodes
    - services
    - pods
    verbs:
    - get
    - watch
    - list
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail-clusterrolebinding
  namespace: fargate-api-dev
subjects:
    - kind: ServiceAccount
      name: api-dev-sa
      namespace: default
roleRef:
    kind: ClusterRole
    name: promtail-clusterrole
    apiGroup: rbac.authorization.k8s.io
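While double-checking, I also noticed that the ClusterRoleBinding subject references the ServiceAccount in the default namespace, but the deployment below runs api-dev-sa in fargate-api-dev. I am not sure whether RBAC matters at all for file-based static_configs, but if the binding is supposed to match, it would presumably be:

```yaml
subjects:
  - kind: ServiceAccount
    name: api-dev-sa
    namespace: fargate-api-dev
```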

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: fargate-api-dev
  name: k8s-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-dev
  template:
    metadata:
      labels:
        app: k8s-dev
    spec:
      volumes:
        - name: dev-properties
          configMap: 
            name: dev-properties
        - name: api-dev-log
          emptyDir: {}
        - name: run
          emptyDir: {}
        - name: promtail-config
          configMap:
            name: api-dev-log-config
      serviceAccount: api-dev-sa
      serviceAccountName: api-dev-sa

      containers:
        - name: promtail-container
          image: promtail:2.6.1
          args:
          - -config.file=/etc/promtail/promtail.yaml
          volumeMounts:
          - name: api-dev-log
            mountPath: /usr/local/jboss_api/standalone/log
            readOnly: false
          - name: promtail-config
            mountPath: /etc/promtail
            readOnly: false
          - name: run
            mountPath: /run/promtail
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: "500Mi"
              cpu: "200m"
            limits:
              memory: "500Mi"
              cpu: "200m"

        - name: k8s-api
          image: k8s-api:1.2.0
          ports:
            - containerPort: 8443
          resources:
            requests:
              memory: "2000Mi"
              cpu: "500m"
            limits:
              memory: "2000Mi"
              cpu: "500m"
          volumeMounts:
            - name: api-dev-log
              mountPath: /usr/local/jboss_api/standalone/log
              readOnly: false

It would be great if anyone could give me some hints on how to continue, thank you so much!
