Loki Helm chart "pending" for an unknown reason

I am trying out Loki using the simple scalable deployment Helm chart on my MacBook Pro M1 (ARM64). Some of the pods start and run fine, but most just stay Pending forever.

I checked the docs, but unfortunately they're not all that helpful here.

Chart version: 6.3.4
Loki version: 3.0.0
Minikube version: v1.33.0 (commit: 86fc9d54fca63f295d8737c8eacdbb7987e89c67)
Docker Desktop version: 4.29.0 (145265)

This is what the pods look like:

loki-backend-0                  2/2     Running   0          5m59s
loki-backend-1                  0/2     Pending   0          5m59s
loki-backend-2                  0/2     Pending   0          5m59s
loki-canary-6gtdj               1/1     Running   0          5m59s
loki-chunks-cache-0             0/2     Pending   0          5m59s
loki-gateway-747bbf5b8f-8grh2   1/1     Running   0          5m59s
loki-read-5954879c69-27zsq      1/1     Running   0          5m59s
loki-read-5954879c69-65b9k      0/1     Pending   0          5m59s
loki-read-5954879c69-qv5dw      0/1     Pending   0          5m59s
loki-results-cache-0            2/2     Running   0          5m59s
loki-write-0                    1/1     Running   0          5m59s
loki-write-1                    0/1     Pending   0          5m59s
loki-write-2                    0/1     Pending   0          5m59s

The values.yaml is basically the default one the chart gives you, except that it complained about a missing “schema_config”, so I added a “schemaConfig” block:

---
loki:
  storage:
    bucketNames:
      chunks: chunks
      ruler: ruler
      admin: admin
    type: s3

    minio:
      enabled: true

  schemaConfig:
    configs:
      - from: 2024-04-01
        object_store: s3
        store: tsdb
        schema: v13
        index:
          prefix: index_
          period: 24h

How do I debug this? I have no idea what’s going on to be honest.

Check the events on the pending pods. I'd guess they can't be scheduled for some reason: most likely there are anti-affinity constraints that stop them from being scheduled onto the same node (I assume you have a single-node cluster, since you're running minikube).
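
For example, using one of the pending pods from your list:

$ kubectl describe pod loki-backend-1
$ kubectl get events --sort-by=.lastTimestamp

The Events section at the bottom of the describe output usually says exactly why the scheduler is skipping the pod.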

On a single-node cluster (minikube) I would run a single-replica monolithic Loki instead.
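
Something along these lines in values.yaml should do it (a rough sketch I have not run myself; the key names are how I read the 6.x chart's own values.yaml, so double-check them there):

deploymentMode: SingleBinary

singleBinary:
  replicas: 1

# turn the simple scalable targets off
backend:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0

That way everything runs in one pod, and the anti-affinity rules between the read/write/backend replicas no longer matter.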


A quick Google search suggested running the describe command against the pending pods. When I did that, loki-chunks-cache-0 didn't have enough memory. I increased the memory available to Docker Desktop to 20 GB and that pod started.
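
(In case anyone would rather not hand Docker Desktop 20 GB: if I'm reading the chart's values.yaml correctly, the chunks cache size can be turned down instead, something like the snippet below. Treat the key name and default as my assumption and verify it against the chart.)

chunksCache:
  # value is in MB; the chart default (8192, if I remember right) is what drives the huge memory request
  allocatedMemory: 1024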

I have a feeling you're right about the other pods, though: they just can't run on one node like that, which of course makes sense given that this is the simple “scalable” deployment after all (a replica-count workaround is sketched after the error output below).

Here are the errors:

$ kubectl describe pods loki-backend-1
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  98s                default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
  Warning  FailedScheduling  94s (x2 over 97s)  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

$ kubectl describe pods loki-read-5954879c69-9674j
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  2m44s  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..

$ kubectl describe pods loki-write-1
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m28s  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..