Loki on kind with helm, minimal setup needed for testing fluent-bit ingest

Hey friends,

I’m using fluent-bit to ingest k8s events into Loki and I think I’ve spotted some sort of duplication bug. I’m pretty sure the bug is on the fluent-bit side, but the duplicates only show up in the Loki entries, not in fluent-bit’s stdout output.

I’d like to build a repro setup on kind to prove it out and hopefully fix it.

Can anyone recommend a good minimal Helm values file for installing Loki into kind?

I’m hoping to avoid the S3 dependency and end up with something a fluent-bit dev could run on a laptop. It only needs to store < 1 MiB of log data.

Thanks in advance!
Mat

If you just want to run Loki for local testing, I’d recommend running it as a single Docker container in monolithic mode. See Install Loki with Docker or Docker Compose | Grafana Loki documentation.
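
For example, a compose file along these lines is enough (just a sketch that leans on the default config bundled in the image; pin whatever tag you actually want to test against):

# docker-compose.yaml -- single Loki container in monolithic mode,
# using the image's built-in /etc/loki/local-config.yaml
services:
  loki:
    image: grafana/loki:3.4.1
    ports:
      - "3100:3100"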

You can skip the Promtail part, write a quick Helm chart or Deployment YAML for a single Loki container, throw in a PV for Loki chunk storage if you feel like it, and you should be good to go.
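
If you do want the chunks to survive pod restarts, a small PVC is all it takes (untested sketch; on a stock kind cluster the default "standard" storage class will back it, and the claim name here is just a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loki-data   # placeholder name; mount this at Loki's storage path, e.g. /tmp/loki
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi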

Thanks @tonyswumac !

This did the trick for a single-container Loki:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-config
data:
  local-config.yaml: |
    auth_enabled: false

    server:
      http_listen_port: 3100
      grpc_listen_port: 9096
      log_level: debug
      grpc_server_max_concurrent_streams: 1000

    common:
      instance_addr: 127.0.0.1
      path_prefix: /tmp/loki
      storage:
        filesystem:
          chunks_directory: /tmp/loki/chunks
          rules_directory: /tmp/loki/rules
      replication_factor: 1
      ring:
        kvstore:
          store: inmemory

    query_range:
      results_cache:
        cache:
          embedded_cache:
            enabled: true
            max_size_mb: 100

    limits_config:
      metric_aggregation_enabled: true

    schema_config:
      configs:
        - from: 2020-10-24
          store: tsdb
          object_store: filesystem
          schema: v13
          index:
            prefix: index_
            period: 24h

    pattern_ingester:
      enabled: true
      metric_aggregation:
        loki_address: localhost:3100

    ruler:
      alertmanager_url: http://localhost:9093

    frontend:
      encoding: protobuf

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loki
spec:
  selector:
    matchLabels:
      app: loki
  template:
    metadata:
      labels:
        app: loki
    spec:
      containers:
      - name: loki
        image: grafana/loki:3.4.1
        resources:
          limits:
            memory: "1Gi"
            cpu: "500m"
          requests:
            memory: "512Mi"
            cpu: "250m"
        ports:
        - containerPort: 3100
        volumeMounts:
        - name: loki-config
          mountPath: /etc/loki
      volumes:
      - name: loki-config
        configMap:
          name: loki-config

---
apiVersion: v1
kind: Service
metadata:
  name: loki
spec:
  selector:
    app: loki
  ports:
  - port: 80
    targetPort: 3100
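
For anyone following along, the fluent-bit side looks roughly like this (a trimmed-down sketch in fluent-bit's YAML config format, not my exact config; it assumes everything sits in the default namespace and that your fluent-bit build includes the kubernetes_events input):

service:
  flush: 1

pipeline:
  inputs:
    # watch Kubernetes events from the API server
    - name: kubernetes_events
      tag: k8s_events

  outputs:
    # print the same records to stdout for comparison with what lands in Loki
    - name: stdout
      match: k8s_events

    # push to the loki Service above (port 80 -> container 3100)
    - name: loki
      match: k8s_events
      host: loki.default.svc.cluster.local
      port: 80
      labels: job=fluent-bit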

Unfortunately, the bug I was hoping to reproduce doesn’t show up in this configuration. :lolsob:

I’ll get cracking on a more complete reproduction on EKS with Helm and S3. I still think this is on the fluent-bit side; either way, I’ll update this thread with whatever bug I end up opening.