Helm: failed parsing config

I am trying to install the loki-distributed chart, using S3 for storage. I have deployed the exact same values to 2 other environments, one using Ceph and one using Minio. However, I am getting the following errors after deploying:

    failed parsing config: /etc/loki/config/config.yaml: yaml: unmarshal errors:
      line 3: field max_look_back_period not found in type config.ChunkStoreConfig
      line 7: field shared_store not found in type compactor.Config
      line 29: field max_transfer_retries not found in type ingester.Config
      line 36: field enforce_metric_name not found in type validation.plain
      line 82: field shared_store not found in type boltdb.IndexCfg

This is the relevant section of my values.yaml:

    storage_config:
      boltdb_shipper:
        active_index_directory: /var/loki/index
        cache_location: /var/loki/cache
        cache_ttl: 168h
        shared_store: filesystem
      filesystem:
        directory: /var/loki/chunks
      aws:
        endpoint: "s3.us-gov-west-1.amazonaws.com"
        region: "us-gov-west-1"
        bucketnames: my-bucket/dir1/dir2
        s3forcepathstyle: true
        insecure: true

I have confirmed that the pods are mounting the correct ConfigMap, and the ConfigMap is structured according to the docs, so I am not sure what I am doing wrong.

What does your index configuration look like?

I have not configured that; I did not have to in order to get Loki running with the default values on the other environments I currently have (which all also run Mimir). Not sure why that would prevent the YAML from being unmarshaled into a struct.

Can you share your entire Loki configuration? I believe you need at least one index configuration.

Sure. As mentioned, I am only overriding a few of the default values. By default the chart configures one index:

  schemaConfig:
    configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

Overrides:

    # globals
    global:
      image:
        registry: &imageRegistry "localregistry:5000"
      dnsService: "rke2-coredns-rke2-coredns"
    # (tags are set for all images)

    storageConfig:
      tsdb_shipper:
        active_index_directory: /loki/index
        cache_location: /loki/index_cache
        cache_ttl: 24h
      # -- Uncomment to configure each storage individually
      # azure: {}
      # gcs: {}
      aws:
        endpoint: "s3.us-gov-west-1.amazonaws.com"
        region: "us-gov-west-1"
        bucketnames: my-bucket/subdir1/subdir2
        s3forcepathstyle: true
        insecure: true

Here is the ConfigMap:

apiVersion: v1
data:
  config.yaml: |
    auth_enabled: false
    chunk_store_config:
      max_look_back_period: 0s
    common:
      compactor_address: http://loki-loki-distributed-compactor:3100
    compactor:
      shared_store: filesystem
      working_directory: /var/loki/compactor
    distributor:
      ring:
        kvstore:
          store: memberlist
    frontend:
      compress_responses: true
      log_queries_longer_than: 5s
      tail_proxy_url: http://loki-loki-distributed-querier:3100
    frontend_worker:
      frontend_address: loki-loki-distributed-query-frontend-headless:9095
    ingester:
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_idle_period: 30m
      chunk_retain_period: 1m
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      max_transfer_retries: 0
      wal:
        dir: /var/loki/wal
    ingester_client:
      grpc_client_config:
        grpc_compression: gzip
    limits_config:
      enforce_metric_name: false
      max_cache_freshness_per_query: 10m
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      split_queries_by_interval: 15m
    memberlist:
      join_members:
      - loki-loki-distributed-memberlist
    query_range:
      align_queries_with_step: true
      cache_results: true
      max_retries: 5
      results_cache:
        cache:
          embedded_cache:
            enabled: true
            ttl: 24h
    ruler:
      alertmanager_url: https://alertmanager.xx
      external_url: https://alertmanager.xx
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      storage:
        local:
          directory: /etc/loki/rules
        type: local
    runtime_config:
      file: /var/loki-distributed-runtime/runtime.yaml
    schema_config:
      configs:
      - from: "2020-09-07"
        index:
          period: 24h
          prefix: loki_index_
        object_store: filesystem
        schema: v11
        store: boltdb-shipper
    server:
      http_listen_port: 3100
    storage_config:
      aws:
        bucketnames: machina-artifacts/management-prod/loki-logs
        endpoint: s3.us-gov-west-1.amazonaws.com
        insecure: true
        region: us-gov-west-1
        s3forcepathstyle: true
      tsdb_shipper:
        active_index_directory: /loki/index
        cache_location: /loki/index_cache
        cache_ttl: 24h
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2024-11-08T13:33:45Z"
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: loki-distributed
    app.kubernetes.io/version: 2.9.10
    helm.sh/chart: loki-distributed-0.79.4
  name: loki-loki-distributed
  namespace: monitoring
  resourceVersion: "13257165"
  uid: 204c50e3-0ad9-4c2d-af33-951b9080206a

Any ideas?

IMHO you are using old config options that no longer exist in newer Loki versions. For example, chunk_store_config no longer has max_look_back_period.
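
Every field in that error list was removed in Loki 3.0, which suggests the image actually being pulled (note your local-registry override and image tags) is a 3.x image even though the chart labels say app version 2.9.10. As a rough sketch of how those 2.x fields map in 3.x (verify against the upgrade guide for your exact version, and the paths below are just illustrative):

    # Loki 3.x equivalents of the rejected 2.x fields (sketch, not a full config)
    chunk_store_config: {}        # max_look_back_period removed;
                                  # use limits_config.max_query_lookback instead
    compactor:
      working_directory: /var/loki/compactor
      delete_request_store: filesystem   # replaces compactor.shared_store
    ingester:
      # max_transfer_retries removed entirely; the WAL handles handoff now
      wal:
        dir: /var/loki/wal
    limits_config:
      # enforce_metric_name removed entirely; just delete the line
      reject_old_samples: true
    storage_config:
      boltdb_shipper:
        active_index_directory: /var/loki/index
        cache_location: /var/loki/cache
        # shared_store removed; the object store is now taken from
        # schema_config.configs[].object_store for each period

Alternatively, pinning the image back to a 2.9.x tag in your registry should make the existing config parse again.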