Loki Helm chart via Terraform throwing error ConfigMap unmarshal map[string]string

I am trying to install Loki in Distributed mode, with logs sent to S3. I am using the latest Helm chart version and deploying to EKS 1.33 with Terraform. I have tried a lot of different values, but the Helm installation keeps failing with the error below.
│ 1 error occurred:
│ * ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot
│ unmarshal string into Go struct field ConfigMap.data of type
│ map[string]string

The Terraform code block for the Helm installation:

resource "helm_release" "loki" {
  name             = "loki"
  repository       = "https://grafana.github.io/helm-charts"
  chart            = "loki"
  namespace        = var.k8s_namespace
  create_namespace = true
  atomic           = false
  cleanup_on_fail  = true
  timeout          = 300

  values = [
    yamlencode({
      deploymentMode = "Distributed"

      loki = {
        auth_enabled = false
        commonConfig = { replication_factor = 1 }

        # Structured configuration replaces both the 'config' string and separate blocks
        structuredConfig = {
          limits_config = {
            allow_structured_metadata = true
          }
          schema_config = {
            configs = [
              {
                from         = "2024-01-01"
                store        = "tsdb"
                object_store = "s3"
                schema       = "v13"
                index = {
                  prefix = "loki_index_"
                  period = "24h"
                }
              }
            ]
          }
          storage_config = {
            aws = {
              s3               = "s3://${aws_s3_bucket.loki_storage.id}"
              region           = data.aws_region.current.id
              s3forcepathstyle = true
            }

            tsdb_shipper = {
              active_index_directory = "/var/loki/index"
              cache_location         = "/var/loki/cache"
              cache_ttl              = "24h"
            }
          }
        }

        # Keep storage settings for Helm chart internals
        storage = {
          type = "s3" # Add storage type
          bucketNames = {
            chunks = aws_s3_bucket.loki_storage.id
            ruler  = aws_s3_bucket.loki_storage.id
            admin  = aws_s3_bucket.loki_storage.id
          }
        }
      }

      distributor = {
        replicas       = 2
        maxUnavailable = 1
      }

      ingester = {
        replicas = 2
        persistence = {
          enabled      = true
          size         = "10Gi"
          storageClass = var.ebs_storage_class_name
        }
      }

      querier = {
        replicas       = 2
        maxUnavailable = 1
      }

      queryFrontend = {
        replicas       = 2
        maxUnavailable = 1
      }

      compactor = {
        enabled                       = true
        retention_enabled             = true
        retention_delete_delay        = "2h"
        retention_delete_worker_count = 150
        working_directory             = "/var/loki/compactor"
        shared_store                  = "aws"
      }

      ruler = {
        enabled = false
      }

      gateway = {
        enabled = true
      }

      queryScheduler = {
        enabled = true
      }

      frontendWorker = {
        enabled = true
      }

      backend = {
        enabled  = false
        replicas = 0
      }

      read = {
        enabled  = false
        replicas = 0
      }

      write = {
        enabled  = false
        replicas = 0
      }
    })
  ]

  depends_on = [
    aws_iam_role_policy_attachment.loki_policy_attachment,
    aws_s3_bucket.loki_storage
  ]
}
 
 

I also create the S3 bucket and the necessary IAM role for Loki to access S3.
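For reference, those resources look roughly like the sketch below. The resource names match the references in the helm_release above (aws_s3_bucket.loki_storage, aws_iam_role.loki_role, aws_iam_role_policy_attachment.loki_policy_attachment); the bucket name, policy actions, OIDC variables and service-account name here are placeholders, not my exact values.

variable "oidc_provider_arn" {
  description = "ARN of the EKS cluster's IAM OIDC provider (placeholder)"
  type        = string
}

variable "oidc_provider_url" {
  description = "OIDC issuer URL without the https:// prefix (placeholder)"
  type        = string
}

resource "aws_s3_bucket" "loki_storage" {
  bucket = "loki-storage-dev" # placeholder bucket name
}

# Minimal S3 permissions for Loki chunks/index/ruler objects
data "aws_iam_policy_document" "loki_s3" {
  statement {
    actions = ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
    resources = [
      aws_s3_bucket.loki_storage.arn,
      "${aws_s3_bucket.loki_storage.arn}/*",
    ]
  }
}

# IRSA trust policy: let the Loki service account assume the role via the cluster OIDC provider
data "aws_iam_policy_document" "loki_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [var.oidc_provider_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${var.oidc_provider_url}:sub"
      values   = ["system:serviceaccount:${var.k8s_namespace}:loki"] # namespace / SA name are placeholders
    }
  }
}

resource "aws_iam_policy" "loki_policy" {
  name   = "loki-s3-access"
  policy = data.aws_iam_policy_document.loki_s3.json
}

resource "aws_iam_role" "loki_role" {
  name               = "loki-role"
  assume_role_policy = data.aws_iam_policy_document.loki_trust.json
}

resource "aws_iam_role_policy_attachment" "loki_policy_attachment" {
  role       = aws_iam_role.loki_role.name
  policy_arn = aws_iam_policy.loki_policy.arn
}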

I am now clueless about what I am doing wrong. Can any expert please help? I appreciate your response.

That's a problem with your Terraform/Helm/Kubernetes setup, not with Loki itself, so this is not the best forum for this issue.

A few hints: you are mixing Helm values into Terraform HCL, which is terrible for debugging and for humans to read.

Create a values.yaml (YAML is a more readable format for humans than HCL/JSON). Use the default chart values.yaml as a base and customize it for your needs. Use helm (not Terraform) for deployment at this step, so you can be sure that values.yaml is valid and working as expected.

Once helm is working, it's Terraform time: turn values.yaml into a template and render it at the TF level, e.g.:

resource "helm_release" "this" {
  ...

  values = [
    templatefile("${path.module}/values.yaml", {
      variable = "value"
    })
  ]
}

Exactly. I realised that, so I moved to the templatefile method. But now I am getting an error from the compactor, even though the S3 bucket is in the right region. Here is my values template:

loki:
  schemaConfig:
    configs:
      - from: "2024-04-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h
  storage_config:
    aws:
      region: "eu-west-2" # for example, eu-west-2
      bucketnames: "loki-storage" # Your actual S3 bucket name, for example, loki-aws-dev-chunks
      s3forcepathstyle: false
  ingester:
    chunk_encoding: snappy
  pattern_ingester:
    enabled: true
  limits_config:
    allow_structured_metadata: true
    volume_enabled: true
    retention_period: 672h # 28 days retention
  compactor:
    retention_enabled: true
    delete_request_store: s3
  ruler:
    enable_api: true
    storage:
      type: s3
      s3:
        region: "eu-west-2" # for example, eu-west-2
        bucketnames: ${s3_bucket_name} # Your actual S3 bucket name, for example, loki-aws-dev-ruler
        s3forcepathstyle: false
      alertmanager_url: http://prom:9093 # The URL of the Alertmanager to send alerts (Prometheus, Mimir, etc.)

  querier:
    max_concurrent: 4

  storage:
    type: s3
    bucketNames:
      chunks: ${s3_bucket_name} # Your actual S3 bucket name (loki-aws-dev-chunks)
      ruler: ${s3_bucket_name} # Your actual S3 bucket name (loki-aws-dev-ruler)
      # admin: "<Insert s3 bucket name>" # Your actual S3 bucket name (loki-aws-dev-admin) - GEL customers only
    s3:
      region: "eu-west-2" # eu-west-2
      #insecure: false
    # s3forcepathstyle: false

serviceAccount:
  create: true
  name: "loki-sa"
  annotations:
    "eks.amazonaws.com/role-arn": "arn:aws:iam::717949064245:role/eks-managed-clstr-dev-loki-role" # The service role you created

deploymentMode: Distributed

ingester:
  replicas: 3
  zoneAwareReplication:
    enabled: false

querier:
  replicas: 3
  maxUnavailable: 2

queryFrontend:
  replicas: 2
  maxUnavailable: 1

queryScheduler:
  replicas: 2

distributor:
  replicas: 3
  maxUnavailable: 2

compactor:
  replicas: 1

indexGateway:
  replicas: 2
  maxUnavailable: 1

ruler:
  replicas: 1
  maxUnavailable: 1

gateway:
  enabled: false

# Enable minio for storage
minio:
  enabled: false

backend:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0

singleBinary:
  replicas: 0
resource "helm_release" "loki" {
  name             = "loki"
  repository       = "https://grafana.github.io/helm-charts"
  chart            = "loki"
  version          = var.loki_chart_version
  namespace        = var.k8s_namespace
  create_namespace = true
  atomic           = false  # Set to false to debug installation issues
  cleanup_on_fail  = true
  timeout          = 300

  # Use a values file instead of inline values
  values = [
    templatefile("${path.module}/loki-values-updated.yaml", {
      loki_service_account_name = var.loki_service_account_name
      loki_role_arn = aws_iam_role.loki_role.arn
      s3_bucket_name = aws_s3_bucket.loki_storage.id
      aws_region = data.aws_region.current.id
      ebs_storage_class_name = var.ebs_storage_class_name
    })
  ]

  depends_on = [
    aws_iam_role_policy_attachment.loki_policy_attachment,
    aws_s3_bucket.loki_storage
  ]
}

And this is the error from the compactor:

init compactor: failed to init delete store: failed to get s3 object: BucketRegionError: incorrect region, the bucket is not in 'eu-west-2' region at endpoint '', bucket is in 'ap-southeast-1' region
status code: 301, request id: 5Z4DHJG625ZCWC9X, host id: QKQ+OGq7lj+H/GSwz75kbuvsl+vmSdiuGuUdy/2TGo5H3dlzgksiRO30tlioQbSuslCWNSRYaC79FpkTxN/vt3EGzwXNcdgH
error initialising module: compactor

Any help?

Found the problem. Bucket name was not getting passed properly.
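For anyone hitting the same thing: the likely culprit in the values template above is that loki.storage_config.aws.bucketnames was hardcoded to "loki-storage" while the other sections used the ${s3_bucket_name} placeholder, so the compactor presumably resolved a bucket name that did not match the real bucket (hence the 301 / BucketRegionError). A sketch of the corrected section, assuming that was the cause (region and bucket name come from the templatefile variables already passed in the helm_release):

loki:
  storage_config:
    aws:
      region: ${aws_region}
      bucketnames: ${s3_bucket_name} # templated, so it always matches the real bucket
      s3forcepathstyle: false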