LGTM Stack on Kubernetes: structured_metadata Disallowed Error in Loki

Hello,

I’m trying to deploy the LGTM stack on a small Kubernetes cluster. I started by installing each component individually, but then discovered that Grafana provides two Helm charts, lgtm-distributed and k8s-mon, so I switched to using those. The documentation available online is very sparse, this installation is driving me crazy, and I could really use some help.

Right now, when I tail the logs of the k8s-mon Alloy logging daemon, I see:

ts=2025-07-20T20:16:17.139426707Z level=error msg="final error sending batch" component_path=/ component_id=loki.write.loki component=client host=lgtm-loki-distributor.observability.svc.cluster.local:3100 status=400 tenant="" error="server returned HTTP status 400 Bad Request (400): 30 errors like: stream '{cluster=\"ionos-cluster\", container=\"myTomcatService\", job=\"test/myTomcatService\", k8s_cluster_name=\"ionos-cluster\", namespace=\"test\", service_name=\"myTomcatService\"}' includes structured metadata, but this feature is disallowed. Please see `limits_config.structured_metadata` or contact your Loki administrator to enable it."
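
From what I can tell, the setting the message refers to lives in Loki’s own runtime config. This is what Loki itself needs to see (per the error text and the Loki docs; the docs also say structured metadata requires schema v13 with the TSDB index, and I’m assuming the chart defaults cover that part):

# Loki runtime config (not Helm values), minimal sketch:
limits_config:
  allow_structured_metadata: true

As you’ll see below, I thought I was already setting exactly this through the chart values.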

I’ve also added the OpenTelemetry agent to a Java 8 / Tomcat 9 application, with these environment variables in my Deployment:

          - name: OTEL_EXPORTER_OTLP_ENDPOINT
            value: http://k8s-mon-alloy-receiver.observability.svc.cluster.local:4318
          - name: OTEL_EXPORTER_OTLP_PROTOCOL
            value: http/protobuf
          # filled in by each kustomization
          - name: OTEL_SERVICE_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.labels['app']

My Helm values files are:

For lgtm-distributed:

global:
  clusterDomain: cluster.local
  dnsService:   coredns
  dnsNamespace: kube-system

rolloutOperator:
  enabled: false

grafana:
  service:
    type: ClusterIP  
    port: 80
  ingress:
    enabled: true
    ingressClassName: nginx    
    hosts:
      - grafana.mydomain.com
    path: /                   
    tls:
      - secretName: mydomain        # certificate already created
        hosts:
          - grafana.mydomain.com
  persistence:
    enabled: true
    existingClaim: grafana-pvc-nfs
  podSecurityContext:
    runAsUser: 472
    runAsGroup: 472
    fsGroup: 472
    fsGroupChangePolicy: "OnRootMismatch" 
  envFromSecrets:
    - name: ldap-bind-secret
  grafana.ini:
    server:
      root_url: https://grafana.mydomain.com/
    auth.ldap:
      enabled: true
      config_file: /etc/grafana/ldap.toml
    auth:
      disable_login_form: false
  ldap:
    enabled: true
    config: |-
      # ldap config...


loki:
  enabled: true
  structuredConfig:
    auth_enabled: false
    server:
      http_listen_port: 3100
    limits_config:
      retention_period: 2160h
      allow_structured_metadata: true
      volume_enabled: true
    pattern_ingester:
      enabled: true
    ruler:
      enable_api: true


# https://medium.com/@MadhavPrajapati/how-to-set-up-grafana-mimir-in-kubernetese-207b8693d1b5
mimir:
  # nginx:
  #   enabled: false
  ruler:
    enabled: false
  query_scheduler:
    enabled: false
  alertmanager:
    enabled: false
  overrides_exporter:
    enabled: false
  # ingester:
  #   zoneAwareReplication:
  #     enabled: false
  #   replicas: 1
  ingester:
    zoneAwareReplication:
      enabled: false
    persistentVolume:
      enabled: false
    replicas: 2
  queryScheduler:
    replicas: 0
  store_gateway:
    zoneAwareReplication:
      enabled: false
    replicas: 1
  rollout_operator:
    enabled: false
  distributor:
    extraArgs:
      auth.multitenancy-enabled: false
    replicaCount: 1
  querier:
    replicaCount: 1
  query_frontend:
    replicaCount: 1
  structuredConfig:
    blocks_storage:
      tsdb:
        retention_period: 90d
    compactor:
      compaction_interval: 24h
  minio:               
    enabled: true
    mode: standalone
    persistence:
      enabled: true
      existingClaim: lgtm-minio-nfs
    podSecurityContext:
      fsGroup: 1000 


tempo:
  singleBinary:
    enabled: true
    replicas: 1
    resourcesPreset: small
    persistence:
      enabled: false   
  memcached:
    enabled: false

grafana-oncall:
  enabled: false

For k8s-mon:

cluster:
  name: ionos-cluster        

destinations:
  - name: loki
    type: loki
    # url: http://lgtm-loki-gateway.observability.svc.cluster.local:3100/loki/api/v1/push
    url: http://lgtm-loki-distributor.observability.svc.cluster.local:3100/loki/api/v1/push
    
  - name: mimir
    type: prometheus
    url: http://lgtm-mimir-nginx.observability.svc.cluster.local/prometheus/api/v1/write

  # OTLP traces and metrics (optional; same Tempo/Mimir backends via OTLP)
  - name: otlp-backend
    type: otlp
    auth:
      type: none

podLogs:
  enabled: true
  namespaces:
    - test
  # excludeNamespaces: []

clusterMetrics:
  enabled: true          # cAdvisor, kube-state-metrics, node-exporter…

clusterEvents:
  enabled: true

applicationObservability:
  enabled: true          # OTLP reception from your apps
  receivers:
    otlp:
      http:
        enabled: true
        port: 4318
      grpc:
        enabled: true
        port: 4317
        
alloy-logs:
  enabled: true
alloy-metrics:
  enabled: true

alloy-receiver:
  enabled: true    


alloy-singleton:
  enabled: true         # for cluster events

global:
  scrapeInterval: 60s

I’ve even tried overriding the Loki config: section in my lgtm-values.yaml with a fully hand-crafted config: block (including limits_config.allow_structured_metadata: true) and then completely uninstalling and reinstalling the chart. Even so, the resulting lgtm-loki ConfigMap still omits my overrides.

Honestly, I’m quite desperate at this point. I don’t fully understand why my values aren’t being picked up, or whether I’m supposed to be using the default chart configuration instead. I’d appreciate any insight or suggestions on how to enable structured metadata support or otherwise resolve these errors. Thank you in advance for any help!

I don’t run our Loki cluster on Kubernetes, but I think lgtm-distributed is old. Please see Install Grafana Loki with Helm in the Grafana Loki documentation, and the Helm chart in the Loki GitHub repo.
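
If you do switch to the current grafana/loki chart, I believe the limit is exposed directly in its values, roughly like this (a sketch from memory; double-check it against the values.yaml of whichever chart version you install):

loki:
  limits_config:
    allow_structured_metadata: true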

@tonyswumac thank you.

It turns out Helm wasn’t interpreting my values file the way I expected and was silently dropping the values: in the case of Loki, they must be nested under loki.loki. This is confusing because the same thing doesn’t happen for grafana, mimir, or tempo.
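
For anyone who hits the same thing, this is the shape that finally landed in the lgtm-loki ConfigMap; a minimal sketch of the relevant part of my lgtm-values.yaml (the rest of the file above stays the same):

loki:                      # subchart block in lgtm-distributed
  loki:                    # values handed to the Loki subchart itself
    structuredConfig:
      auth_enabled: false
      limits_config:
        retention_period: 2160h
        allow_structured_metadata: true
        volume_enabled: true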

This link helped me a lot: lgtm-stack/helm/values-lgtm.local.yaml at main · daviaraujocc/lgtm-stack · GitHub