I am migrating Loki from Helm chart version 5.14.1 to 6.27.0 and I am getting several errors. I am configuring a multi-tenant setup.
Currently I am using this configuration:
global:
  dnsService: coredns
test:
  enabled: false
monitoring:
  selfMonitoring:
    enabled: false
    grafanaAgent:
      installOperator: false
loki:
  auth_enabled: true
  schemaConfig:
    configs:
      - from: "2023-02-28"
        index:
          prefix: loki_ops_index_
          period: 24h
        object_store: s3
        schema: v11
        store: tsdb
      - from: "2025-02-28"
        index:
          prefix: loki_ops_index_
          period: 24h
        object_store: s3
        schema: v13
        store: tsdb
  querier:
    multi_tenant_queries_enabled: true
  storage:
    bucketNames:
      chunks: bucket-chunks
      ruler: bucket-ruler
      admin: bucket-admin
    s3:
      endpoint: https://<ENDPOINT>/
      region: <REGION>
      secretAccessKey: <KEY>
      accessKeyId: <ACCESS_KEY>
  limits_config:
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    max_cache_freshness_per_query: 10m
    split_queries_by_interval: 15m
    query_timeout: 300s
    volume_enabled: true
ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: prod
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-realm: Tenants
    nginx.ingress.kubernetes.io/auth-secret: <TENANT_SECRET_NAME>
    nginx.ingress.kubernetes.io/auth-secret-type: auth-file
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Scope-OrgID $remote_user;
  hosts:
    - LOKI_DOMAIN_NAME
  tls:
    - secretName: tls
      hosts:
        - LOKI_DOMAIN_NAME
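For reference, the basic-auth secret referenced by the auth-secret annotation was created roughly like this (secret name, namespace, and tenant names are placeholders). With auth-secret-type: auth-file, ingress-nginx expects the htpasswd contents under the "auth" key, and each username doubles as the tenant ID because the configuration snippet forwards $remote_user as X-Scope-OrgID:

# one htpasswd entry per tenant; the username becomes the X-Scope-OrgID value
htpasswd -c auth org1
htpasswd auth org2
# ingress-nginx's auth-file type reads the htpasswd data from the "auth" key
kubectl create secret generic <TENANT_SECRET_NAME> --from-file=auth -n <NAMESPACE>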
I am not sure if I have to delete the PVCs for the old loki-write pods, but without deleting them I noticed errors like this:
level=error ts=2025-02-27T21:31:17.857503924Z caller=flush.go:261 component=ingester loop=19 org_id=org1|org2 msg="failed to flush" retries=7 err="failed to flush chunks: multiple org IDs present, num_chunks: 1, labels: {app=\"label1\", chart=\"mychart\", cluster=\"mycluster\", component=\"mycomponent\", filename=\"/var/log/pods/..../registry/0.log\", heritage=\"Helm\", job=\"monitoring/kubernetes-logs\", namespace=\"my_namespace\", pod=\"mypod\", pod_template_hash=\"...\", release=\"myrelease\"}"
I also had one writer that could not start the ingester, so I deleted the PVC on that one, which solved the readiness issue caused by the ingester. The "multiple org IDs" errors also seem to be gone, but now there is a different one:
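What I did on that writer was roughly the following (namespace and claim name are placeholders; the PVC names may differ per install):

# list the PVCs belonging to the write StatefulSet
kubectl get pvc -n <NAMESPACE> | grep write
# delete the claim of the stuck writer once its pod is gone, so the claim is released
kubectl delete pvc data-loki-write-1 -n <NAMESPACE>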
level=warn ts=2025-03-03T20:14:17.812719897Z caller=grpc_logging.go:76 method=/logproto.Querier/GetChunkIDs duration=204.46µs msg=gRPC err="rpc error: code = Code(499) desc = The request was cancelled by the client."
My data sources work as expected and have a valid connection.
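For context, each tenant is queried through the ingress with basic auth along these lines (credentials are placeholders):

# the basic-auth username is forwarded as X-Scope-OrgID by the configuration snippet above
curl -s -u org1:<PASSWORD> https://LOKI_DOMAIN_NAME/loki/api/v1/labels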
Also, in Grafana Cloud the Drilldown shows the error:
An error occurred within the plugin
The dashboard shows the logs, but I am not sure whether it is configured correctly. So I am not sure whether I have to clear these errors to have a valid multi-tenant configuration, and how to remove the error in the Grafana Cloud Drilldown UI. Do I have to delete the rest of the old loki-write PVCs? Or is something wrong in my configuration? Or am I skipping something on my side?