Mimir upgrade to helm chart 4.4.1 - now getting permission denied errors in ingester and store gateway

I upgraded the Mimir helm chart from 4.2.0 to 4.4.1, and now the ingester and store-gateway pods are stuck in a CrashLoopBackOff. The same values worked fine on 4.2.0, and I can't find any breaking changes in the version documentation that would explain this.
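
For context, the upgrade itself was nothing unusual, roughly the following (the release name, namespace, and values file are placeholders for my actual setup):

# upgrade from chart 4.2.0 to 4.4.1, reusing the existing values file
helm repo update
helm upgrade mimir grafana/mimir-distributed --version 4.4.1 -n monitoring -f values.yaml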

Store gateway logs:

ts=2023-06-08T16:45:11.783030527Z caller=main.go:213 level=info msg="Starting application" version="(version=2.8.0, branch=HEAD, revision=f917e08)"
ts=2023-06-08T16:45:11.790131445Z caller=server.go:322 level=info http=[::]:8080 grpc=[::]:9095 msg="server listening on addresses"
ts=2023-06-08T16:45:11.799187862Z caller=memberlist_client.go:437 level=info msg="Using memberlist cluster label and node name" cluster_label= node=mimir-store-gateway-0-aa953e53
ts=2023-06-08T16:45:11.801484136Z caller=inmemory.go:173 level=info msg="created in-memory index cache" maxItemSizeBytes=134217728 maxSizeBytes=1073741824 maxItems=maxInt
ts=2023-06-08T16:45:11.801657379Z caller=cache.go:106 level=info msg="created chunks cache"
ts=2023-06-08T16:45:11.80277687Z caller=module_service.go:82 level=info msg=initialising module=activity-tracker
ts=2023-06-08T16:45:11.80286604Z caller=module_service.go:82 level=info msg=initialising module=sanity-check
ts=2023-06-08T16:45:11.802927917Z caller=sanity_check.go:32 level=info msg="Checking directories read/write access"
ts=2023-06-08T16:45:11.802851455Z caller=module_service.go:82 level=info msg=initialising module=usage-stats
ts=2023-06-08T16:45:11.804271035Z caller=sanity_check.go:34 level=error msg="Unable to access directory" err="store-gateway: failed to access directory /data/tsdb-sync: open /data/tsdb-sync/.check: permission denied"
ts=2023-06-08T16:45:11.80437275Z caller=mimir.go:811 level=error msg="module failed" module=sanity-check err="invalid service state: Failed, expected: Running, failure: store-gateway: failed to access directory /data/tsdb-sync: open /data/tsdb-sync/.check: permission denied"
ts=2023-06-08T16:45:11.804446985Z caller=mimir.go:811 level=error msg="module failed" module=server err="failed to start server, because it depends on module sanity-check, which has failed: invalid service state: Failed, expected: Running, failure: invalid service state: Failed, expected: Running, failure: store-gateway: failed to access directory /data/tsdb-sync: open /data/tsdb-sync/.check: permission denied"
ts=2023-06-08T16:45:11.804468522Z caller=mimir.go:811 level=error msg="module failed" module=memberlist-kv err="failed to start memberlist-kv, because it depends on module server, which has failed: context canceled"
ts=2023-06-08T16:45:11.804476772Z caller=mimir.go:811 level=error msg="module failed" module=store-gateway err="failed to start store-gateway, because it depends on module memberlist-kv, which has failed: context canceled"
ts=2023-06-08T16:45:11.805114344Z caller=seed.go:127 level=warn msg="failed to read cluster seed file from object storage" err="Get \"https://fakebucket-monitoring-mimir-blocks.s3.dualstack.us-east-2.amazonaws.com/__mimir_cluster/mimir_cluster_seed.json\": context canceled"
ts=2023-06-08T16:45:11.806005201Z caller=mimir.go:811 level=error msg="module failed" module=runtime-config err="failed to start runtime-config, because it depends on module activity-tracker, which has failed: context canceled"
ts=2023-06-08T16:45:11.806302978Z caller=module_service.go:114 level=info msg="module stopped" module=usage-stats
ts=2023-06-08T16:45:11.810136287Z caller=module_service.go:114 level=info msg="module stopped" module=activity-tracker
ts=2023-06-08T16:45:11.810499628Z caller=mimir.go:800 level=info msg="Application stopped"
ts=2023-06-08T16:45:11.810603528Z caller=log.go:65 level=error msg="error running application" err="failed services\ngithub.com/grafana/mimir/pkg/mimir.(*Mimir).Run\n\t/__w/mimir/mimir/pkg/mimir/mimir.go:854\nmain.main\n\t/__w/mimir/mimir/cmd/mimir/main.go:215\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598"

Ingester logs:

ts=2023-06-08T16:45:31.331732343Z caller=main.go:213 level=info msg="Starting application" version="(version=2.8.0, branch=HEAD, revision=f917e08)"
ts=2023-06-08T16:45:31.336758899Z caller=server.go:322 level=info http=[::]:8080 grpc=[::]:9095 msg="server listening on addresses"
ts=2023-06-08T16:45:31.345975257Z caller=ingester.go:342 level=info msg="TSDB idle compaction timeout set" timeout=1h5m16.008712587s
ts=2023-06-08T16:45:31.346549859Z caller=memberlist_client.go:437 level=info msg="Using memberlist cluster label and node name" cluster_label= node=mimir-ingester-2-8ea80fd5
ts=2023-06-08T16:45:31.348065171Z caller=module_service.go:82 level=info msg=initialising module=activity-tracker
ts=2023-06-08T16:45:31.348116706Z caller=module_service.go:82 level=info msg=initialising module=usage-stats
ts=2023-06-08T16:45:31.348232678Z caller=module_service.go:82 level=info msg=initialising module=active-groups-cleanup-service
ts=2023-06-08T16:45:31.348082244Z caller=module_service.go:82 level=info msg=initialising module=sanity-check
ts=2023-06-08T16:45:31.348569611Z caller=sanity_check.go:32 level=info msg="Checking directories read/write access"
ts=2023-06-08T16:45:31.352681349Z caller=sanity_check.go:34 level=error msg="Unable to access directory" err="ingester: failed to access directory /data/tsdb: open /data/tsdb/.check: permission denied"
ts=2023-06-08T16:45:31.35293587Z caller=mimir.go:811 level=error msg="module failed" module=sanity-check err="invalid service state: Failed, expected: Running, failure: ingester: failed to access directory /data/tsdb: open /data/tsdb/.check: permission denied"
ts=2023-06-08T16:45:31.352943111Z caller=module_service.go:114 level=info msg="module stopped" module=active-groups-cleanup-service
ts=2023-06-08T16:45:31.352970388Z caller=mimir.go:811 level=error msg="module failed" module=server err="failed to start server, because it depends on module sanity-check, which has failed: invalid service state: Failed, expected: Running, failure: invalid service state: Failed, expected: Running, failure: ingester: failed to access directory /data/tsdb: open /data/tsdb/.check: permission denied"
ts=2023-06-08T16:45:31.352999674Z caller=mimir.go:811 level=error msg="module failed" module=memberlist-kv err="failed to start memberlist-kv, because it depends on module server, which has failed: context canceled"
ts=2023-06-08T16:45:31.353017891Z caller=mimir.go:811 level=error msg="module failed" module=ingester-service err="failed to start ingester-service, because it depends on module memberlist-kv, which has failed: context canceled"
ts=2023-06-08T16:45:31.353030682Z caller=mimir.go:811 level=error msg="module failed" module=runtime-config err="failed to start runtime-config, because it depends on module sanity-check, which has failed: invalid service state: Failed, expected: Running, failure: invalid service state: Failed, expected: Running, failure: ingester: failed to access directory /data/tsdb: open /data/tsdb/.check: permission denied"
ts=2023-06-08T16:45:31.353194075Z caller=seed.go:127 level=warn msg="failed to read cluster seed file from object storage" err="Get \"https://fakebucket-monitoring-mimir-blocks.s3.dualstack.us-east-2.amazonaws.com/__mimir_cluster/mimir_cluster_seed.json\": context canceled"
ts=2023-06-08T16:45:31.353255468Z caller=module_service.go:114 level=info msg="module stopped" module=usage-stats
ts=2023-06-08T16:45:31.358329742Z caller=memberlist_client.go:543 level=info msg="memberlist fast-join starting" nodes_found=10 to_join=8
ts=2023-06-08T16:45:31.359247666Z caller=module_service.go:114 level=info msg="module stopped" module=activity-tracker
ts=2023-06-08T16:45:31.359294878Z caller=mimir.go:800 level=info msg="Application stopped"
ts=2023-06-08T16:45:31.359388592Z caller=log.go:65 level=error msg="error running application" err="failed services\ngithub.com/grafana/mimir/pkg/mimir.(*Mimir).Run\n\t/__w/mimir/mimir/pkg/mimir/mimir.go:854\nmain.main\n\t/__w/mimir/mimir/cmd/mimir/main.go:215\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598"

The only fix I found was to delete the PVCs backing the pods and let the StatefulSets recreate them. I'm still not sure what actually happened.
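
For reference, the cleanup was roughly this per affected pod (the PVC and pod names here are examples; the real names depend on the release name and whether zone-aware replication is enabled):

# example for one failing ingester replica; repeated for each affected ingester/store-gateway pod
kubectl delete pvc storage-mimir-ingester-2 -n monitoring
kubectl delete pod mimir-ingester-2 -n monitoring
# the StatefulSet controller recreates the pod and provisions a fresh PVC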