Each of our three Loki ingester replicas (deployed with the loki helm chart, version 6.53.0) is stuck repeating the same loop in its logs every minute:
level=info ts=2026-03-10T00:48:24.304438644Z caller=table_manager.go:136 index-store=tsdb-2025-03-04 msg="uploading tables"
level=info ts=2026-03-10T00:48:24.304492774Z caller=index_set.go:86 msg="uploading table loki_index_20521"
level=info ts=2026-03-10T00:48:24.304500563Z caller=index_set.go:107 msg="finished uploading table loki_index_20521"
level=info ts=2026-03-10T00:48:24.304506234Z caller=index_set.go:186 msg="cleaning up unwanted indexes from table loki_index_20521"
level=info ts=2026-03-10T00:49:24.305364155Z caller=table_manager.go:136 index-store=tsdb-2025-03-04 msg="uploading tables"
level=info ts=2026-03-10T00:49:24.305459955Z caller=index_set.go:86 msg="uploading table loki_index_20521"
level=info ts=2026-03-10T00:49:24.305468395Z caller=index_set.go:107 msg="finished uploading table loki_index_20521"
level=info ts=2026-03-10T00:49:24.305475215Z caller=index_set.go:186 msg="cleaning up unwanted indexes from table loki_index_20521"
When we hit /ready it returns a 503 with:
Some services are not Running:
Starting: 1
Running: 6
/services returns:
runtime-config => Running
server => Running
memberlist-kv => Running
ring => Running
store => Running
analytics => Running
ingester => Starting
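For anyone who wants to poke at the same state, these are the checks we've been running. The pod name (loki-ingester-0), namespace, and HTTP port 3100 are assumptions from the default helm chart values, so adjust for your install:

```shell
# Names, namespace, and port below are assumptions from default helm chart values.
# Forward the ingester's HTTP port and query its diagnostic endpoints:
kubectl -n loki port-forward pod/loki-ingester-0 3100:3100 &
sleep 2

curl -s http://localhost:3100/ready      # 503 with "Some services are not Running" in our case
curl -s http://localhost:3100/services   # per-module state; shows ingester => Starting for us
curl -s http://localhost:3100/ring       # ring members; stale/unhealthy entries can block readiness
```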
Not sure how to get the ingester to mark itself as ready. Has anyone run into this before, or have any tips we could try?