Failed to flush user: "file too large" on local setup

Using Loki version 2.4.1, local setup on SUSE, and the Loki storage is located on NFS.

We had several days of scheduled downtime. When I tried to bring Loki back up after the restart, I got multiple errors like this one:
caller=flush.go:221 org_id=fake msg="failed to flush user" err="open /nfs/<root path>/loki/prod/storage/chunks/ZmFrZS81OTk1ODIwYzFkMjc5YTFjOjE4ZTlhMGY1N2ZhOjE4ZTlhNGJlYjAxOjNlYmYwY2U=: file too large"
The chunks folder is huge, and when I ls it, it hangs:
drwxr-xr-x 3 <usr> <grp> 16K Apr 4 14:28 boltdb-shipper-active
drwxr-xr-x 7 <usr> <grp> 4.0K Apr 4 12:51 boltdb-shipper-cache
drwxr-x--- 3 <usr> <grp> 320M Apr 1 18:33 chunks
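For reference, the "file too large" text appears to be the OS-level EFBIG error, which usually means a file-size limit was hit rather than the disk filling up. A rough sketch of checks for that (the chunk name is the one from the log above; "loki" as the process name and running under systemd are assumptions on my side):

ls -lh /nfs/<root path>/loki/prod/storage/chunks/ZmFrZS81OTk1ODIwYzFkMjc5YTFjOjE4ZTlhMGY1N2ZhOjE4ZTlhNGJlYjAxOjNlYmYwY2U=   # actual size of the chunk that fails to flush
ulimit -f                                                  # file-size limit of the current shell/user ("unlimited" rules out RLIMIT_FSIZE there)
cat /proc/$(pgrep -x loki)/limits | grep "Max file size"   # limit of the running Loki process itself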

I looked at related posts and tried playing with max_global_streams_per_user (set to 10000), and also tried increasing the number of nodes on the NFS, but the error continues.

Relevant data from my Loki config:
common:
  path_prefix: /nfs/<path>/loki/prod/storage
  storage:
    filesystem:
      chunks_directory: /nfs/<path>/loki/prod/storage/chunks
      rules_directory: /nfs/<path>/loki/prod/storage/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

limits_config:
  ingestion_rate_mb: 50
  ingestion_burst_size_mb: 100
  max_global_streams_per_user: 10000

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
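In case it helps, one way to confirm the limits section is actually being picked up is to dump the running configuration over Loki's HTTP API (assuming the default HTTP port 3100; adjust if yours differs):

curl -s http://127.0.0.1:3100/config | grep max_global_streams_per_user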

Any ideas how to progress?

This feels more like a problem with your NFS than with Loki.

How many chunk files do you have?

around 12k

ls chunks/ | wc -l
11837

I think it's more likely to be something to do with your NFS. I've not tried to run Loki on NFS, so I could be wrong, but I'd double-check what errors you get from the NFS side, something along the lines of the checks below.
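Not authoritative, but this is the kind of NFS-side check I mean (the /nfs mount point is taken from the path in your log, adjust as needed):

mount | grep nfs          # mount options: vers, rsize/wsize, hard vs soft
dmesg | grep -i nfs       # kernel messages: timeouts, "server not responding", I/O errors
nfsstat -c                # client RPC stats; lots of retransmissions point at the network/server
df -h /nfs                # free space on the mount
df -i /nfs                # free inodes on the mount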