Issues with Grafana (Helm) on an on-premise NFS/NAS mount


I’m having a hard time with Grafana operations.
I am currently running Grafana in an on-premise environment (Ubuntu 22.04 LTS).

I deployed it with the Helm chart and set the PersistentVolume access mode to ReadWriteOnce.
(I’m going to change that to ReadWriteMany, as that seems more appropriate.)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv
spec:
  storageClassName: grafana-sc
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /volume1/test
    server: <nfs-server-address>  # placeholder; the server address is omitted here
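The matching persistence settings on the Helm side would look roughly like this (a sketch; the key names follow the upstream grafana/grafana chart's values, and only storageClassName, access mode, and size are mirrored from the PV above):

```yaml
# Grafana Helm chart values.yaml excerpt (sketch; keys assumed from the
# upstream grafana/grafana chart -- storageClassName/size mirror the PV above)
persistence:
  enabled: true
  storageClassName: grafana-sc
  accessModes:
    - ReadWriteOnce
  size: 10G
```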

On the node where Grafana is deployed, a process that periodically accesses the NFS server ends up in the D (uninterruptible sleep) state, as shown below, and never leaves it.

Every 1.0s: ps -eo ppid,pid,user,stat,pcpu,comm,wchan:32 | grep " D" test-server2: Thu Jun 22 10:49:41 2023
      2 592533 root D 0.0 - -
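The watch loop above can be reduced to a one-shot check that also counts the stuck tasks (plain ps/awk; nothing here is Grafana-specific):

```shell
# List processes in uninterruptible sleep (state D) together with the kernel
# function they are blocked in (wchan), then print how many there are.
ps -eo ppid,pid,user,stat,pcpu,comm,wchan:32 \
  | awk 'NR > 1 && $4 ~ /^D/ { print; n++ } END { print "D-state total:", n + 0 }'
```

On a healthy node the total stays at 0; here it stays pinned at 1 or more for as long as the NFS mount is hung.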

The process is then left in the blocked queue (the b column in the vmstat output below), and programs that use the PV on that node can no longer be serviced.

 vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  3      0 124634128 261444 4514876    0    0     2     5   17   21  0  0 99  0  0
 0  3      0 124633872 261444 4514880    0    0     0     0 1288 2490  0  0 87 12  0
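One way to confirm that the blocked task is stuck inside the NFS client (a sketch using only ps and /proc; reading another user's wchan may require root) is to read its kernel wait channel directly; symbols like nfs_* or rpc_* there would point at the hung mount:

```shell
# Print the kernel wait channel for every D-state process; an nfs_* / rpc_*
# symbol here means the task is blocked inside the NFS client.
for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ { print $1 }'); do
  printf '%s: %s\n' "$pid" "$(cat "/proc/$pid/wchan" 2>/dev/null)"
done
```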

I checked which program was causing this and found that it happens whenever Grafana is up.

Is running Grafana on NFS not recommended?
For reference, my grafana.db (Grafana's embedded SQLite database) is about 11 MB.

Does anyone have any advice on this phenomenon?