Is there a setting that avoids the intermittent "No data" problem

I have a Grafana server and Influxdb running on a Raspberry Pi and the data is displayed on a Google Chrome browser running on a Windows machine. Generally it works fine, except that every few minutes one or more of the graphs fail to update. Instead, the existing graph line is erased and replaced by the message “No data”. If I do a manual refresh then the data is displayed correctly.

Is this caused by a timeout somewhere? If so, is there a setting that I can change to increase the value?

Hello :wave: and welcome to the forum, @mjb

I think the ideal thing would be to check the Grafana server logs and see what is happening when those panels break.

Also, people with similar problems have had some success adjusting a couple of timeout-related config options.

You can check the config for other timeout-related options.
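For example, the data-proxy settings in grafana.ini are a reasonable starting point. The section and key names below exist in Grafana's default config, but the values are only illustrative, so treat this as a sketch rather than a recommendation for your setup:

[dataproxy]
# log data-proxy traffic, useful for seeing which query is slow (default: false)
logging = true
# how many seconds Grafana waits for the data source before giving up (default: 30)
timeout = 60

Grafana needs a restart (e.g. of the grafana-server service) to pick up config changes.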

What sort of keywords should I look for in the Grafana log?

What if it’s a client-side issue with the JavaScript code running in the browser window? Is there a log for that?

I’m not sure if I completely understand, but have you inspected your browser’s developer console while experiencing the issue? I’d check the console or the Network tab and look for errors.

How do I identify an error?
All I see is network requests and responses.

I am having the same issue. Very, very regularly, one or more of my panels change from showing data to saying “No data”. Manually refreshing then fixes that until it happens during one of the next refreshes.

I did check the log and found something:

t=2022-05-25T14:10:29+0200 lvl=eror msg="Request error" logger=context userId=1 orgId=1 uname=admin error="net/http: abort Handler"
t=2022-05-25T14:13:30+0200 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/1/query remote_addr=10.0.20.10 referer="http://10.0.50.10:3000/d/aM4cL5bWk/openwb?orgId=1&from=now%2Fd&to=now" error="httputil: ReverseProxy read error during body copy: read tcp 127.0.0.1:33130->127.0.0.1:8086: use of closed network connection"
t=2022-05-25T14:13:30+0200 lvl=eror msg="Request error" logger=context userId=1 orgId=1 uname=admin error="net/http: abort Handler"
t=2022-05-25T14:14:30+0200 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/1/query remote_addr=10.0.20.10 referer="http://10.0.50.10:3000/d/aM4cL5bWk/openwb?orgId=1&from=now%2Fd&to=now" error="httputil: ReverseProxy read error during body copy: read tcp 127.0.0.1:60156->127.0.0.1:8086: use of closed network connection"
t=2022-05-25T14:14:30+0200 lvl=eror msg="Request error" logger=context userId=1 orgId=1 uname=admin error="net/http: abort Handler"
t=2022-05-25T14:14:31+0200 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/1/query remote_addr=10.0.20.10 referer="http://10.0.50.10:3000/d/aM4cL5bWk/openwb?orgId=1&from=now%2Fd&to=now" error="httputil: ReverseProxy read error during body copy: read tcp 127.0.0.1:58948->127.0.0.1:8086: use of closed network connection"
t=2022-05-25T14:14:31+0200 lvl=eror msg="Request error" logger=context userId=1 orgId=1 uname=admin error="net/http: abort Handler"
t=2022-05-25T14:15:31+0200 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/1/query remote_addr=10.0.20.10 referer="http://10.0.50.10:3000/d/aM4cL5bWk/openwb?orgId=1&from=now%2Fd&to=now" error="httputil: ReverseProxy read error during body copy: read tcp 127.0.0.1:34296->127.0.0.1:8086: use of closed network connection"
t=2022-05-25T14:15:31+0200 lvl=eror msg="Request error" logger=context userId=1 orgId=1 uname=admin error="net/http: abort Handler"

Not sure what’s happening there. I use InfluxDB as the data backend.
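One thing I might try next, to rule out slow queries on the InfluxDB side, is to time the same query directly against the InfluxDB 1.x HTTP API and see whether it ever runs past Grafana's 30-second data-proxy timeout. Rough sketch below (needs the requests package); the database name and query are placeholders for whatever the panel actually uses:

import time
import requests

INFLUX_URL = "http://127.0.0.1:8086/query"   # InfluxDB on the same Pi
PARAMS = {
    "db": "openwb",  # placeholder database name
    "q": "SELECT mean(value) FROM power WHERE time > now() - 1d GROUP BY time(1m)",  # placeholder query
}

for i in range(20):
    start = time.monotonic()
    try:
        resp = requests.get(INFLUX_URL, params=PARAMS, timeout=60)
        print(f"run {i}: HTTP {resp.status_code} in {time.monotonic() - start:.1f}s")
    except requests.RequestException as exc:
        print(f"run {i}: failed after {time.monotonic() - start:.1f}s: {exc}")
    time.sleep(5)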

Is Grafana behind a proxy, like nginx?

No, it’s local, the same Raspberry Pi.

I had a problem like this appear after switching to a new SD card. The ‘upgrade’ was prompted by the occasional timeouts of the graphs on the web page.
The card I upgraded to had awful random-read performance, which caused many graphs to break and recover intermittently, often timing out on my Chrome display page.
The attempted upgrade backfired because I switched from the stock card to one that was highly reviewed for video recording (sequential write rather than random read). It turns out the graphs need good random-read performance in order to sample data for display.
I solved the issue by using a card with better random-read performance than my Pi’s stock card (a SanDisk, I think).

This is not a product endorsement, but the Samsung PRO Endurance and SanDisk Extreme PRO have given me trouble-free graph performance on a marquee display with a dozen graphs running on it. Testing both showed higher random-read performance, so I kept them in place.
Both made the intermittent graph errors disappear, and the graphs refresh quickly now.
I name these two card models because few reviewers rate cards by their random-read performance. Manufacturers always advertise write performance, and most cards will fit the bill there in a Pi; it’s the read speed that will give you hang-ups in Grafana and DB queries.
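If anyone wants to compare cards the same way, this is roughly the kind of crude random-read check I mean (a sketch with a placeholder file path; fio is the proper benchmarking tool if you have it). Create a large test file first, ideally bigger than the Pi’s RAM so the page cache doesn’t skew the numbers:

import os
import random
import time

TEST_FILE = "/home/pi/readtest.bin"  # placeholder: a large file on the card under test
BLOCK = 4096                          # 4 KiB, roughly the size of small DB reads
DURATION = 10                         # seconds to run

size = os.path.getsize(TEST_FILE)
reads = 0
with open(TEST_FILE, "rb", buffering=0) as f:
    end = time.monotonic() + DURATION
    while time.monotonic() < end:
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
        reads += 1

print(f"~{reads / DURATION:.0f} random 4 KiB reads per second")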