How to display a dashboard graph panel query that scales correctly

Hi guys,

Just looking for some friendly advice on displaying historic graphs with Grafana, InfluxDB and Telegraf.

I have a dashboard that shows combined network usage across 50 Servers in MB/s.

The query is:

SELECT sum("Bytes_Received_persec") AS "MB's Received", sum("Bytes_Sent_persec") AS "MB's Sent" FROM "win_net" WHERE ("host" =~ /^BuildFarmNodes/ AND "instance" =~ /^network/) AND $timeFilter GROUP BY time(10s) fill(linear)

This graph works great: it shows the peaks and troughs and gives a good snapshot of how busy the servers were over a 5 minute period. The same query works fine and scales up to about 24 hrs, but now I'm adding a row with 7 day, 30 day and 12 month graphs. I need to tweak the query so it isn't trying to use 10s metric intervals for 50 servers over a year :slight_smile:

When I change the interval to 1m, 5m or 1hr, the results aren't giving me the information I expect. I'm assuming that's because I'm using the "Bytes_Per_sec" value and then asking it to sum over 1m/5m/1hr etc., which produces ridiculous throughput figures.

How do I scale this so that I can display a graph that simply shows the "bytes_per_sec" value at 10s resolution, every hour say, for 30 days?

I was thinking of using the mean over, say, 5 minute intervals, but then realized that within a 5 minute interval the network might do 4GB/s for 10 seconds and then 1MB/s for the remaining 4m 50s :slight_smile: So the averaged data point isn't representative anymore… help!

Use the $__interval variable; it will calculate the aggregation period automatically.

You can also define a new dashboard variable of type Interval, where you can customize how the aggregation period is calculated.
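Applied to the query from the original post, the suggestion above would look something like this (a sketch only; field and tag names are taken from that post):

```sql
-- Same query, but with Grafana's $__interval so the GROUP BY period
-- tracks the dashboard time range instead of being fixed at 10s
SELECT sum("Bytes_Received_persec") AS "MB's Received",
       sum("Bytes_Sent_persec")     AS "MB's Sent"
FROM "win_net"
WHERE ("host" =~ /^BuildFarmNodes/ AND "instance" =~ /^network/)
  AND $timeFilter
GROUP BY time($__interval) fill(linear)
```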

Thanks for the reply, but $__interval skews the graph horribly, as you can see from the pics below. Picture 1 uses a 10s interval and picture 2 uses $__interval. Both panels have the time range "Override relative time: Last" set to 7 days.

The peak network usage on pic 1 is 874MB/s
The peak network usage on pic 2 is 2.5GB/s

It makes sense, because query 1 is summing values per 10s and query 2 is summing over the larger auto-calculated period (10m). Use the DERIVATIVE function to calculate the proper rate from the sum.
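For illustration, the suggestion above could be written like this (a sketch, untested; note that derivative functions are really meant for cumulative counters, so whether this gives sensible numbers depends on how the "Bytes_Received_persec" field is actually recorded):

```sql
-- Wrap the summed value in non_negative_derivative() to turn the
-- per-interval sum back into a per-10s rate
SELECT non_negative_derivative(sum("Bytes_Received_persec"), 10s) AS "MB's Received",
       non_negative_derivative(sum("Bytes_Sent_persec"), 10s)     AS "MB's Sent"
FROM "win_net"
WHERE ("host" =~ /^BuildFarmNodes/ AND "instance" =~ /^network/)
  AND $timeFilter
GROUP BY time($__interval) fill(linear)
```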

Interesting - Which one would you use?

SUM + DERIVATIVE alone seems to skew the info even worse…

Even using SUM + non_negative_derivative(10s) gives me very bizarre results. I've attached a picture of the exact same query as before with non_negative_derivative(10s) added.

The peak network usage on pic 1 is 874MB/s
The peak network usage on pic 2 is 2.5GB/s
The peak network usage on pic 3 is 82.3MB/s

Is there a way to make Grafana use "Average" instead of "Sum" when the time range changes?
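One way to get that effect in InfluxQL is a subquery: keep the original 10s sum as the inner query, then take the mean of those 10s totals over $__interval in the outer query. That way long ranges show the average 10s throughput rather than an ever-growing sum. This is only a sketch built from the query in the original post, not something tested against this data:

```sql
-- Inner query: total throughput across all hosts per 10s (as before)
-- Outer query: average those 10s totals over the auto-sized interval
SELECT mean("rx") AS "MB's Received", mean("tx") AS "MB's Sent"
FROM (
  SELECT sum("Bytes_Received_persec") AS "rx",
         sum("Bytes_Sent_persec")     AS "tx"
  FROM "win_net"
  WHERE ("host" =~ /^BuildFarmNodes/ AND "instance" =~ /^network/)
    AND $timeFilter
  GROUP BY time(10s)
)
WHERE $timeFilter
GROUP BY time($__interval) fill(none)
```

With this shape, zooming in to a short range degenerates gracefully: when $__interval is close to 10s, the mean of a single 10s bucket is just the original value.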