Bar graph problem: bars in dashboard are ok, then suddenly they become really thin

Hi,
I have some graphs that use bar plot mode, and I’m grouping data by hour:

from(bucket: "dns-resolvers-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
    r._measurement == "stats" and
    r._field =~ /num_(cachehits|cachemiss|prefetch|expired)/ and
    r.host == "nas" and
    r.service == "unbound"
  )
  |> derivative(
      unit: 1h,
      nonNegative: true,
      columns: ["_value"],
      timeColumn: "_time"
  )
  |> aggregateWindow(every: 1h, fn: mean)

I opened the dashboard with a 24-hour time window and everything was fine, with good-looking bars of a reasonable width. I then started working on other panels in the dashboard, and after a few refreshes all the bars became really thin (same height/value, but much, much thinner, almost a line). After a while they become slightly wider again.
The following image is the “slightly bigger” stage:

What could be the cause of this “problem”? Is there any way I can say “make each bar 30 minutes wide”, since I’m aggregating data in 1-hour periods?

Thank you in advance for any help


As you can see, the same 24-hour graph now shows prettier bars with a reasonable width.

I’m not sure if you ever found a resolution to this, but I had the same issue. I’m posting because it was quite a struggle to find similar posts, and your issue seemed exactly the same as mine but without a resolution. I’m not sure my solution is the recommended one, but it did change the behaviour to what my users expected.

I’m using Influx 2.1.1 and Grafana 8.2.4. I was using 1-minute windows with a 5-second refresh, which made the issue easier to visualize. My original query was:

from(bucket: "myBucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "jvm_memory_used_bytes" and r["area"] == "heap")
  |> aggregateWindow(every: 1m, fn: spread)

Bars would gradually grow to the expected width, then suddenly drop to being very thin, and then the process would repeat:

I noticed that when the time rolled just past the minute, Influx would produce a window covering a very narrow time frame. As that window grew, so did the width of the bars. Grafana seemed to render all the bars as thin as the shortest window.


I was able to eliminate the short most-recent window by using experimental.addDuration on the stop argument, extending the range to one minute beyond the latest full minute. However, the results then also included a tiny window for the minute beyond that. Adding “createEmpty: false” to the aggregation got rid of that window, since it contained no data.
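For reference, that intermediate step (stop extended and createEmpty: false added, but start not yet adjusted) looked roughly like this:

```flux
import "experimental"

from(bucket: "myBucket")
  |> range(start: v.timeRangeStart, stop: experimental.addDuration(d: 1m, to: v.timeRangeStop))
  |> filter(fn: (r) => r["_measurement"] == "jvm_memory_used_bytes" and r["area"] == "heap")
  |> aggregateWindow(every: 1m, fn: spread, createEmpty: false)
```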

The bars maintained a consistent width at that point, but the oldest bar in the graph would start to shrink as its data fell outside the lower bound of the query period, causing additional confusion for my users. This was alleviated by using experimental.subDuration on the start of the range. My final query ended up being:

import "experimental"

from(bucket: "myBucket")
  |> range(start: experimental.subDuration(d: 1m, from: v.timeRangeStart), stop: experimental.addDuration(d: 1m, to: v.timeRangeStop))
  |> filter(fn: (r) => r["_measurement"] == "jvm_memory_used_bytes" and r["area"] == "heap")
  |> aggregateWindow(every: 1m, fn: spread, createEmpty: false)

It should be noted that the subDuration portion worked when used from Grafana; however, when I attempted to use it in the Influx Data Explorer, it failed with “expected time but found duration (argument from)”. I’m not entirely sure why, but I have seen some mention here of v.timeRangeStart being changed to a timestamp, which seems to have happened around Oct 2021. I suspect this may work from the Data Explorer in more recent versions?
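If you hit that error, newer Flux releases provide the same helpers in the built-in date package (date.add and date.sub, with the same d/to/from arguments), so an equivalent query that may also work in the Data Explorer would be:

```flux
import "date"

from(bucket: "myBucket")
  |> range(start: date.sub(d: 1m, from: v.timeRangeStart), stop: date.add(d: 1m, to: v.timeRangeStop))
  |> filter(fn: (r) => r["_measurement"] == "jvm_memory_used_bytes" and r["area"] == "heap")
  |> aggregateWindow(every: 1m, fn: spread, createEmpty: false)
```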

I don’t think this is actually a Grafana issue. I realized that I was able to work around it by not using Flux’s aggregateWindow and instead using a combination of window() and whatever aggregate function you need.

@mightyslaytanic Try

  |> window(every: 1h)
  |> mean()
  |> group()
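Spelled out against the original query, that would look something like this. This is a sketch, untested against your data; the duplicate() step is my addition in case your panel needs an explicit _time column, since bare aggregate functions like mean() drop it:

```flux
from(bucket: "dns-resolvers-bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
    r._measurement == "stats" and
    r._field =~ /num_(cachehits|cachemiss|prefetch|expired)/ and
    r.host == "nas" and
    r.service == "unbound"
  )
  |> derivative(unit: 1h, nonNegative: true)
  |> window(every: 1h)                          // one table per 1-hour window
  |> mean()                                     // aggregate each window; drops _time
  |> duplicate(column: "_stop", as: "_time")    // restore a time column (assumption: the panel needs one)
  |> group()                                    // merge the per-window tables back together
```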

And for @seacuke23

from(bucket: "myBucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "jvm_memory_used_bytes" and r["area"] == "heap")
  |> aggregateWindow(every: 1m, fn: spread)

Try

from(bucket: "myBucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "jvm_memory_used_bytes" and r["area"] == "heap")
  |> window(every: 1m)
  |> spread()
  |> group()
