FFT Visualization w/ Time Series Graph - Possible to remove X-Axis gridlines?

Hello, I’m attempting to visualize an FFT with a graph panel set to time series. It works, but there are a couple of problems:

1. The x-axis gridlines are so numerous and close together that they cloud the entire graph.
2. Querying/plotting the data takes quite some time (tens of seconds).

The data is stored in InfluxDB in the following format:

point = {
    "measurement": "FrequencyDomain",
    "time": <current time stamp>,
    "tags": {
        "SecondOfDay": <second of the day>,  # I might not need this
        "DAQchannel": <channel number>
    },
    "fields": {
        <freq>: <mag>,    # each key is unique and represents a frequency bin; orig data is float
        <freq>: <mag>,
        ...
    }
}
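
For reference, a point in that shape gets written from Python roughly like this (a minimal sketch assuming the influxdb-client library; the URL, token, org, and the write_one_fft helper are just placeholders, not part of my actual setup):

from datetime import datetime, timezone
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

def write_one_fft(freqs, mags, channel="Ch0"):
    now = datetime.now(timezone.utc)
    point = {
        "measurement": "FrequencyDomain",
        "time": now,
        "tags": {
            "SecondOfDay": str(now.hour * 3600 + now.minute * 60 + now.second),
            "DAQchannel": channel,
        },
        # One field per frequency bin; field keys must be strings.
        "fields": {str(f): float(m) for f, m in zip(freqs, mags)},
    }
    write_api.write(bucket="cm", record=point)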

The query I’m using is:

toFloat = (tables=<-) =>
  tables
    |> map(fn:(r) => ({ r with _field: float(v: r._field)}))

from(bucket: "cm")
  |> range(start: 2021-05-01T15:15:59Z, stop: 2021-05-08T15:15:59Z)
  |> filter(fn: (r) => r["_measurement"] == "FrequencyDomain")
  |> filter(fn: (r) => r["Ch"] == "Ch0")
  |> filter(fn: (r) => r["SecondOfDay"] == "68880")
  |> aggregateWindow(every: 500ms, fn: last, createEmpty: false)
  |> toFloat()

Here’s a screenshot of the visualization:

For this particular visualization there are 16,741 bins; I can reduce this to 2,500, but the x-axis is still clouded (query time does improve, I believe). Mainly, though, I’m wondering if there is any way to disable the x-axis gridlines so that the entire plot does not appear greyed out. I looked at Plotly, but it appears I would have to set each bin manually in the add-on UI. Does anybody have any ideas for removing the x-axis gridlines? And is there a data structure that would perform better for queries? Thank you very much!

Hi @mchristisen, thanks for sharing your use case. Using Grafana to plot FFTs is quite an interesting approach - and kind of ironic in a sense, given that what you’re plotting is essentially the inverse of time-series data :slight_smile:.

Which is also part of the issue here: in your setup, you’re in effect plotting tens of thousands of separate data series with one point each, rather than a single series with many points. This isn’t what Grafana normally expects to “see”, and in particular it isn’t a great match for the standard graph panel; indeed I would have thought that Plotly might be a better fit, but it sounds like you already checked that out. To be honest, it’s also not a great use case for Influx, which expects time to be the main index and works best when cardinality (the number of distinct field + tag combinations) is relatively modest.

Anyway, you probably know all of this already, and have made your choices, but I thought I’d put it out there. FWIW, I don’t actually have a much better suggestion as to what the data structure should be - other than to maybe switch to a normal tabular/SQL data store that doesn’t expect time to be the primary index.

To get the results you’re after with this setup, I’d recommend the following:

  • To turn off the x-axis marks, check the “Axes” section of the “Panel” tab in edit mode:

    That should take care of the “shading”. (Switching the x-axis to “Series” mode rather than “Time”, with Value = Current, may also help with performance.)
  • For performance and usability, reduce the number of frequency bins as much as possible - and in any case to no more than the horizontal screen resolution you’re using (see the sketch after this list). [BTW, Grafana optimizes this automatically for time-series data, but in this case it’ll need to be done manually.]
  • The time it takes to plot may be due both to slow querying on the Influx side and to slow visualization by Grafana. I don’t have any silver bullets, but if you’re looking to optimize further, check out the “Stats” tab in the Query Inspector - that will at least tell you how long your Influx query took; the rest would be in-browser visualization time.
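
To illustrate the bin reduction, here’s a rough sketch of what I mean, done in plain numpy before the data ever reaches the database (reduce_bins and the 2,500-bin target are just placeholders for whatever you choose, not anything Grafana provides):

import numpy as np

def reduce_bins(freqs, mags, target_bins=2500):
    # Average groups of adjacent FFT bins so at most ~target_bins points get plotted.
    freqs = np.asarray(freqs, dtype=float)
    mags = np.asarray(mags, dtype=float)
    factor = max(1, len(mags) // target_bins)   # original bins per output bin
    n = (len(mags) // factor) * factor          # drop the remainder so the reshape is clean
    reduced_freqs = freqs[:n].reshape(-1, factor).mean(axis=1)
    reduced_mags = mags[:n].reshape(-1, factor).mean(axis=1)
    return reduced_freqs, reduced_mags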

Hi svetb, thank you very much for your response. I am very much new to Grafana/Influx and tried to put some thought into how best to store and display this data. It is totally not time series : ) but I thought it was worth a shot. I figured the data would have to be plotted with Plotly, but to my surprise, selecting time series for the x-axis displayed an FFT after sorting by bin. And I swear, until you suggested it, every time I turned the x-axis “Show” slider off, the entire x data disappeared. But that did the trick! Thank you : )

For the query stats, the total request time is ~20 seconds with a data processing time of 14 ms. I’m not sure whether the data processing time is for the Grafana plot or the Influx query. I need to dig into the docs further; a quick web search didn’t give me an answer.

Plotly probably has the answer, I’m just not sure how to use it yet or how best to pipe data into it. Manually setting up thousands of traces isn’t the answer though : ) I’m sure it’s user error…


With the x-axis gridlines removed the FFT looks better.
[screenshot: fft_no_x_gridlines]

However, the long query time is still an issue. I took your advice, svetb, and moved this FFT data to MySQL with a simple two-column table structure containing the needed rows (I’m still at 16,741) so I could test query speed. The table structure is:

+--------+----------+
|  bin   |   mag    |
+--------+----------+
|   x    |  value   |
|  ...   |   ...    |
+--------+----------+

This worked very well. Total query time was ~500 ms for the 16,741 rows, plus graphing overhead. With this data structure the Plotly plugin worked perfectly and I only had to set up one trace.
[screenshot: fft_plotly]
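
In case it’s useful to anyone else, the writer side of that test looks roughly like this (a minimal sketch using mysql-connector-python; the table/column names fft_snapshot, bin, mag and the connection details are just placeholders):

import mysql.connector

def write_fft_snapshot(freqs, mags):
    # One (bin, mag) row per frequency bin, written in a single batch.
    conn = mysql.connector.connect(host="localhost", user="grafana",
                                   password="change-me", database="cm")
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fft_snapshot (
            bin DOUBLE NOT NULL,   -- frequency of the bin
            mag DOUBLE NOT NULL    -- magnitude at that frequency
        )
    """)
    cur.executemany("INSERT INTO fft_snapshot (bin, mag) VALUES (%s, %s)",
                    list(zip(freqs, mags)))
    conn.commit()
    conn.close()

The Grafana/Plotly panel then just runs something like SELECT bin, mag FROM fft_snapshot ORDER BY bin.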

But this is not how I would like to store my data. I would prefer something like this:

+-------+----------+-------+-------+-------+-----+----------+
| index | datetime | bin 1 | bin 2 | bin 3 | ... | bin 2500 |
+-------+----------+-------+-------+-------+-----+----------+
|       |          |       |       |       |     |          |
+-------+----------+-------+-------+-------+-----+----------+

or possibly:

+-------+----------+----------------------------------------+
| index | datetime | json_fft_data                          |
+-------+----------+----------------------------------------+
|       |          | {'bin1':'value', 'bin2':'value', ... } |
+-------+----------+----------------------------------------+

This leads to an issue in the Plotly plugin, where each column would need to be added as a trace individually.

I’m wondering now if, moving forward with MySQL, I could insert rows of bin data in my preferred format at my normal interval and just keep a pivot table with the simple, convenient two-column structure for Grafana (a rough un-pivoting sketch is below). Manipulating data into a pivot table will be a new experience for me (much like the rest of this).
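
For what it’s worth, the un-pivoting step itself looks manageable with pandas (a sketch with made-up column names and values; melt() turns one wide row into the tall bin/mag shape the Plotly panel liked):

import pandas as pd

# One wide row, as in the preferred format above (values are made up).
wide = pd.DataFrame([{"index": 1, "datetime": "2021-05-08 19:08:00",
                      "bin 1": 0.01, "bin 2": 0.02, "bin 3": 0.05}])

# Un-pivot: keep index/datetime as identifiers, emit one (bin, mag) row per column.
tall = wide.melt(id_vars=["index", "datetime"], var_name="bin", value_name="mag")
print(tall)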


Hi @mchristisen, it’s great that you got it working this way.

On to your final question, may I ask why your preferred data format is specifically what you describe? While that sort of multi-column table is of course very common for manipulating data in Excel for instance, it’s quite unusual for data in a (SQL) database to be stored like that. From what you’ve shared, I think the most appropriate database table schema would contain the following 4 fields:

  • index
  • datetime
  • bin number
  • magnitude

I don’t know how you’re inserting the data into the database, but would a format like that potentially be feasible for you?

If that’s not an option, then indeed, you’ll probably need to do some (un-)pivoting of your data in order to get it into an easily plottable format.
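
To make that concrete, here’s a rough sketch of the kind of schema I have in mind (mysql-connector-python again; the table name fft and the column names are just placeholders, and the commented query is only an example of what a Grafana panel might run):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="grafana",
                               password="change-me", database="cm")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS fft (
        id       BIGINT AUTO_INCREMENT PRIMARY KEY,  -- the "index"
        datetime DATETIME NOT NULL,                  -- when the FFT was captured
        bin      DOUBLE   NOT NULL,                  -- frequency bin
        mag      DOUBLE   NOT NULL                   -- magnitude
    )
""")
conn.commit()
conn.close()

# A Grafana panel would then query something like:
#   SELECT bin, mag FROM fft WHERE datetime = '<some_minute>' ORDER BY bin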

Hmm, I guess that format is purely based on some assumptions about DB query performance. My current plan is to condense the FFT data down to 2,500 bins, insert this data at 1-minute intervals, and store it for ~3 years. With the bins stored as columns (one row per FFT), it looks like there will be 525,600 rows of FFT data per year.

1 row/min * 60 min/hour * 24 hours/day * 365 days/year = 525,600 rows/year

With each bin stored as its own row, it looks like there will be 1,314,000,000 rows of FFT data per year.

2,500 rows/min * 60 min/hour * 24 hours/day * 365 days/year = 1,314,000,000 rows/year

My assumption was that a query like SELECT * WHERE datetime = <some_minute> would be faster with the individual bin data stored in columns as opposed to rows. Do you think that would not be the case? It would certainly make visualizing this data in Grafana easier for an ignoramus like myself if it made no difference : )


Nah… having that number of rows should be fine - as long as you have the storage capacity for that amount of data, of course. See e.g. Maximum number of records in a MySQL database table - Stack Overflow. In fact, in MySQL it’s the number of columns that is limited: https://dev.mysql.com/doc/mysql-reslimits-excerpt/5.7/en/column-count-limit.html. Just make sure you have everything properly indexed (a unique index for each FFT, and for each bin) - see the sketch below.
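
One way to read that indexing advice, assuming the tall fft table sketched earlier in the thread (the index name and connection details are placeholders):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="grafana",
                               password="change-me", database="cm")
cur = conn.cursor()

# A composite index on (datetime, bin) lets a query like
#   SELECT bin, mag FROM fft WHERE datetime = '<some_minute>' ORDER BY bin
# be served from the index instead of scanning every row. Make it UNIQUE if
# each (datetime, bin) pair should only ever appear once.
cur.execute("CREATE INDEX idx_fft_datetime_bin ON fft (datetime, bin)")
conn.commit()
conn.close()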

As for whether the wide (column-per-bin) format would query faster: not in my experience. If anything, I suspect it would be slower.