Any tips to elegantly overcome the Postgres/MySQL 1,000,000-row max data limit?

Imagine a user suddenly picks a wide time range like 2 years -> error: 1,000,000 max rows returned.

Is there some magic interval setting that can be used with SQL databases, so that if a query returns too much data it would automatically aggregate the result down to under 1 million rows?

Hi,

There’s no such feature available in Grafana. If you would find this feature really helpful, I would suggest creating a feature request issue.

Marcus

OK, let’s do that. But first, let me chew on this at the SQL level. Ignoring/dropping/grouping timestamps in SQL (=> fewer timestamp-related rows) is a bit challenging.
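For what it’s worth, here is a minimal sketch of grouping timestamps in plain Postgres with date_trunc(); the table and column names (sensor_data, ts, value) and the time range are made up, so adjust them to your schema:

```sql
-- Collapse raw rows into one averaged row per hour,
-- so a 2-year range returns ~17,500 rows instead of millions.
-- (sensor_data, ts, value are hypothetical names.)
SELECT
  date_trunc('hour', ts) AS time,
  avg(value)             AS avg_value
FROM sensor_data
WHERE ts BETWEEN '2020-01-01' AND '2022-01-01'
GROUP BY 1
ORDER BY 1;
```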

Anyway, for huge result sets I would generally like to see some user-optional setting that interpolates/averages the incoming values at the data source (low) level. You are obviously already doing some aggregating/interpolating/averaging inside the Grafana UI -> 1 million data points cannot be displayed on a 1920x1080 screen.
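Something close to that can already be approximated in the query itself with the Grafana Postgres data source macros; a sketch, assuming a hypothetical table called metrics (the fixed '1h' bucket is just an example and could be sized to match the panel interval):

```sql
-- Grafana Postgres data source sketch (hypothetical table "metrics"):
-- $__timeFilter restricts rows to the dashboard's time range and
-- $__timeGroup buckets them, so the panel never receives millions of raw rows.
SELECT
  $__timeGroup(ts, '1h') AS time,
  avg(value)             AS value
FROM metrics
WHERE $__timeFilter(ts)
GROUP BY 1
ORDER BY 1;
```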

I was just hoping some SQL gurus would wake up to this topic…

Have you considered using a time-series database instead (such as Influx or QuestDB)?
https://questdb.io/ has a big advantage in that it is Postgres-compatible, so it works with Grafana through the existing Postgres plugin.
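As a sketch of how downsampling looks there, QuestDB has a SAMPLE BY clause for exactly this kind of aggregation; the table and column names (trades, ts, price) are hypothetical, and ts would need to be the table’s designated timestamp:

```sql
-- QuestDB sketch: return one averaged row per hour instead of every raw row.
SELECT ts, avg(price) AS avg_price
FROM trades
SAMPLE BY 1h;
```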


This problem is also easily solved by using TimescaleDB’s time_bucket() hyperfunction, which allows you to aggregate metrics over an arbitrary interval.
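A minimal sketch, assuming a hypothetical hypertable called conditions with time and temperature columns:

```sql
-- TimescaleDB sketch: time_bucket() groups rows into fixed-size intervals,
-- here 15 minutes, keeping the result well under the row limit.
SELECT
  time_bucket('15 minutes', time) AS bucket,
  avg(temperature)                AS avg_temp
FROM conditions
GROUP BY bucket
ORDER BY bucket;
```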
