Lost in graph because of data granularity

Greetings,

We have IoT-ish data coming from devices in the field; the data is captured at millisecond resolution.
When displaying this data in Grafana (for the POC/evaluation it is currently dumped into both SQL and InfluxDB), it is far too granular: it is easy to lose your bearings, and you have to zoom in and out a lot to find the specific area you want to focus on. Is there a way to manage this so that we can spot anomalies from a bird's-eye view and then zoom in? Maybe another way is to use filters via variables? Or perhaps default the dashboard to the current date with a time span of the last 15 minutes? Any other recommendations to help us tame this beast? Thanks!

I recommend that you give us some examples of:

a) the data you are collecting

b) the queries you are performing

c) what does this data mean (if that isn’t obvious from (a)) and how do you
characterise an “anomaly” (which you say you want to focus in on)?

Antony.

Thanks @pooh

a) the data is radio packets

_measurement	radiotype	_field	timestamp
packets	rt	allocated	1628763423986
packets	rt	allocated	1628590623986
packets	rt	allocated	1628504223986
packets	rt	allocated	1628417823986
packets	rt	allocated	1628331423986
packets	rt	allocated	1628245023986
packets	rt	allocated	1627640223986
packets	rt	allocated	1627553823986
packets	rt	allocated	1627467423986
packets	rt	allocated	1627381023986
packets	rt	allocated	1627294623986
packets	rt	allocated	1627208223986
packets	rt	allocated	1627121823986
packets	rt	allocated	1627035423986
packets	rt	allocated	1626949023986
packets	rt	allocated	1626862623986
packets	rt	allocated	1626689823986
packets	rt	allocated	1626603423986
packets	rt	allocated	1626517023986

b) the query

from(bucket:"packetlogs")
  |> range(start:-24h)
  |> filter(fn:(r) =>
    r._measurement == "packets" and
    r._field == "allocated"
  )
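A possible refinement, sketched under the assumption that each row is one packet (so `fn: count` yields packets per window) and that the field is named `allocated` as in the sample data: let Grafana downsample server-side with `aggregateWindow`, using the built-in `v.timeRangeStart` / `v.timeRangeStop` / `v.windowPeriod` dashboard variables so the resolution follows the panel's zoom level:

```flux
from(bucket: "packetlogs")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) =>
    r._measurement == "packets" and
    r._field == "allocated"
  )
  // one point per Grafana-chosen interval instead of one per packet
  |> aggregateWindow(every: v.windowPeriod, fn: count, createEmpty: false)
```

At a 30-day range Grafana picks a coarse window (the bird's-eye view); as you zoom in, `v.windowPeriod` shrinks and the detail comes back automatically.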

c) anomalies would be spikes compared to a healthy baseline

Hope I have provided what you asked for; if not, please let me know.
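To make spikes against a healthy baseline visible in one panel, a sketch along similar lines (the 5m and 1h window sizes are illustrative assumptions, not recommendations): plot the per-window packet count next to a one-hour moving average, so spikes stand out wherever the two series diverge:

```flux
counts = from(bucket: "packetlogs")
  |> range(start: -24h)
  |> filter(fn: (r) =>
    r._measurement == "packets" and
    r._field == "allocated"
  )
  // assumes one row per packet, so count = packets per 5m window
  |> aggregateWindow(every: 5m, fn: count, createEmpty: false)

counts
  |> yield(name: "packets per 5m")

counts
  // smoothed baseline to compare the raw counts against
  |> timedMovingAverage(every: 5m, period: 1h)
  |> yield(name: "1h baseline")
```

From there, a Grafana alert rule comparing the two series could turn "spike vs. baseline" into a notification rather than something you have to spot by eye.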