CDF plugin - struggle with legend

I have been wanting a CDF panel since I first stumbled into the Grafana world. And lo and behold, it now exists!

@sebastiangunreben I think it is you who developed it? Thanks a bunch! This is really useful for me. :slight_smile: My only issue is that I struggle to get the legend to work properly. As seen in this example, I can name the first three number series, but the last one refuses to be named.


Working:

from(bucket:"piprobe/autogen")
 |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
 |> filter(fn: (r) => r._measurement == "ping" and
  r._field == "rtt" and
  r.probe == "${probe}"
 )
 |> aggregateWindow(every: ${interval}, fn: mean)
 |> map(fn:(r) => ({ _time:r._time,_value:r._value,_measurement:"mean"}))

Failing:

from(bucket:"piprobe/autogen")
 |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
 |> filter(fn: (r) => r._measurement == "ping" and
  r._field == "rtt" 
 )
 |> map(fn:(r) => ({ _time:r._time,_value:r._value}))
 |> aggregateWindow(every: ${interval}, fn: mean)
 |> map(fn:(r) => ({ _time:r._time,group:r._value}))

I have tried multiple variants of the map function: removing it altogether, keeping _value named as _value, introducing _measurement:"group", etc. But the series just keeps being unnamed. Any hints as to why I struggle to get the legend for all number series?
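For reference, one Flux-side variant worth trying (a sketch, untested against this plugin, and assuming Grafana derives the series name from `_field` plus the group-key tags) is to override `_field` with `set()` instead of rebuilding the row with `map()`:

```flux
// Sketch: rewrite the "_field" column so the result frame carries a
// readable name instead of "rtt"; set() replaces the value on every row.
from(bucket: "piprobe/autogen")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "ping" and r._field == "rtt")
  |> aggregateWindow(every: ${interval}, fn: mean)
  |> set(key: "_field", value: "mean rtt")
```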


This continues to be an issue for me. Any suggestions? Or perhaps somebody knows @sebastiangunreben and can give him a hint to look at this post?

Hello @eriktar,
now I see your post. Good idea to address me directly.
I will try to reproduce your finding. May I ask you to give me some insight into what your data structure looks like?
BR,
Sebastian

Thanks for responding @sebastiangunreben. :slight_smile:

I ingest the data in this format into InfluxDB:
ping,probe=probe12,target=ping.example.com ttl=51,rtt=36.7,loss=0 1647264089202160000

This is the resulting data from the working query. Single device.

Resulting data where I can’t get a legend. Mean for all probes.

I have multiple probes, but all ping the same target. Meaning the only difference in the extracted data should be that there are more tags in the group query, and the result should be stripped of any tags as far as I can see.

@eriktar … scenario is now well understood.
Some thoughts about that.

  1. You do not need the “aggregateWindow”, “map”, and “keep” lines, as the CDF only takes a series of values and builds up a CDF.
  2. The legend name is taken from the series name. When you use the InfluxQL query editor, you have the option to insert an “Alias” name, which is taken as the legend name. For Flux, I did not find any proper way to freely define a new name for a series.

I hope switching to InfluxQL is an option for you.
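As an illustration (a sketch, assuming the measurement and field names from the line protocol shown earlier in this thread), the equivalent InfluxQL query could look like this, where the AS alias, or the editor’s “Alias” field, provides the legend name:

```sql
-- InfluxQL sketch: mean RTT per interval, aliased for the legend.
-- $timeFilter and $__interval are standard Grafana template variables.
SELECT mean("rtt") AS "mean rtt"
FROM "ping"
WHERE "probe" = '${probe}' AND $timeFilter
GROUP BY time($__interval)
```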

BR,
Sebastian


like this …


Good old InfluxQL to the rescue.

The map was there to try different variants of renaming the datasets. I believe it works in some cases; I just can't figure out the pattern as to when it works and when it fails.

The aggregateWindow was there to ensure I don't get too many data points back from the query. Perhaps that is not a limitation for this plugin? I'll experiment and find out.

I’ve done a bit of further digging. I think the CDF plugin reads the name from the wrong level in the JSON tree. The outer level does not always contain a name field; the inner level does:

{
    "refId": "C",  # name often, but not always shows up at this level
    "meta": {
      "executedQueryString": "from(bucket:\"piprobe/autogen\")\n |> range(start: 2022-03-16T05:44:43.344Z, stop: 2022-03-16T06:44:43.344Z)\n |> filter(fn: (r) => r._measurement == \"ping\" and\n  r._field == \"rtt\" \n )\n |> map(fn:(r) => ({ _time:r._time,_value:r._value}))\n |> aggregateWindow(every: 1m, fn: mean)\n |> map(fn:(r) => ({ _time:r._time,\"Group\":r._value,_measurement:\"Group\"}))\n"
    },
    "fields": [
      {
        "name": "_time",
        "type": "time",
        "typeInfo": {
          "frame": "time.Time",
          "nullable": true
        },
        "config": {
          "unit": "ms",
          "color": {
            "mode": "thresholds"
          },
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          }
        },
        "values": [
            ...
        ],
        "entities": {},
        "state": {
          "scopedVars": {
            "__series": {
              "text": "Series",
              "value": {
                "name": "Series (C)"
              }
            },
            "__field": {
              "text": "Field",
              "value": {}
            }
          },
          "seriesIndex": 3
        }
      },
      {
        "name": "Group",  # CDF should use this name, as it seems to always show up?
        "type": "number", 
        "typeInfo": {
          "frame": "float64",
          "nullable": true
        },
        "labels": {},
        "config": {
          "unit": "ms",
          "min": 0,
          "color": {
            "mode": "thresholds"
          },
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "mappings": []
        },
        "values": [ ... ]  #  These are the data CDF uses

Hi @sebastiangunreben, I’ve recently started using this plugin, also thanks!
I have a different data source (Snowflake SQL), but the issue is similar: the series name seems to be taken from the query name rather than from the actual series.

I see that there appears to have been a change WRT series naming in v0.2.7, but maybe what should be used is
s.name ? s.name : field!.name instead of
field!.name ? field!.name : s.name
(wild guess)
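Spelled out, the guessed precedence is just a fallback helper (a hypothetical sketch, not the plugin's actual code; the parameter names stand in for `s.name` and `field!.name`):

```typescript
// Hypothetical sketch of the suggested precedence: prefer the frame/series
// name when it is set, and fall back to the field name otherwise.
function pickLegendName(
  seriesName: string | undefined,
  fieldName: string | undefined
): string {
  return seriesName ? seriesName : fieldName ?? "unnamed";
}
```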

BTW I’m using the Partition by values transformation as it’s the only way I found to have multiple series with a single query.

Thank you Felipe, I will have a look into this issue.

BR,

Sebastian

Hello Filipe,

thank you for using this Grafana plugin.

The logic of the labeling is as follows.

There is a DataFrame handed over to the plugin.

It consists of several fields.

Fields store values or timestamps.

The DataFrame has a “name” attribute, which corresponds to the “Alias” field in Grafana.

But the fields also have a “name” attribute. In my case, InfluxDB, these names are set to “value” and “time”.

I need to check how Snowflake SQL adds names to the dataframes.

BR,

Sebastian

Thank you for the prompt action Sebastian!

In case it helps, here is a sample DataFrame JSON from my Snowflake data source. B is the value that shows in the legend for all three series; series-foo, series-bar, and series-baz are the actual names one would expect in the legend:

[
  {
    "schema": {
      "refId": "B",
      "meta": {
        "executedQueryString": "<redacted>",
        "transformations": [
          "partitionByValues",
          "partitionByValues",
          "partitionByValues"
        ]
      },
      "name": "B",
      "fields": [
        {
          "name": "YEAR_WEEK",
          "type": "string",
          "typeInfo": {
            "frame": "string",
            "nullable": true
          },
          "config": {}
        },
        {
          "name": "DURATION",
          "type": "number",
          "typeInfo": {
            "frame": "float64",
            "nullable": true
          },
          "config": {}
        }
      ]
    },
    "data": {
      "values": [
        [
          "series-foo",
          "series-foo",
          "series-foo",
          "series-bar",
          "series-bar",
          "series-bar",
          "series-bar",
          "series-bar",
          "series-bar",
          "series-bar",
          "series-baz",
          "series-baz",
          "series-baz",
          "series-baz"
        ],
        [
          84.04886,
          48.002988,
          82.490841,
          89.124445,
          80.774677,
          86.180786,
          80.57381,
          95.553483,
          86.730673,
          89.427641,
          79.423928,
          77.312786,
          85.523365,
          90.045422
        ]
      ]
    }
  }
]