Speed up long-running Flux Query

Hi guys

I dug a little deeper into pulses, time ranges and so on, and changed my task a bit.
More can be found here: https://community.grafana.com/t/power-consumption-per-day-with-non-equidistant-data-points/76406/9

The task now looks like this:
import "contrib/tomhollingworth/events"

option task = {
    name: "power_consumption_per_day",
    cron: "0 0 * * *",
}

from(bucket: "loxone/autogen")
    |> range(start: -24h, stop: now())
    |> filter(fn: (r) => r["_measurement"] == "energie2")
    |> filter(fn: (r) => r["_field"] == "value")
    |> filter(fn: (r) => r["category"] == "Energie")
    |> filter(fn: (r) => r["name"] == "Verbrauchszähler")
    |> group()
    |> drop(columns: ["result"])
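    // seconds from each point to the next one (the last point runs to _stop)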
    |> events.duration(unit: 1s)
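    // energy per interval: value (kW) * duration (s) / 3600 = kWh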
    |> map(fn: (r) => ({r with power_kWh: float(v: r.duration) / 3600.0 * float(v: r._value)}))
    |> sum(column: "power_kWh")
    |> duplicate(column: "_stop", as: "_time")
    |> drop(columns: ["_stop", "_start"])
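    // rebuild the schema for the target bucket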
    |> map(
        fn: (r) => ({
            _time: r._time,
            _value: r.power_kWh,
            _measurement: "energie_consumption_production",
            _field: "value",
            category: "Energie",
            name: "Verbrauchszähler",
            raum: "Zentral",
            state: "actual",
            type: "Meter",
        }),
    )
    |> to(bucket: "power_consumption_production")
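
For reference, this is how I understand the events.duration / map / sum part: every point is weighted with the seconds until the next point (the last one with the time up to _stop), so non-equidistant intervals are handled correctly. A minimal sketch with made-up timestamps and kW values (array.from is only there to fake some input):

import "array"
import "contrib/tomhollingworth/events"

// hypothetical readings: 2 kW, 1 kW and 0.5 kW at non-equidistant times
array.from(
    rows: [
        {_time: 2023-01-01T00:00:00Z, _value: 2.0},
        {_time: 2023-01-01T00:15:00Z, _value: 1.0},
        {_time: 2023-01-01T00:45:00Z, _value: 0.5},
    ],
)
    |> range(start: 2023-01-01T00:00:00Z, stop: 2023-01-01T01:00:00Z)
    // duration in seconds until the next point, last point until _stop
    |> events.duration(unit: 1s)
    // kW * h = kWh per interval
    |> map(fn: (r) => ({r with power_kWh: float(v: r.duration) / 3600.0 * r._value}))
    |> sum(column: "power_kWh")

That should give 2 * 900/3600 + 1 * 1800/3600 + 0.5 * 900/3600 = 0.5 + 0.5 + 0.125 = 1.125 kWh for that hour.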