Download CSV - Button

Hello everyone,

I am trying to create a button that allows me to download all data from the InfluxDB as a CSV file.

I tried it with an HTML button, but then found out that the text field does not allow me to access the database.

Is there a way to use a button plugin that sends a query and offers the CSV file to save?

Thanks for your help

PS: Influx can create CSV files in the shell with this parameter: -format 'csv' > /file. Maybe it is possible to use that to start the download?
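For reference, the InfluxDB 1.x shell can do this non-interactively; the database name, query, and output path below are placeholders, so adjust them to your setup:

```shell
# InfluxDB 1.x CLI: run a query non-interactively and save the result as CSV.
# "mydb", the query, and the output path are placeholders.
influx -database 'mydb' \
       -execute 'SELECT * FROM "win_disk" WHERE time > now() - 30d' \
       -format 'csv' > /tmp/win_disk.csv
```

This only covers one-off exports from the server side, though, not a button inside Grafana.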


I would be very interested to find a solution to this as well.

Any updates? I’m looking for a similar feature.

I’ve got multiple panels in my dashboard and am looking for a button/command to download all the CSVs at once.

(Rather than Inspect → Data → Download CSV for each one)

I recently found myself with the same issue and also wanted a simple button for the user to download the data. The main reason: with big data, Grafana needs an aggregation to reduce the number of datapoints displayed in a graph. So Inspect > Data > Download CSV works, but it gives you the resulting, reduced datapoints instead of all raw data per minute or second over a period of 30 days or more.

The strategy I used was:

1. Create an HTML button and attach an onClick event or listener with JavaScript.

2. Prepare the necessary Flux script as a JavaScript string (e.g. using backticks ``).

3. Use JavaScript again to query the database directly (in my case InfluxDB) through its API in CSV format. I used XMLHttpRequest to make a POST request to the API.

4. Use that raw data in another function to add it as an “invisible” a-href download link and emulate the click() event.

Result: when I click the button, a CSV file is automatically downloaded to my downloads folder.
The plugin I used was the HTML Graphics plugin for Grafana | Grafana Labs.
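The download trick in step 4 boils down to building a "data:" URI from the CSV text and clicking a hidden anchor. A minimal sketch of that idea (the helper function is pure; the browser-only part is left as a comment since it needs a DOM):

```javascript
// Build a data: URI for a CSV string. The BOM prefix (\uFEFF) helps
// spreadsheet apps such as Excel detect UTF-8 encoding.
function toCsvDataUri(csvText) {
  return 'data:text/csv;charset=UTF-8,' + '\uFEFF' + encodeURIComponent(csvText);
}

// Browser-side usage (sketch): attach the URI to a hidden <a> and click it.
// function triggerDownload(csvText, fileName) {
//   var a = document.createElement('a');
//   a.href = toCsvDataUri(csvText);
//   a.download = fileName;
//   a.click();
// }
```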

My code/solution has a bunch of work-arounds at the moment due to a lot of compensations & experimenting, but if anyone’s still interested, I’ll try to simplify & clean it up as much as possible to post it here.

FYI: you could adapt this strategy to make multiple API calls and download multiple CSVs with one click of the button. However, all of this depends on your database and your JavaScript skills.


I would be interested in your solution and happy to try it out if you are willing to share it.
I would also be interested in a panel that allows us to:

Hello @penguinfr and welcome!

Sorry for this long post. It was a challenge to prepare a working “template” as an example, but I will try to do so below. My concept was based on the following:

  • Export CSV data based upon time period chosen from within Grafana itself
  • Get the data directly from the datasource through its API (remember, my example is based upon the use of InfluxDB as a datasource which comes with its own API out of the box).

Tools I used and tested upon:

  • Grafana 8.4.3
    – plugins I used: InfluxDB datasource (native) & HTML Graphics v2.0.2 (install v2.0.2 manually to work with Grafana 8.3.4 and above)
  • InfluxDB 2.1.1 (with API v2 and Flux scripting)

NOTE: This has been put together as an experiment and needs a LOT of improvement. I used examples from all around in order to create something basic within Grafana to support this idea.
If your Database of choice doesn’t come with an API, then you could always write your own (e.g. with Nodejs) or query it through a different DB (e.g. like how you can query MySQL data inside of influxDB directly).

Alright with all of that out of the way, let’s get started:

In my example above I’ve created two panels. One of them shows the necessary graphs for the user to manipulate with the Grafana time picker. When the desired time range has been selected, you can click the export button in the other panel, which queries the desired data separately and automatically downloads it to your downloads folder.
(The reason to query the data again separately, instead of reusing the graph panel as a datasource, is to give the user the flexibility to get raw data. This is especially useful for big data, because Grafana will only “import” data up to a certain limit, thus needing an aggregation to group all data into smaller chunks.)

  1. Create an empty panel and load any InfluxDB datasource along with HTML Graphics as the visualization
  2. Enter the following Flux script in the datasource under the Query tab. It updates with each time-range change and produces the start and stop unix timestamps, which we need to manually query the API for the raw data through JavaScript.
    Flux script:
from(bucket: "Winservices_v1")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "win_disk")
  |> last()
  |> map(fn: (r) =>
    ({
      start: v.timeRangeStart,
      stop: v.timeRangeStop
    })
  )

Note: I’m just using a dummy bucket database here in order to trick InfluxDB for us to use the map function. With this function I can create my own custom fields.
If you have a different database for this, then you could use that one as well if it supports a way to get the start and stop timestamps from the Grafana time picker (e.q. perhaps MySQL and so on).
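One wrinkle with this trick: Grafana hands these timestamps to the panel as millisecond epochs, while Flux’s range() expects plain unix seconds, which is why the onRender code below divides by 1000. As a tiny helper:

```javascript
// Grafana field values arrive as millisecond epochs;
// Flux range() takes unix seconds, so divide by 1000 and truncate.
function msToUnixSeconds(ms) {
  return Math.floor(ms / 1000);
}
```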

  3. Go to the right properties panel in Edit-mode to configure the rest of the HTML Graphics visualization.
    Fill the following code windows and leave all others empty:

– HTML/SVG document:

<div id="mySubmitButtons">
    <button title="" id="myButton">
        <div id="myButtonIcon">💾</div>
        <span class="myButtonText">Export to .CSV</span>
    </button>
    <span id='status'></span>
</div>

– onRender:

var myButton = htmlNode.getElementById("myButton");
// Grafana delivers the field values as millisecond epochs; Flux range() takes unix seconds
var myStartTime = parseInt(data.series[0].fields[0].values.buffer[0] / 1000);
var myStopTime = parseInt(data.series[0].fields[1].values.buffer[0] / 1000);

var queryBody = `
from(bucket: "Winservices_v1")
  |> range(start: ${myStartTime}, stop: ${myStopTime})
  |> filter(fn: (r) => r["_measurement"] == "win_disk")
  |> filter(fn: (r) =>
    r["_field"] == "Percent_Disk_Read_Time" or
    r["_field"] == "Percent_Disk_Time" or
    r["_field"] == "Percent_Disk_Write_Time"
  )
  |> filter(fn: (r) => r["instance"] == "C:")
  |> keep(columns: ["_time", "_field", "_value"])
  |> aggregateWindow(every: 1m, fn: mean)
  |> filter(fn: (r) => exists r["_value"])
  |> pivot(
      rowKey: ["_time"],
      columnKey: ["_field"],
      valueColumn: "_value"
  )
  |> drop(columns: ["_start", "_stop"])
  // Optional: shift timestamps and format the values as percentage strings
  // |> map(fn: (r) =>
  //   ({
  //     _time: experimental.addDuration(d: 1h, to: r._time),
  //     "Percent_Disk_Read_Time": string(v: r["Percent_Disk_Read_Time"]) + " %",
  //     "Percent_Disk_Time": string(v: r["Percent_Disk_Time"]) + " %",
  //     "Percent_Disk_Write_Time": string(v: r["Percent_Disk_Write_Time"]) + " %"
  //   })
  // )
`;

if (myButton) {
  myButton.setAttribute("onclick", `
    // console.log('Button click detected');

    function saveBlob(blob, fileName, rawData) {
      try {
        // Strip InfluxDB's annotated-CSV bookkeeping columns before saving
        var rawData_cleanedExcessData = rawData.replaceAll(',_result,0,', '');
        var rawData_cleanedExcessHeaders = rawData_cleanedExcessData.replaceAll(',result,table,', '');
        var hiddenElement = document.createElement('a');
        // The BOM (U+FEFF) helps spreadsheet apps detect UTF-8
        hiddenElement.href = 'data:text/csv;charset=UTF-8,' + '\uFEFF' + encodeURIComponent(rawData_cleanedExcessHeaders);
        hiddenElement.target = '_blank';
        hiddenElement.download = fileName;
        hiddenElement.click();
        Button_status.innerText = "Download Complete!";
      } catch (error) {
        Button_status.innerText = "CSV Export Error!";
      }
    }

    this.disabled = true;
    var exportButton = this;
    var Button_status = this.parentElement.children[1];
    var iconButton = this.children[0];
    Button_status.innerText = "requesting data...";

    var xhttp = new XMLHttpRequest();
    // Fill in your InfluxDB v2 query endpoint, e.g. http://<host>:8086/api/v2/query?org=<org>
    xhttp.open("POST", "", true);
    xhttp.setRequestHeader("Authorization", "Token W1UqlTdl80zMyE7jVMd71h1y6wLrlQMcZ03W4VtH1JfUYkUWPIa8rGP2cm0RB25Jfe7rqEOg6tWEY1t8ghlQfk==");
    // Headers for a raw-Flux request against the InfluxDB v2 API
    xhttp.setRequestHeader("Content-Type", "application/vnd.flux");
    xhttp.setRequestHeader("Accept", "application/csv");

    var myDate = new Date();
    var fileName = "Graphs_" + myDate.toISOString() + ".csv";

    xhttp.onload = function () {
      if (xhttp.readyState == 4 && xhttp.status == "200") {
        console.log("Data has been downloaded...");
        var blob = new Blob([xhttp.responseText], { type: 'text/csv' });
        Button_status.innerText = "Exporting to CSV...";
        saveBlob(blob, fileName, xhttp.responseText);
        iconButton.innerText = "✔";
        setTimeout(function () {
          iconButton.innerText = '💾';
          exportButton.disabled = false;
          Button_status.innerText = "";
        }, 1000);
      } else {
        iconButton.innerText = "❌"; = "red";
        Button_status.innerText = "Error in download process";
        exportButton.disabled = false;
      }
    };

    xhttp.onerror = function () {
      alert("Something went wrong!");
      iconButton.innerText = "❌"; = "red";
      exportButton.disabled = false;
    };

    iconButton.innerText = "⏳";
    xhttp.send(\`${queryBody}\`);
  `);
}
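A side note on the replaceAll calls in saveBlob: the InfluxDB v2 API returns “annotated CSV”, where each row is prefixed with result/table bookkeeping columns. Isolated as a standalone helper (same simple string replacement as above; a real CSV parser would be more robust and would also handle more than one result table):

```javascript
// Strip InfluxDB v2 annotated-CSV prefixes: the ",result,table," header
// prefix and the ",_result,0," data-row prefix (table 0 only -- this
// simple replacement does not cover multi-table results).
function stripInfluxAnnotations(rawCsv) {
  return rawCsv
    .replaceAll(',_result,0,', '')
    .replaceAll(',result,table,', '');
}
```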

– CSS:

#myButton {
    display: inline-flex;
    -webkit-box-align: center;
    align-items: center;
    font-size: 18px;
    font-weight: 500;
    font-family: Roboto, Helvetica, Arial, sans-serif;
    padding: 0px 20px;
    height: 48px;
    line-height: 46px;
    letter-spacing: 0.01071em;
    vertical-align: middle;
    cursor: pointer;
    border-radius: 2px;
    background: grey;
    color: rgb(255, 255, 255);
    border: 1px solid transparent;
    transition: background-color 350ms;
}

#myButton:hover {
    background: yellow;
    color: black;
}

#myButton:active {
    background-color: lightyellow;
    transform: translateY(2px);
}

#myButtonIcon {
    display: inline-block;
    vertical-align: middle;
    fill: currentcolor;
    margin-right: 8px;
    margin-left: -12px;
    overflow: hidden;
}

.myButtonText {
    display: flex;
    flex-direction: row;
    -webkit-box-align: center;
    align-items: center;
    white-space: nowrap;
    height: 100%;
}

  4. As for the graph panel, I used a standard Time series visualization with an InfluxDB datasource (match this datasource with the query in the JavaScript code).
    The flux script:
from(bucket: "Winservices_v1")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "win_disk")
  |> filter(fn: (r) => r["_field"] == "Percent_Disk_Read_Time" or r["_field"] == "Percent_Disk_Time" or r["_field"] == "Percent_Disk_Write_Time")
  |> filter(fn: (r) => r["instance"] == "C:")
  |> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
  |> yield(name: "last")
  5. That’s it! Tweak this however you want.

@penguinfr as for your 2nd request, I wouldn’t know how to automate this every day/week/month. I guess you could write this same JavaScript in a NodeJS application or something similar in order to automate it as a server or service (much like your GitHub link reference).


😮 Impressive!!!
Could I make some tweaks to this and add it as an example on the HTMLGraphics website?

I’ll give you credit of course if you have any profile links you want me to add (GitHub, Grafana, Twitter, etc.) 😃

@zuperzee that would be awesome! It still needs a lot of work, like download progress for large data and so on, but I’m eager to see how you’ll change/improve this. 🙂 (You can link my GitHub and Grafana profiles.)


Made an example on the HTMLGraphics website Export to CSV | HTMLGraphics which should work with most data sources. Thanks neshorg for the original post.


@zuperzee thanks for mentioning me and great implementation of this idea!
I’m glad it served as some inspiration and look forward to other possible variations of this.