Trend graph / Complex promql or send summary results to mimir

I run a k6 test in a container and use Prometheus remote write to send the data to Mimir. I’m using the Grafana dashboards to visualize the tests. I can use native histograms.

I want to create a dashboard that shows the trend over test runs. Let’s say I want to see the development of the 95th percentile response time for transaction X across all test runs done during the last 30 days. Every test run should yield a single data point plotted in time.

I see two options:

  1. create a PromQL query that calculates this single data point per test run and plots it in time
  2. at the end of the test run, send the summary output as an additional data point to Mimir

I can’t figure out how to do the first solution. The second solution seems feasible, since the summary is available in the data object: I could parse it and create Prometheus remote write requests to send the data points to Mimir.

Anyone already been here, done that, and hopefully found a simple solution?

Hi @martind62a,

I feel like this is more of a Prometheus or Grafana question than a k6 one, but here is what Grafana Assistant gave me back on your query:

Both approaches are viable. Here’s the practical breakdown:

Approach 1 — PromQL subquery

This works if each test run is tagged with a unique label (e.g., testid). Using a subquery with max_over_time grouped by testid:

max by(testid) (  
  max_over_time(  
    histogram_quantile(0.95,  
      rate(k6_http_req_duration{scenario="transaction_x"}[5m])  
    )[30d:1m]  
  )  
)  

Limitation: this gives you one flat line per testid (constant value for the test’s duration), not a true single plotted point in time. You can reduce it to a point in Grafana using the Last reducer with a stat or scatter panel.

Approach 2 — handleSummary push (recommended)

This gives you a clean single point per run. k6’s handleSummary has access to the pre-computed p95 regardless of whether you use native histograms:

export function handleSummary(data) {  
  const p95 = data.metrics["http_req_duration"]?.values?.["p(95)"];  
  // write to file or http.post to pushgateway  
  return { "summary.json": JSON.stringify(data) };  
}  

Then a post-run script sends it to Mimir or a Prometheus Pushgateway:

P95=$(jq '.metrics["http_req_duration"].values["p(95)"]' summary.json)  
cat <<EOF | curl --data-binary @- http://pushgateway:9091/metrics/job/k6_runs/testid/$TEST_ID  
# TYPE k6_p95_trend gauge  
k6_p95_trend{scenario="transaction_x"} $P95  
EOF  

Then in Mimir you get k6_p95_trend with one value per run, plotted as a scatter/time series over 30 days.

Trade-offs:

  • Approach 1: no extra infrastructure, but the “single point” is approximate
  • Approach 2: needs a Pushgateway or small post-processing script, but gives you precise, clean data with arbitrary metadata labels

If you already have a Pushgateway or can add a small sidecar, Approach 2 is the cleaner path for a trend dashboard.

Hope this helps!

Hi @mstoykov ,

Many thanks for your input!

I should have updated this thread with my own progress as well. Since last week I have a working setup that is similar to your second proposal. All that’s left is to create the dashboard in Grafana, which I believe is the easiest part now.

I’m using handleSummary() to write the full summary data to a summary.json. In the Jenkins pipeline, I then run a Python script that reads this file and sends values to Mimir using the Prometheus remote write protocol (currently only the avg/min/max/p90/p95 for my labelled transactions, but this can be extended).
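The parsing half of such a script can be sketched in plain Python. This is only a sketch, assuming k6’s default summary.json layout (metrics under a top-level "metrics" key, with stats like "avg" and "p(95)" under "values"); the stat names follow k6’s default summaryTrendStats:

```python
import json

# Stats to forward to Mimir; these keys match k6's default summary
# output ("avg", "min", "max", "p(90)", "p(95)").
WANTED = ["avg", "min", "max", "p(90)", "p(95)"]

def extract_stats(summary_path):
    """Read a k6 summary.json and return {metric_name: {stat: value}}
    for every metric that exposes at least one of the wanted stats."""
    with open(summary_path) as f:
        summary = json.load(f)
    results = {}
    for name, metric in summary["metrics"].items():
        values = metric.get("values", {})
        picked = {stat: values[stat] for stat in WANTED if stat in values}
        if picked:
            results[name] = picked
    return results
```

Each extracted value can then become one remote write sample, labelled with the test id and transaction name.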

The remote write protocol needed some investigation: it uses protobuf for encoding and snappy for compression. I’m also new to Python, but both Python and the usage of protobuf are well documented (Protocol Buffer Basics: Python | Protocol Buffers Documentation).
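For anyone curious what that payload looks like on the wire, here is a minimal sketch that hand-encodes the prompb messages (WriteRequest.timeseries = 1; TimeSeries.labels = 1, samples = 2; Label.name = 1, value = 2; Sample.value = 1, timestamp = 2). This is purely for illustration of the wire format; a real script should use the generated protobuf classes plus python-snappy, and the metric and label names below are made up:

```python
import struct

def varint(n):
    """Encode a non-negative int as a protobuf varint."""
    out = b""
    while True:
        b7 = n & 0x7F
        n >>= 7
        if n:
            out += bytes([b7 | 0x80])
        else:
            return out + bytes([b7])

def ld(field, payload):
    """Length-delimited field (wire type 2): strings and nested messages."""
    return varint((field << 3) | 2) + varint(len(payload)) + payload

def label(name, value):
    # prompb.Label: name = 1, value = 2 (both strings)
    return ld(1, name.encode()) + ld(2, value.encode())

def sample(value, timestamp_ms):
    # prompb.Sample: value = 1 (double, wire type 1),
    #                timestamp = 2 (int64 varint, milliseconds)
    return (varint((1 << 3) | 1) + struct.pack("<d", value)
            + varint(2 << 3) + varint(timestamp_ms))

def write_request(metric, extra_labels, value, timestamp_ms):
    """Serialize a prompb.WriteRequest with one series and one sample."""
    # prompb.TimeSeries: labels = 1, samples = 2
    ts = ld(1, label("__name__", metric))
    for k, v in extra_labels.items():
        ts += ld(1, label(k, v))
    ts += ld(2, sample(value, timestamp_ms))
    # prompb.WriteRequest: timeseries = 1
    return ld(1, ts)
```

The resulting bytes still need snappy compression before being POSTed to Mimir’s remote write endpoint with Content-Type: application/x-protobuf and Content-Encoding: snappy headers.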

I also came across Pushgateway, which I understand is basically a webserver that holds Prometheus metrics for you so they can be scraped by Prometheus. I am just not sure I could build a stable solution where test results are stored exactly once in Mimir. (You not only have to send the results to the Pushgateway, you also have to explicitly delete them after Prometheus has scraped them once, otherwise every subsequent scrape re-ingests the same value.)