This might be a stupid question, but after looking through the docs and this forum, I haven't found an answer.
We don't use Grafana/Datadog/New Relic but Dynatrace in our company, so I can't export per-second metrics to those tools.
So I would like to export:
- Requests per second (for each second of the load test)
- Response time (for each second of the load test)
During our load tests, we have sometimes seen the RPS drop for 3 or 4 seconds, but the summary result doesn't give us enough data to debug it.
With that data I would be able to graph the RPS per second like other tools do (JMeter/NeoLoad/…).
Is it possible to get this in a CSV or JSON export with k6, and how do I do it?
Hi, welcome to the forum
If you're using the k6 CLI tool, you can export the generated metrics with the JSON or CSV outputs, and then aggregate them per second using any tool you prefer, such as jq for JSON. You can see some examples on the docs page.
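As a rough illustration, here is a minimal Python sketch of that per-second aggregation over the CSV output. It assumes the CSV has (at least) the columns `metric_name`, `timestamp` (Unix seconds) and `metric_value`, which is the layout of the k6 CSV output; double-check the header of your own export, as the exact columns can differ between k6 versions.

```python
# Per-second aggregation of a k6 CSV export: RPS and average response time.
# Assumed columns: "metric_name", "timestamp" (unix seconds), "metric_value".
import csv
from collections import defaultdict

def aggregate_per_second(path):
    """Return {second: {"rps": ..., "avg_ms": ...}} from a k6 CSV file."""
    counts = defaultdict(int)      # http_reqs samples per second -> RPS
    durations = defaultdict(list)  # http_req_duration samples per second

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            second = int(float(row["timestamp"]))
            if row["metric_name"] == "http_reqs":
                counts[second] += int(float(row["metric_value"]))
            elif row["metric_name"] == "http_req_duration":
                durations[second].append(float(row["metric_value"]))

    return {
        s: {
            "rps": counts[s],
            "avg_ms": sum(durations[s]) / len(durations[s]) if durations[s] else 0.0,
        }
        for s in sorted(counts)
    }
```

The result can be dumped to a new CSV or fed straight into any plotting tool to graph the RPS drops you mentioned.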
If you need this in the k6 Cloud service, then you can download a CSV with the raw metrics from the test results menu. See the documentation.
That's perfectly clear! I didn't know we get one line per metric per request. It's perfect!
Thanks a lot,
I have executed several basic tests (for HTTP and separately for gRPC) and exported CSV files.
I am going to rebuild the original CSV file from k6 so that 1 line corresponds to 1 response, and then group by timestamps.
Could you please confirm or correct me: will every request in the CSV file always produce a block of 12 lines (1 line per metric)? (Here it is 12, but the exact number doesn't matter.)
If so, that would confirm that a single block cannot contain information about several responses.
In the CSV produced by k6 for my tests I always see iterations=1, as shown in the output fragment below:
k6 does not aggregate anything on its own. The only place where metrics are aggregated at all is the cloud output, and even there it is only for some of them.
So you will see one line in the CSV file for each metric sample emitted by k6.
The http_req* metrics you see above are emitted for each request (we will actually add one more in the next release) and will be grouped together, very likely in this order.
The iteration* metrics are emitted at the end of an iteration, i.e. one execution of the "default" function (or the exec one, if you are using scenarios).
And because there is no aggregation, both kinds of samples end up in the output.
So in your case, I would expect that you do only 1 request in the default function, which is why you get these 12 metrics per block.
I hope this answers your question.