I need help aggregating the metrics for a single test that runs in parallel on Kubernetes (K8s). I want to fail the Jenkins pipeline if a threshold fails. I know we cannot do this straightforwardly, so I am thinking of pushing all the metrics into InfluxDB, aggregating them there, and then calling a hook in the Jenkins pipeline to pass or fail the job based on the threshold. Can we add a custom ID field to the k6 output sent to InfluxDB that indicates which metrics belong to a single test?
Or is there a similar kind of approach that somebody has already used?
Yes, you’ll need some additional setup for this kind of workflow, and metrics are certainly one way to do it. Please see the docs on k6 metrics. A threshold is essentially an evaluation of some metric, so if you use one specific metric for a threshold, you can also store it in InfluxDB via the k6 output and process it there, similarly to what k6 does internally. The exact result may vary somewhat, though, since different metrics engines process metrics in slightly different ways.
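As a minimal sketch of that idea: a custom metric with a per-runner threshold, plus a test-wide tag so the InfluxDB side can group points from parallel runners of the same test. The metric name, URL, threshold value, and the `testid` tag name/value are all placeholders I chose, not anything required by k6:

```javascript
import http from 'k6/http';
import { Trend } from 'k6/metrics';

// Custom metric emitted by every parallel runner of the same test.
const loginDuration = new Trend('login_duration');

export const options = {
  // Tags set here (or via `k6 run --tag testid=...`) are attached to every
  // data point, so InfluxDB queries can filter on them.
  tags: { testid: 'build-42' },
  thresholds: {
    // Evaluated per runner; the cross-runner evaluation happens in InfluxDB.
    login_duration: ['p(95)<500'],
  },
};

export default function () {
  const res = http.get('https://test.k6.io'); // placeholder endpoint
  loginDuration.add(res.timings.duration);
}
```

Assuming the InfluxDB v1 output (`k6 run -o influxdb=http://influxdb:8086/k6 script.js`), points from all runners sharing the same `testid` could then be aggregated with something like `SELECT PERCENTILE("value", 95) FROM "login_duration" WHERE "testid" = 'build-42'`, and a Jenkins step could fail the build based on that result.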
You may also want to look at checks.
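For reference, checks feed the built-in `checks` rate metric, which can be thresholded and exported like any other metric. A small sketch (URL and check names are placeholders):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  thresholds: {
    checks: ['rate>0.99'], // fail this runner if more than 1% of checks fail
  },
};

export default function () {
  const res = http.get('https://test.k6.io'); // placeholder endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
    'body is not empty': (r) => r.body && r.body.length > 0,
  });
}
```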
Specifically for k6-operator test runs, another solution is to monitor the exit codes of the runners and rely on the abortOnFail option in the scripts. This can be done without depending on InfluxDB or on the details of metrics aggregation, but it would mean implementing such a monitoring solution yourself.
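A sketch of what abortOnFail looks like in a script (the threshold expression, delay, and URL are placeholders):

```javascript
import http from 'k6/http';

export const options = {
  thresholds: {
    http_req_duration: [
      {
        threshold: 'p(95)<500',
        abortOnFail: true,      // stop this runner as soon as the threshold fails
        delayAbortEval: '30s',  // optionally let the metric stabilize before evaluating
      },
    ],
  },
};

export default function () {
  http.get('https://test.k6.io'); // placeholder endpoint
}
```

A failed threshold makes the runner exit with a non-zero code, which the monitoring side can translate into a failed Jenkins stage.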