Getting incorrect iteration and http_req metrics when running parallel scenarios

I have an API which has to be load tested.
I am planning to achieve 10 RPS by utilising parallel scenarios; please refer to the k6 options below:

export let options = {
  setupTimeout: '15m0s',
  teardownTimeout: '15m0s',
  thresholds: {
    http_req_failed: ['rate==0.00'],
    http_req_duration: ['p(99) < 2000'],
  },
  scenarios: {
    scenario1: {
      executor: 'constant-arrival-rate',
      rate: 2,
      duration: '1m0s',
      preAllocatedVUs: 2,
      maxVUs: 20,
      exec: 'functionone',
    },
    scenario2: {
      executor: 'constant-arrival-rate',
      rate: 3,
      duration: '1m0s',
      preAllocatedVUs: 2,
      maxVUs: 20,
      exec: 'functionOne',
    },
    scenario3: {
      executor: 'constant-arrival-rate',
      rate: 5,
      duration: '1m0s',
      preAllocatedVUs: 2,
      maxVUs: 20,
      exec: 'functionTwo',
    },
  },
  summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'count'],
};

Looking at the options above, I am running 3 scenarios in parallel with different rate values. So when running in parallel it should be 10 RPS in the iterations and in the http_reqs, but it's only ~5 RPS:

     http_req_sending...............: avg=68.72µs  min=19µs     med=65µs     max=491µs    p(95)=127.44µs p(99)=161.45µs count=752
     http_req_tls_handshaking.......: avg=5.3ms    min=0s       med=0s       max=619.22ms p(95)=0s       p(99)=181.54ms count=752
     http_req_waiting...............: avg=417.42ms min=283.03ms med=321.08ms max=13.37s   p(95)=546.76ms p(99)=1.67s    count=752
     http_reqs......................: 752     5.800814/s
     iteration_duration.............: avg=445.56ms min=297.58ms med=322.11ms max=46.36s   p(95)=510.52ms p(99)=969.75ms count=722
     iterations.....................: 720     5.553971/s
     vus............................: 0       min=0      max=13
     vus_max........................: 13      min=12     max=13

What am I doing incorrectly? Am I missing any configuration? Please help me achieve 10 RPS with the help of multiple parallel scenarios.

Much appreciated

Hi @kaushik,

Your configuration looks fine, but judging from your summary report, it appears that your service is unable to deliver a sustained 10 RPS.

Notice that your http_req_waiting p(99) is 1.67s and max is 13.37s, and your iteration_duration p(99) is 969ms and max is 46.36s. http_req_waiting is likely the culprit here, since a lot of your requests' time is spent waiting for the server to respond (a.k.a. TTFB, time to first byte).

The constant-arrival-rate executor will try to start more VUs (up to maxVUs) to reach the desired iteration rate, and if it can't, you should see a warning like:

WARN[0009] Insufficient VUs, reached 20 active VUs and cannot initialize more  executor=constant-arrival-rate scenario=scenario3

You could try increasing the maxVUs value, but ultimately, if your service can't deliver 10 RPS, no configuration change will help you; you'll need to optimize the service to reach that rate.
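As a rough back-of-the-envelope check (my own arithmetic, not something k6 computes for you), the number of VUs an arrival-rate executor needs is approximately rate × iteration duration:

```javascript
// Rough VU requirement for an arrival-rate executor (Little's law):
// concurrent iterations ≈ arrival rate × iteration duration.
function vusNeeded(ratePerSec, iterationDurationSec) {
  return Math.ceil(ratePerSec * iterationDurationSec);
}

// Using the numbers from the summary above:
const targetRate = 10;                     // combined rate across all scenarios
console.log(vusNeeded(targetRate, 0.445)); // avg iteration_duration (445ms) → 5 VUs
console.log(vusNeeded(targetRate, 1.67));  // p(99) http_req_waiting (1.67s) → 17 VUs
```

Plugging in your p(99) waiting time gives ~17 VUs, uncomfortably close to your maxVUs of 20, which is consistent with VU starvation during slow periods.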

I would suggest adding more precise percentiles to the summaryTrendStats values, e.g.: summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(95)', 'p(99)', 'p(99.9)', 'p(99.99)', 'count'], so you can better see the outliers. Or use an output and inspect the metrics in a system of your choice (InfluxDB+Grafana works great).

Note that k6 takes into account the iteration duration to achieve the desired rate, so if you have some operations in the exec function that delay the execution or any sleep()s, etc., then that would skew the overall results. So make sure you’re only making requests and keeping any additional operations to a minimum.
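To illustrate the point with a simple model (my own sketch, not k6 internals): with a fixed VU pool, the achievable rate is capped at roughly VUs / iteration duration, so anything that lengthens the iteration (a sleep(), heavy parsing, etc.) lowers that cap:

```javascript
// Max sustainable arrival rate with a fixed VU pool:
// each VU can complete 1 / iterationDuration iterations per second.
function maxRate(vus, iterationDurationSec) {
  return vus / iterationDurationSec;
}

console.log(maxRate(20, 0.45).toFixed(1)); // request only (~450ms): ~44.4/s
console.log(maxRate(20, 2.45).toFixed(1)); // same request + a 2s sleep(): ~8.2/s
```

With 20 VUs, adding a 2s sleep to a ~450ms request drops the ceiling from well above your target to below 10 RPS.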

Hope this helps,


Thanks for pointing out http_req_waiting. After reading a couple of posts related to setup and teardown, we are now good to proceed further.

Maybe this is a separate question, but I wanted to ask it directly in this same thread. We are now in the analysis phase of comparing k6 against Gatling.

In Gatling we can generate a report with a list of errors, their counts, and percentages.
Is there a way to capture the list of errors that occurred, similar to the screenshot below?

Sure, you can use a custom Rate metric to track errors, and the results will appear in the end-of-test summary.

From my understanding, Rate is used to produce percentage values, but how do we print the list of exceptions? Am I missing something in the documentation?

A custom Rate metric will show up in the end-of-test summary with the percentage and count of passed and failed events, e.g.:


There’s no support for aggregating on the type of error like that, but you can achieve something similar by tagging the error when you add to the metric, e.g. errorRate.add(resp.status >= 400, { reason: "SocketException"}) (you’ll probably have some logic to determine the reason), and then you can aggregate by the reason in whatever output system you’re using.
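For example, here is a plain-JavaScript sketch (the sample data and shape are made up for illustration) of the kind of per-reason aggregation you would do on the tagged samples in your output system:

```javascript
// Sketch: aggregate tagged error samples by reason, Gatling-style
// (count and percentage per reason). In practice these samples would
// come from your k6 output (InfluxDB, JSON output, etc.).
const samples = [
  { failed: true, reason: 'SocketException' },
  { failed: true, reason: 'SocketException' },
  { failed: true, reason: 'HTTP 500' },
  { failed: false, reason: null },
];

function errorReport(samples) {
  const errors = samples.filter((s) => s.failed);
  const byReason = {};
  for (const s of errors) {
    byReason[s.reason] = (byReason[s.reason] || 0) + 1;
  }
  return Object.entries(byReason).map(([reason, count]) => ({
    reason,
    count,
    percentage: (100 * count) / errors.length,
  }));
}

console.log(errorReport(samples));
```

The grouping and percentage math is what a Gatling error table does for you; with k6 you reproduce it downstream from the tagged metric samples.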


Re-initiating this thread for better understanding. As said above, we are running parallel scenarios and trying to achieve 10 RPS with constant-arrival-rate.

The problem I quoted above was not being able to achieve 10 RPS, but I have a few points that need clarification; please see below.

We are running our iterations for 1m, which is 60 seconds, and looking at the iterations count, it is 720.

So ideally my RPS should be iterations / seconds, which is 720/60 = 12 RPS. But although that works out to 12 RPS, in the metrics next to iterations I can see only 5.55/s.
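Working the reported rate backwards (my own arithmetic, assuming the per-second figure is iterations divided by total wall-clock run time rather than the 60s scenario duration):

```javascript
// If the summary's per-second rate is computed over the whole test
// run (including setup/teardown and graceful-stop time), the implied
// wall-clock duration is iterations / reported rate.
const iterations = 720;
const reportedRate = 5.553971; // from: iterations.....: 720  5.553971/s
const impliedRuntimeSec = iterations / reportedRate;
console.log(impliedRuntimeSec.toFixed(1)); // ≈ 129.6 seconds
```

That implies a total run time of roughly 129–130 seconds, considerably more than the 60s of scenario duration.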

From my understanding this has nothing to do with the http_req_waiting time, because none of the iterations were blocked and all iterations were successful.

Kindly correct me if my understanding is wrong.