Missing Iterations in k6 Load Test Results

Hi all,

I recently ran an end-to-end (E2E) test using xk6-browser with k6 version 0.55.0, and I found something strange in the results.

In the summary result, I see this:

 ✗ success
  ↳  14% — ✓ 45 / ✗ 259

From this, it looks like the total number of iterations was 45 + 259 = 304.

However, when I check the last line of the run output, I see a different number:

running (34m56.0s), 000/150 VUs, 556 complete and 43 interrupted iterations
ui ✓ [============================] 000/150 VUs 30m0s

So the actual total should be 556 + 43 = 599 iterations.
That leaves around 295 iterations unaccounted for.

I tried wrapping my test code with try-catch to see if any errors were being thrown, but I couldn’t capture anything useful that explained where the “missing” iterations went.
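For reference, here’s a simplified sketch of how I wrapped the iteration body (the real URL, success condition, and scenario options are redacted/placeholders):

```javascript
import { browser } from 'k6/browser';
import { check } from 'k6';

// Scenario/options omitted for brevity (browser type: chromium).
export default async function () {
  const page = await browser.newPage();
  try {
    await page.goto('https://redacted.example'); // real URL redacted
    check(page, {
      success: (p) => p.url().includes('expected-path'), // placeholder condition
    });
  } catch (err) {
    // This never logged anything that explained the missing iterations:
    console.error(`iteration error: ${err}`);
  } finally {
    await page.close();
  }
}
```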

To rule out environment issues:

  • The Load Test Runner has 192 vCPUs and 180 GB of memory.
  • The total bandwidth during the test was ~200 Mbps out of 1 Gbps available.

So it doesn’t look like CPU, memory, or network bottlenecks were the cause. Also, when I run with a minimal number of iterations, the results are far better.

My questions are:

  1. Why does the summary only show 304 iterations, while the execution log shows 599?
  2. Could there be a reason some iterations are not counted in the ✓/✗ breakdown?
  3. Is this related to interrupted iterations, or maybe to thresholds/metrics that I’ve defined?
  4. How can I properly debug or capture what happens to those “missing” iterations?

Any advice or guidance would be really appreciated.

Thanks!

Hi @paritwai,

Is this behavior something that you can reproduce, or just a thing that happened once?

If you can reproduce it (like, you always observe the same behavior), would you be able to share the script, please?

I know it may contain sensitive data, so I’d appreciate it if you could build a minimal example that reproduces the same behavior, and/or share your script without the sensitive bits (URLs, etc.), while still giving us the opportunity to see its shape: what the test function does and what the scenarios configuration looks like.
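For instance, a redacted skeleton along these lines would already tell us a lot (every value below is a placeholder I guessed from your run output, not your actual config):

```javascript
export const options = {
  scenarios: {
    ui: {
      executor: 'ramping-vus',                    // whichever executor you actually use
      stages: [{ duration: '30m', target: 150 }], // placeholder ramp
      options: { browser: { type: 'chromium' } },
    },
  },
  thresholds: {
    // any thresholds you've defined on `success` or other metrics
  },
};
```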

I’d also appreciate it if you could tell us what that success metric means: how you define it and how you use it. Note that, depending on how you account for it, if an iteration raises an uncaught exception before ticking the metric, the total count might easily be smaller than the total number of iterations.
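As a hypothetical illustration (the endpoint and the throwing step are made up, not taken from your script):

```javascript
import http from 'k6/http';
import { Rate } from 'k6/metrics';

// One possible way a `success` metric could be defined. A check() named
// "success" would show a similar ✓/✗ breakdown, since checks are Rate
// metrics under the hood.
const success = new Rate('success');

export default function () {
  const res = http.get('https://test.k6.io/'); // placeholder endpoint
  if (res.status >= 500) {
    // Any step that throws before the metric is ticked...
    throw new Error(`unexpected status: ${res.status}`);
  }
  // ...means this line never runs: the iteration ends without `success`
  // receiving a sample, so ✓ + ✗ falls short of the iteration totals.
  success.add(res.status === 200);
}
```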

Even if CPU, memory, and network aren’t bottlenecks, anything could still go wrong while running the JS code that describes your load test and end up in a situation like this.

See Thresholds evaluated incorrectly · Issue #4283 · grafana/k6 · GitHub as an example. It talks about “thresholds”, but the underlying situation might be the same: a Rate metric not registering a sample for every iteration.

Thanks!