I’ve been working with k6 OSS for a couple of months now. In some runs, the number of requests increases dramatically with the same test profile, and I’m not sure why. The only correlation between those runs was some script errors.
Recently, I ran two rounds of the same test with the same users and workflows, with only one particular function changed, but the number of requests between those two runs is totally different. The function I changed threw a script error, which was logged throughout the entire second run (it is fixed now):
level=error msg="TypeError: Cannot read property 'scenario' of undefined\n\tat
Could behavior like this be caused by script errors?
Without seeing the script(s) it is difficult to say for sure.
Take the example of a constant-vus executor where some iterations fail mid-execution, after the VU code has already sent some requests. When an iteration aborts on an error, the VU is freed sooner than if it had run the complete iteration and waited for the remaining endpoints to respond (plus any think time), so it starts the next iteration, and its first requests, earlier. The result can be more requests per second than in an error-free run.
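To illustrate the effect, here is a small plain-Node simulation (not a k6 script) of a single VU with a fixed time budget. The response time, the number of requests per iteration, and the think time are all made-up numbers, just to show the mechanism: an iteration that errors after its first request skips the remaining requests and the `sleep()`, so the VU re-enters the loop much faster.

```javascript
// Simulate one VU running iterations for `budgetMs` milliseconds.
// A full iteration: 3 requests of 100 ms each, then 1000 ms of think time.
// A failing iteration: errors after the first request, skipping the rest.
// (All timings are hypothetical, chosen only for illustration.)
function simulate(budgetMs, failAfterFirstRequest) {
  const reqMs = 100;    // assumed response time per request
  const sleepMs = 1000; // assumed think time at the end of a full iteration
  let elapsed = 0;
  let requests = 0;

  while (elapsed + reqMs <= budgetMs) {
    // First request of the iteration.
    elapsed += reqMs;
    requests++;

    if (failAfterFirstRequest) {
      // Script error: iteration aborts, VU immediately starts the next one.
      continue;
    }

    // Remaining two requests of a complete iteration.
    for (let i = 0; i < 2 && elapsed + reqMs <= budgetMs; i++) {
      elapsed += reqMs;
      requests++;
    }
    // Think time (sleep) at the end of the iteration.
    elapsed += sleepMs;
  }
  return requests;
}

console.log('full iterations:   ', simulate(13000, false)); // 30 requests
console.log('failing iterations:', simulate(13000, true));  // 130 requests
```

With the same 13-second budget, the erroring VU fires 130 requests versus 30 for the healthy one, even though each failed iteration sends fewer requests, simply because the iterations are so much shorter.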
If we can have a look at the script and the details of where the changed function failed, we might be able to understand it better.
I hope this makes sense for your concrete case.