I guess you have read this explanation of arrival rate and the open and closed models.
Each time you call `http.get`, it blocks the whole VU, making it wait until the call returns - which is (more or less) the time it takes to complete the HTTP request.
`ramping-vus` increases or decreases the number of VUs executing code at any given time. Those VUs still execute one line at a time and will wait for requests to finish before continuing.
Even if you have 1m VUs, if your server starts responding really slowly they won't be making 1m requests/s, as the load generator waits for each response.
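To make this concrete, here is a minimal sketch of the closed-model arithmetic (the function name `maxRps` and the numbers are just for illustration, not part of k6):

```javascript
// In a closed model each VU waits for the response before starting its
// next request, so the maximum throughput is bounded by VUs / responseTime.
function maxRps(vus, responseTimeSeconds) {
  return vus / responseTimeSeconds;
}

console.log(maxRps(1000000, 1));  // 1000000 -> 1m RPS while responses take 1s
console.log(maxRps(1000000, 10)); // 100000  -> only 100k RPS once they take 10s
```

The same VU count produces 10x fewer requests per second once the server slows down by 10x - which is exactly the request-rate drop described above.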
Because k6 in practice works with execution of code, and one execution of the default function is called an iteration, the way to say you want to start iterations at some rate - which then translates to starting requests at some rate - is to use the arrival-rate executors. Those executors start iterations either at a constant rate or at a changing (usually ramping) one.
Those still need VUs/JS VMs to actually execute the code, so if you were not getting anywhere near the RPS you want at 500 VUs, you will likely need to set `preallocatedVUs` higher than that.
This is likely why you were still seeing dropped iterations - k6 needed a VU to run the code, but no VU was free.
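As a sketch, an arrival-rate scenario with enough preallocated VUs could look like this (`constant-arrival-rate`, `rate`, `timeUnit`, `duration`, and `preallocatedVUs` are real k6 scenario options; the scenario name and the numbers are made up for illustration):

```javascript
// Config fragment: start 1000 iterations per second for 5 minutes,
// with 1000 VUs initialized up front so iterations are not dropped
// for lack of a free VU (assuming iterations finish in about 1s).
export const options = {
  scenarios: {
    constant_load: {
      executor: 'constant-arrival-rate',
      rate: 1000,            // iterations to start...
      timeUnit: '1s',        // ...per this amount of time
      duration: '5m',
      preallocatedVUs: 1000, // VUs initialized before the test starts
    },
  },
};
```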
I would also recommend reading up on running large tests specifically - you might need to tune your OS settings.
> however I cant seem to wrap my head around how it works
k6 will try to hit the target `rate` of iterations per `timeUnit` (another configuration option) by starting a new iteration on a free VU. If no free VU is available, it will drop the iteration and emit a metric saying it did so. If you have configured `maxVUs`, it will also start initializing a new VU in the background. I would recommend against setting `maxVUs` - in practice it has turned out to rarely help. If you initialize VUs mid-test, it still takes the same memory/CPU resources, but now it also uses them up while the test is running instead of at the very beginning, potentially changing the results.
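A rough way to size `preallocatedVUs` up front, instead of relying on `maxVUs`, follows from Little's law: to sustain a given iteration rate, you need about `rate * average iteration duration` VUs busy at once. A minimal sketch (the helper `vusNeeded` is hypothetical, not a k6 function):

```javascript
// Back-of-the-envelope sizing: to start `ratePerSecond` iterations each
// second, with each iteration lasting `iterationDurationSeconds`, roughly
// that many VUs must be busy at any moment.
function vusNeeded(ratePerSecond, iterationDurationSeconds) {
  return Math.ceil(ratePerSecond * iterationDurationSeconds);
}

console.log(vusNeeded(1000, 0.8)); // 800 -> 1000 iters/s at ~800ms each
console.log(vusNeeded(1000, 2.5)); // 2500 -> the same rate needs 2500 VUs
                                   //         if the SUT slows to 2.5s
```

In practice you would add some headroom on top of this estimate, since response times fluctuate during the test.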
> I still see request/sec drop whenever response time increases, so I am really at a loss right now.
At some point any tool will hit a limit, depending on how it is designed. In k6 the limit you are most likely to hit is not having a free VU to run the code, but even without that you will:
- hit the limit on the number of open files
- run out of network ports to make requests from
- hit CPU/memory limits
- saturate the network bandwidth between the two systems
I guess as a workaround you can set a `timeout`, at which point k6 will abort the given request so a new one can start.
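For example, a per-request timeout can be passed in the params object of `http.get` (the `timeout` param is a real k6 request option; the URL is hypothetical):

```javascript
// Sketch: abort requests that take longer than 10s so the VU frees up
// and can start the next iteration instead of hanging on a slow response.
import http from 'k6/http';

export default function () {
  http.get('https://example.com/slow-endpoint', { timeout: '10s' });
}
```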
In general, because the system under test (SUT) is doing a lot more than the load generator, you will have a hard time hitting those limits, as the SUT will hit its CPU/memory limits way before that. This might not be the case for you, depending on how your SUT is designed and what it does.
Hope this helps you!
edit: I have opened an issue to update the documentation to not use `ramping-vus` for the spike-testing example