I have been using the new “ramping-arrival-rate” executor successfully, but now I would like to do something similar while controlling the ‘rate’ via the k6 API.
Is that possible with the executor “externally-controlled”?
If not, is it planned to be implemented anytime soon?
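For reference, this is roughly the kind of setup I am looking at (just an illustrative sketch; the scenario name and values are made up):

```javascript
// Illustrative sketch only; scenario name and values are made up.
// As far as I can tell, this executor lets an external controller change the
// number of VUs at runtime (e.g. via `k6 scale` or the REST API), and I would
// like the same kind of runtime control over an arrival rate instead.
export const options = {
  scenarios: {
    external: {
      executor: 'externally-controlled',
      vus: 10,        // initial number of VUs
      maxVUs: 100,    // upper bound the external controller can scale up to
      duration: '30m',
    },
  },
};

export default function () {
  // test logic would go here
}
```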
Thanks for the answer, though it is not the one I was hoping for.
I am testing nodes that communicate via a REST API over HTTP/2, and I would like to write an automatic stress-test case via JCAT. For this, I need to increase the RPS value (‘rate’ in the “ramping-arrival-rate” executor) remotely until the node can no longer handle the additional traffic. I could control the number of VUs remotely, but that is of no use for this particular scenario.
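Roughly what I run today (a simplified sketch; the scenario name, stage values and endpoint are placeholders). It is the arrival rate set by `startRate` and the stage `target`s that I would like to drive remotely from JCAT instead of hard-coding:

```javascript
// Simplified sketch of the current setup; values and endpoint are placeholders.
import http from 'k6/http';

export const options = {
  scenarios: {
    stress: {
      executor: 'ramping-arrival-rate',
      startRate: 50,          // iterations started per timeUnit at the beginning
      timeUnit: '1s',         // i.e. startRate and targets are effectively RPS
      preAllocatedVUs: 100,
      maxVUs: 1000,
      stages: [
        { target: 200, duration: '2m' },  // ramp the arrival rate up
        { target: 500, duration: '2m' },  // keep increasing the load
      ],
    },
  },
};

export default function () {
  http.get('https://node-under-test.example.com/api/health'); // placeholder endpoint
}
```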
Looks like I am not the only one with this kind of use case; I just found this request:
Can you encode that logic with the current k6 thresholds, using abortOnFail? (See the Thresholds documentation.)
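Something along these lines, for example (a sketch only; the metric, limit and delay are placeholders you would tune to whatever “cannot handle additional traffic” means for your node):

```javascript
// Sketch: abort the whole test run once the node stops keeping up.
// The metric, limit and delay below are placeholders, not a recommendation.
export const options = {
  thresholds: {
    http_req_duration: [
      {
        threshold: 'p(95)<500',  // e.g. 95th percentile latency must stay below 500ms
        abortOnFail: true,       // stop the test as soon as this threshold is crossed
        delayAbortEval: '10s',   // let the metric settle before evaluating the abort
      },
    ],
  },
};
```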
Or, alternatively, just give your ramping-arrival-rate scenario an ever-increasing rate, well past the point where you expect the node to fail, and when it does, use the “stop test” functionality of the k6 REST API? It’s not properly documented yet, but I gave an example of how it can be used in “use the rest API to stop the test” (Issue #162 · grafana/k6-docs · GitHub).
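For completeness, the stop call itself looks roughly like this (a sketch, assuming the k6 REST API is reachable on its default localhost:6565 address and that I remember the payload format correctly; see the linked issue for the authoritative example):

```javascript
// stop-test.mjs -- sketch of stopping a running k6 test via its REST API.
// Requires an environment with the Fetch API (e.g. Node.js 18+) and assumes the
// controller (e.g. the JCAT test case) can reach the k6 API on localhost:6565.
const K6_API = 'http://localhost:6565';

async function stopTest() {
  const res = await fetch(`${K6_API}/v1/status`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      data: { type: 'status', id: 'default', attributes: { stopped: true } },
    }),
  });
  console.log('stop request status:', res.status);
}

stopTest();
```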
We’ve recently been trying to discourage the use of the rps option (see this discussion in the recent gRPC PR, for example). It was a quick-and-dirty workaround from long before k6 had arrival-rate executors, and it has a lot of peculiarities and limitations that generally make it hard to reason about and use correctly. So, while we won’t deprecate it quite yet, we definitely won’t add an API to control it. Between that and a new REST API to control the arrival rate, the latter is much more likely. At this point we might accept a good PR for it, but we are unlikely to get to it ourselves any time soon, given the stiff competition from other tasks much higher on the priority list…