I can't understand ramping executors behavior

I’m seeing the opposite behavior to what I expected from each ramping executor.
For ramping-vus with a single stage {duration: 2m, target: 1000}, I see the following scenario, where the number of VUs keeps increasing independently of the system response time (which keeps growing)


The same scenario using the ramping-arrival-rate executor behaves like this instead

I’ve tried increasing the amount of preallocatedVUs in the ramping-arrival-rate from 100 to 1000, but then it achieves the same rate of iterations (with worse performance, because of the increased number of connections)

I’m trying to create a test scenario to discover the saturation point, where the throughput starts to decrease if the concurrency keeps increasing, e.g.

But I couldn’t achieve this with either of the executors, and I couldn’t even make sense of the k6 behavior in relation to the configuration I’m setting.

I would be happy to test any scenario you propose and paste the results here.

Thanks.

Hi @gabrielgiussi1,

Looking at the first graph, you have found the saturation zone - 300 RPS is what the current setup can handle.

Adding more VUs gives the same throughput - which will likely fall if you sustain it longer or increase the VUs more.

Although maybe it won’t, depending on what the limitation here is. AFAIK this illustration depends on there being some finite resource that runs out, where requesting more of it while it is not available degrades performance. But that might not be true here, or might be hard to observe.

For example, if in your case you are bandwidth-limited by your network interface (at 300 RPS you need 41 kB per request to saturate 100 Mbps, or 410 kB per request for 1 Gbps), adding more requests will likely not decrease performance any further until the interfaces in between can no longer keep up with all the connection states. That is unlikely before you hit hundreds of thousands, if not millions, of connections.

On your other points:

The arrival-rate graph seems off to me. The VUs go to their target values halfway through, while there is already a ton of requests, which seems way off.
Looking at the timeline they are aligned, so I have no idea what is happening :person_shrugging: .

I couldn’t even make sense of the k6 behavior in relation to the configuration I’m setting.

I don’t know exactly what is confusing to you, so I’ll spell out how the executors work.

Both executors (ramping-vus and ramping-arrival-rate) have a set of stages that they go through.

One (ramping-vus) changes how many VUs are active during these stages. Each VU just loops and does iterations as fast as possible. This means that if there are supposed to be 100 VUs, your test will have 100 JS VMs constantly looping over the default function.

Whether that means 100 RPS, or 1000, or 1, depends entirely on how fast the system under test (SUT) answers (and on what the script does, but let’s say it is mostly dominated by the SUT). If the SUT returns in 5ms you will get a lot of requests done, but if the SUT starts returning slower, the 100 VUs will still be doing only 100 concurrent requests (simplified). This is called a closed model, as the SUT influences how it is tested.
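As a minimal sketch, this is roughly what a ramping-vus scenario matching your single stage looks like (the scenario name and URL are placeholders, not taken from your setup):

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    closed_model: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 1000 }, // ramp from 0 to 1000 looping VUs
      ],
    },
  },
};

export default function () {
  // Each VU loops over this function as fast as the SUT answers,
  // so the achieved RPS depends entirely on the SUT's response time.
  http.get('https://test.k6.io'); // placeholder URL, replace with your SUT
}
```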

The ramping-arrival-rate executor has a number of VUs (I recommend not setting maxVUs, just use preallocatedVUs), which will do a changing number of iterations (the target in this case). More accurately, it will try to start that many iterations. k6 runs JS code, and that JS code needs a VM to run in. JS is single-threaded by design, so only one iteration can be run by any given VU at any given time, and it needs to finish before a new one can start.

But if you want a test that checks that your system can continuously match a given iteration rate (as RPS is not really something k6 understands all that much), you can have an arrival-rate executor with that rate set up and a bunch of VUs, and it will keep that rate if possible. If not, it will emit dropped_iterations.
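A sketch of that, assuming example values for the rates (the scenario name and URL are placeholders):

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    open_model: {
      executor: 'ramping-arrival-rate',
      startRate: 50,         // iteration starts per timeUnit at t=0 (example value)
      timeUnit: '1s',
      preallocatedVUs: 1000, // no maxVUs, as recommended above
      stages: [
        { duration: '2m', target: 500 }, // ramp iteration *starts* to 500/s
      ],
    },
  },
};

export default function () {
  // Iterations are started on schedule regardless of how fast the SUT
  // answers; if no free VU is available, k6 emits dropped_iterations.
  http.get('https://test.k6.io'); // placeholder URL
}
```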

While the rate of iteration starts is not based on the SUT returning results, we are still limited by that in how many in-flight iterations we have. This is called an open model.

But with ramping-vus, making k6 keep a given iteration rate involves a bunch of sleeps and calculations, which are very troublesome to get right.

So ramping-vus will loop a changing/ramping number of VUs and do as many iterations as possible.

Arrival-rate tries to reach a given rate of iteration starts, and will signal it (via dropped_iterations) if it can’t.

You can choose either one depending on what you want to test.

I would probably go with 5k VUs or something like that, and a fairly slowly ramping rate that also stays at given levels for a minute or two.
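Something along these lines, as a rough sketch (the specific rates and step durations are made-up examples, tune them to your system):

```javascript
export const options = {
  scenarios: {
    find_saturation: {
      executor: 'ramping-arrival-rate',
      startRate: 50,
      timeUnit: '1s',
      preallocatedVUs: 5000, // "5k VUs or something like that"
      stages: [
        { duration: '2m', target: 100 }, // slow ramp
        { duration: '2m', target: 100 }, // hold at this level
        { duration: '2m', target: 200 },
        { duration: '2m', target: 200 },
        { duration: '2m', target: 300 },
        { duration: '2m', target: 300 }, // keep stepping up until dropped_iterations appear
      ],
    },
  },
};
```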

Hope this helps you and sorry for the long reply :grimacing:


Hi @mstoykov
Thanks for the detailed response.

The ramping-arrival-rate has a number of VUs (I recommend not setting maxVUs, just use preallocated VUs)

I was confused about the number of dropped iterations increasing while k6 was not trying to create new VUs. I read the documentation again, and this is because maxVUs is set to preallocatedVUs if no value is provided.
I think I will actually try with maxVUs because setting a large number of preallocatedVUs is causing my envoy container to be throttled because all the connections are attempted at the same time (when k6 starts), there is no ramping behaviour for this.
So I guess I’ll try setting a lower number of preallocatedVUs and pay the price of allocating new ones as the script goes.
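Something like this, for reference (the numbers are just examples I’ll tune):

```javascript
export const options = {
  scenarios: {
    lazy_allocation: {
      executor: 'ramping-arrival-rate',
      startRate: 10,
      timeUnit: '1s',
      preallocatedVUs: 100, // fewer connections attempted when k6 starts
      maxVUs: 1000,         // let k6 allocate more VUs mid-test if needed
      stages: [{ duration: '2m', target: 500 }],
    },
  },
};
```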

causing my envoy container to be throttled because all the connections are attempted at the same time (when k6 starts), there is no ramping behaviour for this.

Connections will be made on the first iteration with that VU.

arrival-rate goes through the VUs in a “FIFO” way, let’s say, so all VUs will be used. But if you are doing 10 iterations a second, only 10 VUs will try to make a new connection each second.

So if you have 1000 VUs and 10 iter/s, it should take around 100 seconds to go through every VU.
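That arithmetic can be sketched as a tiny helper (the function name is hypothetical, just for illustration):

```javascript
// With arrival-rate executors, VUs are picked in FIFO order, so a fresh VU
// (and hence a new connection) is only used when the rotation reaches it.
// The time to cycle through every preallocated VU is therefore roughly:
function secondsToWarmAllVUs(preallocatedVUs, iterationsPerSecond) {
  return preallocatedVUs / iterationsPerSecond;
}

// 1000 VUs at 10 iterations/s -> about 100 seconds to touch every VU.
console.log(secondsToWarmAllVUs(1000, 10)); // → 100
```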

If this is a problem for your envoy, I would recommend having a very slow ramp-up period. But I expect this will be the same problem with ramping-vus.

So I guess I’ll try setting a lower number of preallocatedVUs and pay the price of allocating new ones as the script goes.

This should in practice not help you at all, as you will just drop a few iterations (which is usually not what you want) and start making more and more new VUs and connections.

You can definitely try it, and maybe for your particular case it will work fine. There is definitely a chance envoy will work better that way - I have never used it, so :person_shrugging: . But if at the end you still have 1k VUs (for example) having to do requests, envoy will still need to be able to handle this.

Also, how certain are you that envoy isn’t the thing currently throttling you at 300 req/s?