Understanding stages in ramping-arrival-rate

Here is my sample executor

executor: 'ramping-arrival-rate',
timeUnit: '1m',
stages: [
	{ target: 10000, duration: '1m' },
	{ target: 20000, duration: '1m' },
	{ target: 30000, duration: '1m' },
	{ target: 40000, duration: '1m' },
	{ target: 50000, duration: '1m' },
]

How should I understand the arrival rate in the above? Let's say the test starts at 10am:

1. 10:00:00 – 10:00:59: 10000 requests (total requests at the end of this stage: 10000)
2. 10:01:00 – 10:01:59: 10000 requests (total requests at the end of this stage: 20000)
3. 10:02:00 – 10:02:59: 10000 requests (total requests at the end of this stage: 30000)
4. 10:03:00 – 10:03:59: 10000 requests (total requests at the end of this stage: 40000)
5. 10:04:00 – 10:04:59: 10000 requests (total requests at the end of this stage: 50000)

Is my understanding correct?

Also, is there any way I can randomize my requests? As an example, in my application the requests per minute keep varying:
10:00:00am - 4394, 10:01:00am - 4000, 10:02:00am - 3422, and so on.
I would like to always send a random number of requests in the range [3000-4500] per minute. Does any executor provide this?

I’m asking because I have to build a model using ramping-arrival-rate that matches my app’s request pattern. Any suggestions on this?

Hi @ampk6. The ramping-arrival-rate executor can generate the required number of requests in each stage, but this depends on preAllocatedVUs: if the number of preAllocatedVUs is not enough to generate the desired RPS, the target rate won’t be reached. You can share the summary console report from k6 for more information.

Thanks @Elibarick, but my question was mostly about other aspects, such as understanding the ramping-arrival-rate behaviour, and the other questions I listed above.

Hi @ampk6,

I’ll try to explain how the ramping-arrival-rate executor works, and hopefully this clarifies some of your doubts.

First of all, it’s important to note that the target number of each stage specifies the number of iterations per timeUnit, which does not necessarily translate directly to requests per second (RPS) that you wish to achieve. The executor itself is not concerned whether the function it’s executing is making HTTP requests, or sending WebSocket messages, or whatever else it might be doing. This is important as k6 does much more than just make HTTP requests, so the executor is agnostic to the work that it’s doing.

So what the ramping-arrival-rate executor does is determine how many iterations per timeUnit it needs to run at any point in time, and then dynamically schedule the required VUs to achieve that rate. The duration value is how long the linear ramp up or down will last.

In your example, the first stage will linearly ramp up from 0 iterations per 1m, to 10000 iterations per 1m (approx. 166 iters/s). At the end of the first minute, the iteration rate will be 10000/1m, and then the following stages will keep increasing this linearly to 20000, 30000, etc.
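To make the linear interpolation concrete, here is a sketch in plain JavaScript (not a k6 script; `rateAt` is just an illustrative helper, not part of the k6 API):

```javascript
// Rate (iterations per timeUnit) at time t within one stage,
// interpolating linearly from the rate at the stage start to the stage target.
function rateAt(t, duration, startRate, target) {
  return startRate + (target - startRate) * (t / duration);
}

console.log(rateAt(30, 60, 0, 10000)); // halfway through stage 1 → 5000/1m (~83 iters/s)
console.log(rateAt(60, 60, 0, 10000)); // end of stage 1 → 10000/1m (~166 iters/s)
```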

Note that:

  • This might not be equal to 166, 333, 500, etc. RPS, as I mentioned above, as it will depend on what your VU function is doing. That said, if you’re only calling http.get() once and not doing any other calculation or processing, then the iters/s and RPS will be the same.

  • Because it’s a linear ramp up, this doesn’t mean that you will make 10000 requests in each stage.

    Rather, according to my calculations, with the configuration you’re using, you would end up with something like this:

    1. 10:00:00 – 10:00:59: 5000 requests (total requests at the end of this stage: 5000)
    2. 10:01:00 – 10:01:59: 15000 requests (total requests at the end of this stage: 20000)
    3. 10:02:00 – 10:02:59: 25000 requests (total requests at the end of this stage: 45000)
    4. 10:03:00 – 10:03:59: 35000 requests (total requests at the end of this stage: 80000)
    5. 10:04:00 – 10:04:59: 45000 requests (total requests at the end of this stage: 125000)

    But, again, all of this is assuming that you’re making a single HTTP request in each iteration, so the actual numbers might be different for you.

    BTW, you can confirm all of this using the k6/execution module. It exposes instance and scenario level counters like iterationsCompleted and iterationInTest, which you can log to confirm the behavior.

    And the test summary output at the end of the test would give you the final iters/s and RPS, so you can comment out each stage to see what happens.
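To double-check those per-stage numbers, the trapezoid arithmetic behind them can be sketched in plain JavaScript (not a k6 script, just the calculation):

```javascript
// Per-stage iterations for a linear ramp:
// iterations = (rate at stage start + rate at stage end) / 2 × duration (here 1 timeUnit)
const targets = [10000, 20000, 30000, 40000, 50000]; // stage targets, per 1m
let rate = 0;   // startRate defaults to 0
let total = 0;
const totals = [];
for (const target of targets) {
  const perStage = (rate + target) / 2; // average rate over a 1m linear ramp
  total += perStage;
  totals.push(total);
  rate = target;
}
console.log(totals); // [5000, 20000, 45000, 80000, 125000]
```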

It’s also important to mention that the value of preAllocatedVUs matters a lot in this executor if you want stable performance in each test run. If the required rate can’t be met with the currently allocated VUs, the executor will initialize more during the test run, which has a performance impact on the test itself and would skew your final metrics. So set it high enough that no additional VUs need to be initialized mid-test. You can determine this number by running the test with some baseline number of VUs, observing how many VUs are actually needed to achieve the configured rate, and then setting preAllocatedVUs to that value or slightly higher.
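For a rough starting point before that baseline run, you can estimate the required VUs with Little’s law (a sketch in plain JavaScript; the 500ms average iteration duration below is an assumed example value you’d replace with one measured from a trial run):

```javascript
// Rough VU estimate via Little's law:
// concurrent VUs ≈ arrival rate (iters/s) × average iteration duration (s)
function estimateVUs(peakRatePerTimeUnit, timeUnitSeconds, avgIterDurationSeconds) {
  const itersPerSecond = peakRatePerTimeUnit / timeUnitSeconds;
  return Math.ceil(itersPerSecond * avgIterDurationSeconds);
}

console.log(estimateVUs(50000, 60, 0.5)); // peak of 50000/1m with 500ms iterations → 417
```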

k6 will log a message whenever it’s initializing unplanned VUs mid-test, which you can see if you run with k6 run --verbose:

DEBU[0000] Starting initialization of an unplanned VU...  executor=ramping-arrival-rate scenario=default
DEBU[0000] Initializing an unplanned VU, this may affect test results  executor=ramping-arrival-rate scenario=default
DEBU[0000] Initialized VU #3                             executor=ramping-arrival-rate scenario=default

As for your final question, in order to model that kind of load, you would need to simulate the ramp-downs as well. So something like this configuration might work:

executor: 'ramping-arrival-rate',
preAllocatedVUs: 1000,
timeUnit: '10s',
startRate: 30000,
stages: [
  { target: 23000, duration: '1m' },
  { target: 20000, duration: '1m' },
  { target: 19500, duration: '1m' },
  { target: 22000, duration: '1m' },
  { target: 26000, duration: '1m' },
  // ...
]

This should approximate the first 5 minutes of the graph you posted. I’m not sure about the timeUnit value, since it depends on how that graph is aggregated, so you should play around with this.

For the random requests, you could use a second ramping-arrival-rate scenario that runs a separate function, and has a different scheduling configuration from the main one.
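Alternatively, if you want the per-minute target itself to be random in [3000, 4500], one option is to generate the stages array programmatically when the script is evaluated. Here is a sketch in plain JavaScript (`randomStages` is an illustrative helper, not part of the k6 API; note the ramps between consecutive targets will still be linear, so the actual counts per minute won’t exactly equal the random targets):

```javascript
// Generate one stage per minute with a random target in [min, max].
function randomStages(minutes, min, max) {
  const stages = [];
  for (let i = 0; i < minutes; i++) {
    const target = min + Math.floor(Math.random() * (max - min + 1));
    stages.push({ target, duration: '1m' });
  }
  return stages;
}

// e.g. pass this as the `stages` option of a ramping-arrival-rate scenario
// with timeUnit: '1m'
const stages = randomStages(10, 3000, 4500);
console.log(stages.length); // 10
```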

Hope this helps!