How is the Ramping Arrival Rate distributed amongst VUs?

We’re testing a GraphQL endpoint that receives orders. I’m not interested in the number of concurrent users, simply in how many orders per minute we’re sending to the endpoint.

I have the following scenario:

export const options = {
  scenarios: {
    new_order: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      timeUnit: '1m',
      preAllocatedVUs: 1,
      maxVUs: 1,
      stages: [
        { target: 500, duration: '10m' },
        { target: 500, duration: '5m' },
        { target: 1000, duration: '10m' },
        { target: 1000, duration: '5m' },
        { target: 0, duration: '15m' },
      ],
    },
  },
};

I’ve noticed a large number of dropped iterations:

dropped_iterations…: 745 0.724711/s

If I add more VUs, will the requests be distributed amongst all VUs? Or will each VU be executing each stage?

For example, if I use the above scenario with 2 VUs, will it ramp up to a maximum of 1,000 requests per minute in total, or will it be 1,000 per minute per VU = 2,000 per minute for 2 VUs?

If more VUs equate to more stability in the test, then I’d like to scale up; however, if it means multiplying the rate by the number of VUs, then I’d like to keep it at 1.

Thank you!

I’d also be interested in extending this question to cover region-based testing. Does each region execute each stage, or is the rate distributed amongst all regions?

Hi @loadsquad,
Welcome to the community forum :wave:

I’ll try my best to support you.

If I add more VUs, will the requests be distributed amongst all VUs? Or will each VU be executing each stage?
For example, if I use the above scenario with 2 VUs, will it ramp up to a maximum of 1,000 requests per minute in total, or will it be 1,000 per minute per VU = 2,000 per minute for 2 VUs?
If more VUs equate to more stability in the test, then I’d like to scale up; however, if it means multiplying the rate by the number of VUs, then I’d like to keep it at 1.

No, the target rate is not multiplied by the number of VUs. The ramping-arrival-rate executor aims to keep the iteration rate at the level defined by the stages, regardless of how many VUs are running. If needed, k6 will dynamically add VUs (up to maxVUs) to achieve the configured iteration rate, so the iterations are distributed among the available VUs.
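As a rough illustration (the iteration duration here is only an assumption, not something taken from your test): at the 1,000-per-minute peak the executor needs to start about 1000 / 60 ≈ 16.7 iterations per second, and a VU can only run one iteration at a time, so a common rule of thumb is

    VUs needed ≈ rate per second × average iteration duration in seconds

If one order submission takes around 2 s end to end, that works out to roughly 16.7 × 2 ≈ 34 concurrently busy VUs at the peak, which is why maxVUs: 1 leads to so many dropped iterations.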

However, it’s possible that the number of VUs is not enough (e.g., with maxVUs: 1) to sustain the iteration rate you have specified. In that case, k6 drops the iterations it cannot start in time and logs warnings to the output, like Insufficient VUs, reached 1 active VUs and cannot initialize more.
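As a sketch of how you might give the executor that headroom (the VU numbers are purely illustrative and should be tuned to how long one iteration actually takes against your endpoint; the URL and payload are hypothetical placeholders):

import http from 'k6/http';

export const options = {
  scenarios: {
    new_order: {
      executor: 'ramping-arrival-rate',
      startRate: 0,
      timeUnit: '1m',
      // Enough VUs for the expected peak rate (see the rule of thumb above).
      preAllocatedVUs: 40,
      // Extra headroom in case iterations run slower than expected.
      maxVUs: 100,
      stages: [
        { target: 500, duration: '10m' },
        { target: 500, duration: '5m' },
        { target: 1000, duration: '10m' },
        { target: 1000, duration: '5m' },
        { target: 0, duration: '15m' },
      ],
    },
  },
};

export default function () {
  // Placeholder for the real GraphQL order mutation; endpoint and query are made up.
  http.post(
    'https://example.com/graphql',
    JSON.stringify({ query: 'mutation { createOrder { id } }' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
}

With enough VUs available, dropped_iterations should stay at or near zero, and the total rate still tops out at 1,000 iterations per minute no matter how many VUs end up being used.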

We also have an open issue about improving the documentation; the discussion there may help bring more clarity on how the executor works.

I’d also be interested in extending this question to cover region-based testing. Does each region execute each stage, or is the rate distributed amongst all regions?

When it comes to multi-region testing (load zone distribution) in k6 Cloud (if that is what the question was about), the load defined by the stages is distributed across the regions according to the configured percentages. In other words, the k6 instances in each region execute a portion of the total load, but k6’s goal is still to keep the total arrival rate at the configured level.
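For example, a minimal sketch of a 50/50 split across two load zones (the zone names, labels, and percentages below are just examples, not a recommendation):

export const options = {
  ext: {
    loadimpact: {
      // Percentages must add up to 100; each entry maps a label to a load zone.
      distribution: {
        ashburn: { loadZone: 'amazon:us:ashburn', percent: 50 },
        dublin: { loadZone: 'amazon:ie:dublin', percent: 50 },
      },
    },
  },
  scenarios: {
    // ... the same ramping-arrival-rate scenario as above
  },
};

With that split and your scenario, each zone would generate roughly 500 iterations per minute at the peak, and the total across both zones would still be the configured 1,000 per minute.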

Let me know if that helps,
Cheers

Thank you, Oleg! That answered my questions.