We’re testing a GraphQL endpoint that receives orders. I’m not interested in the number of concurrent users, simply in how many orders per minute we’re sending to the endpoint.
I have the following scenario:
```javascript
scenarios: {
  new_order: {
    executor: 'ramping-arrival-rate',
    startRate: 0,
    timeUnit: '1m',
    preAllocatedVUs: 1,
    maxVUs: 1,
    stages: [
      { target: 500, duration: '10m' },
      { target: 500, duration: '5m' },
      { target: 1000, duration: '10m' },
      { target: 1000, duration: '5m' },
      { target: 0, duration: '15m' },
    ],
  },
},
```
I’ve noticed a large number of dropped iterations:
```
dropped_iterations…: 745 0.724711/s
```
If I add more VUs, will the iterations be distributed amongst all VUs, or will each VU execute the stages independently?
For example, if I use the above scenario with 2 VUs, will it ramp up to a maximum of 1000 iterations per minute in total, or 1000 per VU (i.e. 2000/min for 2 VUs)?
If more VUs equate to a more stable test, then I’d like to scale up; however, if it means multiplying the rate by the number of VUs, I’d like to keep it at 1.
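For context, this is what I’m considering trying: the same stages, but with a larger VU pool (the 50/100 values are just placeholders I picked, not recommendations):

```javascript
// Same scenario, with more VUs pre-allocated so iterations aren't dropped.
// preAllocatedVUs / maxVUs values here are arbitrary guesses.
scenarios: {
  new_order: {
    executor: 'ramping-arrival-rate',
    startRate: 0,
    timeUnit: '1m',
    preAllocatedVUs: 50,
    maxVUs: 100,
    stages: [
      { target: 500, duration: '10m' },
      { target: 500, duration: '5m' },
      { target: 1000, duration: '10m' },
      { target: 1000, duration: '5m' },
      { target: 0, duration: '15m' },
    ],
  },
},
```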
Thank you!