Ways to manage the pacing of iterations for a user

In another tool I use an option called pacing (the exact behavior of this option varies from tool to tool): it creates a delay between the start of two consecutive iterations. The delay counter starts as soon as the iteration starts, so the pacing delay between two iterations acts as a maximum, unless the iteration itself exceeds the delay, in which case the next iteration simply starts when the previous one has finished. I don’t see which combination of executor/sleep function would allow me to reproduce that. Any ideas?

Regards,
Arnaud

Hi,

I use a custom function to do this; it takes a cycle time and a start time, then returns the correct wait time:

// Returns how long (in seconds) to wait so that the whole iteration
// takes cycleTime milliseconds in total.
export function pacing_9999(cycleTime, startTime) {
  const endTime = Date.now();
  const duration = endTime - startTime;
  // time left in the cycle, converted to seconds because sleep() expects seconds
  let waitTime = (cycleTime - duration) / 1000;
  // don't return a negative wait if the iteration already overran the cycle
  return Math.max(0, waitTime);
}

const cycleTime = 10000;
const startTime = Date.now();

// make a call

sleep(pacing_9999(cycleTime, startTime));
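
For completeness, here is a minimal self-contained sketch of how that pattern can fit into a default function; the endpoint, VU count, duration, and 10s cycle time are placeholders rather than anything from the posts above, and the remaining wait is clamped at 0 in case the iteration runs long:

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,          // placeholder VU count
  duration: '5m',   // placeholder duration
};

const CYCLE_TIME_MS = 10000; // hypothetical 10s pacing target per iteration

export default function () {
  const startTime = Date.now();

  // the actual work of the iteration (placeholder request)
  http.get('https://test.k6.io/');

  // sleep for whatever is left of the cycle; 0 if the iteration already ran longer
  sleep(Math.max(0, (CYCLE_TIME_MS - (Date.now() - startTime)) / 1000));
}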

Ok, it also indirectly confirms that there is no native way to do it :slight_smile: Thank you very much for this code.

Regards,
Arnaud

@BobRuub, @duarnad, can you explain what your use cases are and why k6’s arrival-rate executors are not sufficient?

I’m asking because, if this is something sufficiently common, we can maybe add a new option to one of the existing executors (or add a new executor) that covers these use cases natively.


Hello @ned, I would say this is a kind of compromise where you try to stay as close as possible to your target transaction throughput, without killing the platform if some requests don’t complete in the expected time. Also, adding VUs, as the arrival-rate executors do, may add some “passive” load, for example the memory needed to keep each session.
I have worked in a team dedicated to performance analysis for a long time, and I believe this is the way most tests are configured there, so having this would have the additional benefit of making it easier to move to k6 from a “mental model” perspective.
Not to say this is the “right way” to do things :slight_smile:
I’m going to forward the topic to my team to check if they want to contribute more background on this.

Regards,
Arnaud

:thinking: ok, I can see the benefit here. However, can’t you achieve something very similar with the built-in arrival-rate executors? Say, for example, that you expect your system to respond in less than 1s, in general, and you want to test if it will endure 10 requests per second. If you have this scenario:

import exec from 'k6/execution';
import { sleep } from 'k6';

export const options = {
    scenarios: {
        sc1: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 10,
            maxVUs: 15,
            rate: 10,
            duration: '30s',
        },
    },
}

export default function () {
    console.log(`[t=${(new Date()) - exec.scenario.startTime}ms] VU{${exec.vu.idInTest}} START iteration ${exec.scenario.iterationInTest}`);
    sleep(0.25 + Math.random()); // in lieu of an actual request, just waits between 0.25s and 1.25s
    console.log(`[t=${(new Date()) - exec.scenario.startTime}ms] VU{${exec.vu.idInTest}} END iteration ${exec.scenario.iterationInTest}`);
}

rate: 10 means that k6 will try to make 10 iterations/s (because the default timeUnit is 1s). More specifically, k6 will try to start a new iteration every 100ms. And, if your system behaves as expected, the 10 preAllocatedVUs will be enough to handle the load test. However, if your system under test starts to slow down, k6 won’t ever make more than 10 RPS.

Optionally, if you specify maxVUs, like I have in the script above, k6 may allocate some more VUs mid-test (up to 5 more, in my example). This allows you to have a bit of wiggle room if your system is a bit slower than expected, or there are some temporary bumps. However:

  • you can see that something was potentially wrong, because k6 will emit the dropped_iterations metric if it has to initialize these extra VUs (or, in general, if there isn’t a free VU when it needs to start an iteration), and you can add a threshold on that metric to alert you
  • maxVUs: 15 will still limit k6 to a maximum of 15 in-flight HTTP requests, even if the system under test slows way down, so you won’t completely overwhelm it with k6

The important bit is that you won’t overwhelm your system, but you’ll still know if something is wrong and you don’t have to calculate custom sleep times. See what happens when you increase the sleep() time, to simulate a slower server.
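
To make the threshold idea concrete, here is a hedged sketch of how a dropped_iterations threshold could be attached to the scenario above; the exact threshold value is an assumption, pick whatever tolerance makes sense for your test:

export const options = {
    // assumption: treat any dropped iteration as a test failure
    thresholds: {
        dropped_iterations: ['count<1'],
    },
    scenarios: {
        sc1: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 10,
            maxVUs: 15,
            rate: 10,
            duration: '30s',
        },
    },
}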

Yes, thank you. Funnily enough, a colleague was answering me along the same lines on our internal forum. I was overlooking the maxVUs directive, which keeps this from being unbounded. So with your explanation, I indeed don’t think something new is needed. It would be interesting to know if @BobRuub has another point of view?

Regards

Happy to hear that :tada:

Just to be clear, if you don’t specify maxVUs, it will be as if you specified maxVUs = preAllocatedVUs. That is, k6 won’t initialize VUs mid-test, it will only use the pre-allocated ones.
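
As a small illustration of that equivalence (the scenario names and the startTime offset are just for the example), these two scenarios use VUs in exactly the same way:

export const options = {
    scenarios: {
        // no maxVUs given: k6 will only ever use the 10 pre-allocated VUs
        implicit: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 10,
            rate: 10,
            duration: '30s',
        },
        // equivalent, with the default spelled out explicitly
        explicit: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 10,
            maxVUs: 10,
            rate: 10,
            duration: '30s',
            startTime: '30s', // run after the first scenario so they don't overlap
        },
    },
}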

We are migrating some complex scripts from JMeter, and while it has a plethora of throughput and pacing tools and options (probably too many!), we’ve always found it much clearer, and easier from a maintenance viewpoint, to use code for this.

I’m new to k6 and will certainly have a go at the options you described. Now’s the time to do it, while we are setting standards.

Hi,

Been playing with it a bit this morning and can sort of get it working using the rate option.

My scenario would be there are 10 users and each user needs to do two transactions (iterations) per minute.

export const options = {
    vus: 10,
    duration: '1m',
};

and then, setting the cycleTime to 30000 and calling pacing_9999 as described above, I get:

iteration_duration...: avg=30s min=29.99s med=30s max=30s p(90)=30s p(95)=30s
iterations...........: 20  0.333298/s

By using the following

export const options = {
    scenarios: {
        sc1: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 10,
            maxVUs: 10,
            rate: 1,
            duration: '1m',
        },
    },
}

I get a similar result

iteration_duration...: avg=30s min=30s med=30s max=30s p(90)=30s p(95)=30s
iterations...........: 20  0.2857/s

Which is good :slight_smile: and if I set the duration to 5m I get

iteration_duration...: avg=30s min=29.99s med=30s max=30s p(90)=30s p(95)=30s
iterations...........: 100 0.314458/s

However, if I want to have, say, 75 iterations per 5 minutes, there appears to be no way of lowering the rate below 1.

Using the pacing_9999 code with the cycleTime set to 45000 would achieve this, or close to it anyway.

iteration_duration...: avg=45s min=44.99s med=45s max=45s p(90)=45s p(95)=45s
iterations...........: 70  0.222213/s

If I try to set the rate any lower than 1, or to anything other than a whole number, I get an error:

ERRO[0000] json: cannot unmarshal number 0.7 into Go struct field Options.scenarios of type int64 

These are contrived examples at low rates, which tends to skew the results. I will attempt this on a much larger VU test over a much longer period and see if I can get it working.

Might it just be a matter of adjusting the number of VUs up and down as well, to meet transaction goals?

Just to clarify something, you want all 10 VUs to simultaneously start one transaction (iteration) each at t=0s, and another transaction each at t=30s? This is a hard requirement, i.e. you don’t want a new iteration started every 3 seconds? If so, Enhancement to arrival rate executors · Issue #1386 · grafana/k6 · GitHub will be required to satisfy your use case.

Judging by the iteration_duration, you are still using your pacing_9999 function to sleep() in the arrival-rate scenario. You most likely shouldn’t do that at the end of an arrival-rate iteration. The whole point is that k6 won’t reuse a VU if it doesn’t need to, so you don’t need to keep it busy with sleep().

Use the timeUnit option of the arrival-rate executors; it is the denominator for the rate: k6 will try to start rate iterations per timeUnit period. By default timeUnit is 1s, so it’s rate iterations per second, but if you want “75 iterations per 5 minutes”, you can just have rate: 75, timeUnit: '5m' in the arrival-rate options :slight_smile:
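
Applied to your earlier script, that could look roughly like this (the VU counts are simply carried over from your example above):

export const options = {
    scenarios: {
        sc1: {
            executor: 'constant-arrival-rate',
            preAllocatedVUs: 10,
            maxVUs: 10,
            rate: 75,
            timeUnit: '5m', // 75 iterations spread evenly over 5 minutes, i.e. one every 4s
            duration: '5m',
        },
    },
}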

Thanks for the feedback.

Those options tied together look like they could make things much easier.