After running at ~600 RPS on Linux, k6 is getting killed (out of memory) after about 15 minutes.

export let options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 610,
      timeUnit: '1s', // 610 iterations per second, i.e. 610 RPS
      duration: '180m',
      preAllocatedVUs: 200, // how large the initial pool of VUs should be
      maxVUs: 550, // if the preAllocatedVUs are not enough, we can initialize more
    },
  },
};

Hi @Diksha,

Can you provide a little bit more information:

  1. how did it get killed? Does dmesg say it was killed because it ran out of memory?
  2. how much memory is available for k6?
  3. do you get a lot of messages (maybe add -v) saying that k6 can't hit 610 iterations per second, and that it keeps initializing new VUs?
  4. what other options are you using, if any? Any outputs?

I expect that you are running out of memory because k6 keeps all of its metric samples in memory, with Trends being especially bad in that regard.

For a 3-hour test I would seriously recommend using one of the outputs and adding --no-summary and --no-thresholds if that is possible. Otherwise, yes, I would expect k6 to use too much memory keeping all the stats around.
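As a sketch of what that could look like on the command line (results.json is a hypothetical output path; any k6 output would work):

```shell
# Stream metric samples to a JSON file instead of accumulating them
# all in memory, and skip the end-of-test summary and threshold
# evaluation, which are what force k6 to retain every sample.
k6 run --out json=results.json --no-summary --no-thresholds script.js
```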

But the above might not be your case, so it's better to first check whether you are actually in that situation :).

Hi @mstoykov

[root@localhost ingest-loader]#
constant_request_rate [---------] 440/440 VUs 0h11m44.7s/3h0m0s 610 iters/s

  1. how much memory is available for k6?
     It shows "Out of memory" in the k6 status.

What should I do about this?

I would run it with /usr/bin/time -v k6 run script.js to see how it's doing, but also use top/htop/atop to watch the memory usage from the system side.

If you are running out of memory, I would recommend reading Running large tests and trying the approaches explored there to reduce the memory consumption. You should probably start with discardResponseBodies, as that is usually the problem …
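A minimal sketch of that options change (assuming your script's checks don't need to read the response bodies):

```javascript
// k6 options: drop response bodies as soon as responses arrive so
// they are never buffered in memory. Individual requests that do
// need a body can opt back in per request with
// { responseType: 'text' }.
export let options = {
  discardResponseBodies: true,
};
```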


--no-thresholds is sometimes (when running tests against live systems) not acceptable, because the thresholds are used to prevent overload. Maybe an option to periodically flush stats (and dump them to stdout)?
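One way to approximate periodic flushing while keeping thresholds active is to stream raw samples to an output, which k6 flushes continuously, and disable only the end-of-test summary. This is a sketch; metrics.json is a hypothetical path, and thresholds will still require k6 to aggregate some data in memory:

```shell
# Thresholds stay enabled (they guard the live system); raw samples
# are written out continuously via the JSON output instead of being
# retained for the end-of-test summary.
k6 run --out json=metrics.json --no-summary script.js
```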