Howdy, I have a question about your recommendation on using checks and groups, specifically in the situation below:
In the k6 docs: "Checks and groups
k6 records the result of every individual check and group separately. If you are using many checks and groups, you may consider removing them to boost performance."
In my tests, I have a total of 26 groups and 28 checks, and I organize my tests into 7 scenarios running the “per-vu-iterations” executor.
Each scenario can have up to 4 groups, and each group can have 1-2 checks.
When I run 300 VUs, with 10k total iterations and about 1.5 million total http_reqs, I hit an OOM on the machine running my tests around 30 minutes into the run (it's a fairly large AWS EC2 instance with 32 GB of memory). The first thing I did was refactor to avoid unique URLs by explicitly tagging identical requests. I'm just wondering if this is enough to cure my OOM issue. Do I have too many checks/groups, per the docs quoted above? If so, what number of checks/groups is recommended?
Thanks in advance
Sorry for the slow reply.
Are you using an output for collecting metrics? Keep in mind that every added check and/or group generates a dedicated time series. This may or may not be the cause of your memory usage. So many variables can lead to an OOM that it's really hard to guess without seeing the test script.
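To make the cardinality point concrete, here is some back-of-the-envelope arithmetic in plain JavaScript (not a k6 script, and not k6's actual internal accounting; the request/check/group counts come from the post above, and the metric count is a rough assumption):

```javascript
// Rough estimate of distinct time series: in most outputs, each unique
// combination of metric name + tag values becomes its own series.
function estimateSeries({ metrics, urls, checks, groups }) {
  // http_* metrics get roughly one series per unique URL (the `name` tag),
  // and the `checks` metric gets one per check/group combination.
  return metrics * urls + checks * groups;
}

// If every request has a unique URL (e.g. an ID in the path),
// cardinality explodes with the number of requests:
const unbounded = estimateSeries({ metrics: 8, urls: 1_500_000, checks: 28, groups: 26 });

// After collapsing identical requests under one `name` tag per endpoint,
// it stays tiny no matter how many requests run:
const bounded = estimateSeries({ metrics: 8, urls: 26, checks: 28, groups: 26 });

console.log(unbounded, bounded); // millions vs. under a thousand
```

The point is that 26 groups and 28 checks contribute only a few hundred series, while per-request unique URLs scale with traffic, so the URL tagging is usually the fix that matters.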
If you can’t post an anonymized version of your script, I suggest trying to reproduce the problem at a smaller scale of everything (fewer checks, fewer groups, fewer VUs, fewer requests). Start from a point where memory usage is stable, then gradually increase all of them until you find the breaking point or a noticeable change in memory usage.
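As a starting point for that scaled-down reproduction, the options block could look something like this (a sketch only; the scenario name is made up, and the numbers are just a small fraction of the 300 VUs / 10k iterations from the original run):

```javascript
// Scaled-down k6 options mirroring the original setup: same
// "per-vu-iterations" executor, far fewer VUs and iterations.
export const options = {
  scenarios: {
    smoke_checkout: {            // hypothetical scenario name
      executor: 'per-vu-iterations',
      vus: 10,                   // instead of 300
      iterations: 5,             // per VU, instead of the full run
      maxDuration: '5m',
    },
  },
};
```

Once memory is stable at this scale, bump VUs, iterations, checks, and groups one at a time to isolate which dimension drives the growth.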
I hope it helps.
Thanks for the tips. It turns out I messed up the refactoring; I did not read the docs properly.
I was tagging things incorrectly:
my_tag: "prevent OOM",
Instead of overriding the name
name: "prevent OOM", << override the name!
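For anyone landing here later, the difference in a full request looks roughly like this (a sketch; the URL and the `PostsItemURL` value are illustrative, but overriding the built-in `name` tag via the request's `tags` param is the documented k6 mechanism for URL grouping):

```javascript
import http from 'k6/http';

export default function () {
  // A custom tag does NOT affect URL grouping, so every unique URL
  // still produces its own time series:
  http.get(`https://test.k6.io/posts/${__VU}`, {
    tags: { my_tag: 'prevent OOM' }, // wrong for this purpose
  });

  // Overriding the built-in `name` tag collapses all of these
  // requests into a single series, regardless of the ID in the path:
  http.get(`https://test.k6.io/posts/${__VU}`, {
    tags: { name: 'PostsItemURL' },
  });
}
```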