I have a test in k6 that consumes excessive, constantly increasing memory unless I specify --no-thresholds and --no-summary. The test only uses 50 VUs, but memory ramps up linearly over the course of a couple of hours to over 25GB, at which point I run into OOM problems.
I’ve been through the “Running large tests” guide and tried all of its suggestions - the only thing with a significant impact on memory consumption for my test is adding --no-thresholds and --no-summary.
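For context, the settings from that guide I’ve already applied look roughly like this (script name and duration are placeholders for my actual test):

```javascript
// script.js - memory-reduction settings along the lines of the
// "Running large tests" guide, which I have already tried
export const options = {
  vus: 50,
  duration: "2h",
  discardResponseBodies: true, // don't keep response bodies in memory
};

// Run with the leaner JS compatibility mode and no ANSI colouring:
//   k6 run --compatibility-mode=base --no-color script.js
```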
I believe this is down to the nature of the URLs used in the test - I have approximately 30 unique URL “patterns”, but each of them contains a dynamically generated numeric segment with 10 million possible values.
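To illustrate why this matters, here is a sketch in plain JavaScript (not a k6 script; the patterns are stand-ins for my real ones): the set of distinct raw URLs - and therefore the set of distinct url tag values k6 has to track for thresholds and the summary - grows with nearly every request, while the grouped pattern count stays fixed.

```javascript
// Substitute a random id into a URL pattern, as my test does.
function makeUrl(pattern, id) {
  return pattern.replace("{id}", String(id));
}

const patterns = ["/api/items/{id}", "/api/orders/{id}"]; // stand-ins for the ~30 patterns
const rawUrls = new Set(); // distinct concrete URLs seen
const names = new Set();   // distinct grouped "name" values seen

for (let i = 0; i < 10000; i++) {
  const pattern = patterns[i % patterns.length];
  const id = Math.floor(Math.random() * 10_000_000); // 10M possible values
  rawUrls.add(makeUrl(pattern, id)); // keeps growing, request after request
  names.add(pattern);                // stays at the number of patterns
}

console.log(rawUrls.size > 9000); // true - almost every request is a "new" URL
console.log(names.size);          // 2
```

Over a multi-hour run at 50 VUs this distinct-URL set keeps growing, which matches the linear memory ramp I see.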
I’ve tried using URL grouping - both explicitly, with the “name” tag in the request params, and with the http.url template literal. I can see the grouping working, but it doesn’t affect the memory growth.
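For concreteness, the two grouping approaches I tried look roughly like this (the hostname and path are simplified placeholders for my real patterns):

```javascript
import http from "k6/http";

export default function () {
  const id = Math.floor(Math.random() * 10_000_000);

  // 1. Explicit name tag on the request:
  http.get(`https://example.com/api/items/${id}`, {
    tags: { name: "https://example.com/api/items/{id}" },
  });

  // 2. The http.url tagged template, which sets the name tag
  //    to the un-substituted template string automatically:
  http.get(http.url`https://example.com/api/items/${id}`);
}
```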
(Side note - I don’t think the “name” tag works properly for http.del() requests - I can only get it to work by setting a global name tag within options.)
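Sketched, the workaround I ended up with for that (again with placeholder URLs):

```javascript
import http from "k6/http";

// The per-request name tag did not seem to take effect for http.del()
// in my tests, so I fall back to a global name tag in options:
export const options = {
  tags: { name: "DELETE /api/items/{id}" },
};

export default function () {
  const id = Math.floor(Math.random() * 10_000_000);
  // This per-request form works for http.get() but not, apparently, for http.del():
  http.del(`https://example.com/api/items/${id}`, null, {
    tags: { name: "https://example.com/api/items/{id}" },
  });
}
```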
With --no-thresholds and --no-summary set, I see no memory growth. With the output set to cloud, the performance insights show no warnings about the number of URLs or metrics, which suggests the grouping is working as expected.
However, I need the local summary to work, since my tests run for several hours, and everything else works locally except the metrics reporting.
I also attempted to output to a local InfluxDB, but the volume/cardinality of metrics quickly blows up memory and CPU there too. Looking at the raw metrics via the JSON output, I can see that although the name tag is set correctly, the unique URL is still included with every metric point - I believe this is the problem.
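This is how I’m inspecting the raw metric points (the output file name is arbitrary) - each http_req_duration point carries the full concrete url tag alongside the grouped name tag:

```shell
k6 run --out json=metrics.json script.js

# Compare distinct values of the url tag vs the name tag on request metrics:
jq -r 'select(.type == "Point" and .metric == "http_req_duration") | .data.tags.url' metrics.json | sort -u | wc -l
jq -r 'select(.type == "Point" and .metric == "http_req_duration") | .data.tags.name' metrics.json | sort -u | wc -l
```

The first count keeps growing with the run; the second stays at roughly my 30 patterns.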
I tried to manually overwrite the url tag globally with a dummy value to work around this, but it doesn’t seem to be possible. Is there any way around this, or anything else I should look at that might help the situation?
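What I tried, roughly - a global tags entry in options, hoping it would pin the url tag to a constant value (it doesn’t appear to; the per-request URL still wins):

```javascript
export const options = {
  // Attempted workaround: override the built-in url tag with a constant.
  // k6 seems to ignore this for the url system tag, since each request
  // sets its own concrete URL on the metric point.
  tags: { url: "dummy" },
};
```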