Out of memory in k6

I’m testing a k6 script with 2600 concurrent users (CCU) running for 3600s on a machine with 15 GB of RAM, but after about 20 minutes it runs out of RAM.
I already use discardResponseBodies. My flow calls about 30 APIs, and I use the Gauge, Counter, Trend, and Rate metrics for a custom report.
Is there any way to reduce the amount of CPU?

```javascript
executor: 'ramping-vus',
startTime: '0s',
startVUs: 0,
stages: [
    { duration: '1200s', target: 2600 },
    { duration: '2400s', target: 2600 },
],
gracefulRampDown: '0s',
exec: 'transferNapasAccount',
```

Hi @giadat, welcome to the community! You can refer to the Running large tests guide in the k6 documentation.

Hi @Elibarick, I have tried many ways but the RAM usage keeps increasing until it runs out. I’m already using SharedArray.

Hi @giadat

It’s unclear from the original message: you mention Out of memory but then ask Is there any way to reduce the amount of CPU? Did you mean memory?

If you have followed the advice @Elibarick shared, Running large tests, and are still hitting the same issue, can you share the (sanitized) script you are using, and the command you use to run it?

At some point, you might need to distribute the load generator, if the bottleneck is in the load generator and not the system under test. It is difficult to help without eyes on your load generator and endpoints, but we can try.

If you are streaming the results, I would also try --no-thresholds --no-summary to see if that improves the memory usage. You might have hit Reduce memory usage for long duration tests · Issue #2367 · grafana/k6 · GitHub and that could help.
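For reference, a debugging invocation along those lines might look like this (a sketch: `script.js` stands in for the actual test script, and any other flags you normally pass would stay):

```shell
# Debugging run: results still stream to the configured output, but
# k6 skips the end-of-test summary and threshold evaluation, both of
# which keep aggregated metrics in memory for the whole run.
# "script.js" is a placeholder for the real test script.
k6 run --no-thresholds --no-summary script.js
```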

Cheers!

Hi @eyeveebe,
I apologize for the unclear question; I meant memory consumption.
For security reasons I can’t share the script. My script calls about 30 APIs and selects data from a DB with about 1 million users, which I then use through a SharedArray.
I need to show the report, so I can’t use --no-thresholds --no-summary.

Hey @giadat,

> My script has about 30 api and select data from DB with about 1 million users then use shareArray

What do you mean by 30 APIs? Are you doing requests across 30 different URLs? Did you pay particular attention to the URL grouping part in the running large tests documentation?
Can you post an illustrative script of this part specifically? You don’t have to add any URL or logic dedicated to your business.

How are you pulling the data from the database? Are you using the experimental Redis module, an extension, or doing it across networking protocols like HTTP?

Note, 1 million users could be some critical amount of memory, even if you are using the SharedArray.

> I need to show the report so I can’t use --no-thresholds --no-summary

This is the most important part. You can use these flags just for debugging; you don’t have to run the test this way all the time. But please let us know the memory usage with these flags, because it helps us narrow down the potential issue: whether it is mostly related to your metrics or to the imported data.

Hi @codebien
I call 30 different URLs, and my script uses group.
I use the xk6-file extension to select data from the DB with SQL queries.

My customized report is as follows:

This is a piece of code from my script:

I tested with the same amount of data as above but with fewer URLs, and the memory consumption improved.

Do you have IDs in your URLs?

I tag URLs with tagName and flowName; there are no IDs in the URLs.