I’m testing a k6 script with 2,600 concurrent users (CCU) running for 3,600s on a machine with 15 GB of RAM, but after about 20 minutes it runs out of memory.
I use discardResponseBodies, and my script hits about 30 APIs and uses the custom metrics Gauge, Counter, Trend, and Rate for reporting.
Is there any way to reduce the amount of CPU?
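For readers following along, here is a minimal sketch of a setup like the one described, combining discardResponseBodies with the four custom metric types. The metric names, URL, and thresholds are illustrative placeholders, not the poster's actual script:

```javascript
import http from 'k6/http';
import { Counter, Gauge, Rate, Trend } from 'k6/metrics';

export const options = {
  // Drop response bodies to save memory; bodies are not needed for timing metrics.
  discardResponseBodies: true,
  vus: 2600,
  duration: '3600s',
};

// Hypothetical custom metrics for the report (names are placeholders).
const loginDuration = new Trend('login_duration');
const errorRate = new Rate('error_rate');
const requestCount = new Counter('request_count');
const activeVUs = new Gauge('active_vus');

export default function () {
  const res = http.get('https://example.com/api/login'); // placeholder URL
  loginDuration.add(res.timings.duration);
  errorRate.add(res.status !== 200);
  requestCount.add(1);
  activeVUs.add(__VU);
}
```

Each custom metric adds time series that k6 keeps in memory for the whole run, which is relevant to the memory discussion below.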
Your original message is unclear: you mention running out of memory, but then ask “Is there any way to reduce the amount of CPU?” Did you mean memory?
If you have followed the advice @Elibarick shared in Running large tests and are still hitting the same issue, can you share the (sanitized) script you are using and the command you use to run it?
At some point you might need to distribute the load generation, if the bottleneck is in the load generator and not in the system under test. It is difficult to help without eyes on your load generator and endpoints, but we can try.
Hi @eyeveebe,
I apologize for the unclear question; I meant memory consumption.
For security reasons I can’t share the script. It hits about 30 APIs and selects data from a DB with about 1 million users, which I then load via SharedArray.
I need to show the report, so I can’t use --no-thresholds --no-summary.
My script has about 30 APIs and selects data from a DB with about 1 million users, then uses SharedArray
What do you mean by 30 APIs? Are you making requests to 30 different URLs? Did you pay particular attention to the URL grouping section of the running large tests documentation?
Can you post an illustrative script of this part specifically? You don’t have to include any URL or logic specific to your business.
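To illustrate what URL grouping means here: without it, every unique URL produces its own set of time series, so requests with dynamic path segments can explode metric cardinality and memory. A hedged sketch (the URL pattern and tag name are placeholders):

```javascript
import http from 'k6/http';

export default function () {
  // Without a `name` tag, /users/1, /users/2, ... each become a separate
  // time series in k6's in-memory metric store.
  const id = Math.floor(Math.random() * 1000000);
  http.get(`https://example.com/users/${id}`, {
    // All requests share one metric series under this name,
    // keeping cardinality (and memory) low.
    tags: { name: 'GET /users/:id' },
  });
}
```

With 30 endpoints and dynamic IDs, grouping like this can make a large difference in memory usage over a long run.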
How are you pulling the data from the database? Are you using the experimental Redis module, an extension, or fetching it over a network protocol like HTTP?
Note that 1 million users could consume a significant amount of memory, even if you are using SharedArray.
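For reference, SharedArray keeps a single read-only copy of the data shared across all VUs instead of one copy per VU, but the single copy of 1 million records can still be large. A minimal sketch, assuming the data was exported to a JSON file (the filename and field names are placeholders):

```javascript
import { SharedArray } from 'k6/data';

// The function runs once; its result is parsed and shared read-only
// across all VUs, so memory does not multiply with VU count.
const users = new SharedArray('users', function () {
  return JSON.parse(open('./users.json')); // placeholder file
});

export default function () {
  // Pick a random user record for this iteration.
  const user = users[Math.floor(Math.random() * users.length)];
  // ... use e.g. user.username / user.password in requests
}
```

If the full dataset is too large, sampling a subset of the 1 million users into the file is one way to test whether the imported data is the memory driver.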
I need to show the report so I can’t use --no-thresholds --no-summary
This is the most important part. You can use these flags just for debugging; you don’t have to run the test this way all the time. But knowing the memory usage with these flags would help us narrow down the potential issue: whether it is mostly related to your metrics or to the imported data.
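Concretely, the debugging experiment would look something like this (script name is a placeholder); compare peak memory between the two runs:

```shell
# Debug run: skip threshold evaluation and the end-of-test summary,
# which reduces the metric processing k6 does in memory.
k6 run --no-thresholds --no-summary script.js

# Normal run, for comparison.
k6 run script.js
```

If memory stays flat with the flags but grows without them, the metrics pipeline (custom metrics, per-URL series) is the likely culprit; if it grows in both runs, the imported data is the more likely cause.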