Problems with a 650 MB data set in k6 Cloud


Could you please help us find a solution?
In our use case we pre-generated 650 MB of user tokens and want to use them during a load test.
But we are having trouble uploading that amount of data to k6 Cloud.

First we tried loading the file at the init stage with SharedArray, but the cloud does not accept uploads larger than 50 MB.
Second, we uploaded the data file to a CDN and added a setup() function that downloads it with http.get and passes the data to the VU functions. But the cloud responded with “The k6 process in instance #0 (amazon:de:frankfurt) was killed, likely because it used too much memory”.
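For reference, a minimal sketch of what that setup()-based attempt looks like (the URLs are placeholders). The catch is that k6 copies the whole value returned from setup() into every VU, so the full data set ends up in memory many times over:

```javascript
import http from 'k6/http';

// Placeholder URL - the real file lives on our CDN.
const DATA_URL = 'https://cdn.example.com/tokens.json';

export function setup() {
  // Downloads the full 650 MB file once per test run...
  const res = http.get(DATA_URL);
  return { tokens: res.json() };
}

export default function (data) {
  // ...but the setup() result is copied into every VU,
  // so memory usage explodes and the instance gets killed.
  const token = data.tokens[Math.floor(Math.random() * data.tokens.length)];
  http.get('https://test.example.com/api', {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```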

What are other options?


Hey Danil,

First of all, a 650 MB file of tokens sounds like an excessive amount for any use case.
There are a couple of bigger constraints on our cloud side that would prevent you from doing this in your tests:

  • Memory: each VU in your test run loads this file into its own memory, which quickly overloads the load generator instance, so the test gets aborted by our backend.
  • Archive size: there is a limit on the size of the archive you can upload to our cloud to execute a k6 Cloud test run. At the time of writing it is 50 MB. If you significantly reduced the size of your data file (to, say, 40 MB) and used the SharedArray object you mentioned, memory utilisation might actually be low enough to complete the test run successfully.
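A minimal sketch of that reduced-file variant, assuming a trimmed file named tokens.json is bundled with the script and stays under the archive limit. SharedArray keeps a single read-only copy per load generator instance instead of one copy per VU:

```javascript
import http from 'k6/http';
import { SharedArray } from 'k6/data';

// One shared, read-only copy per load generator instance.
// 'tokens.json' is an assumed file name, bundled with the script;
// keep it well under the 50 MB archive limit.
const tokens = new SharedArray('tokens', function () {
  return JSON.parse(open('./tokens.json'));
});

export default function () {
  // Each VU reads from the shared array without duplicating it.
  const token = tokens[Math.floor(Math.random() * tokens.length)];
  http.get('https://test.example.com/api', {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```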

Generally speaking, if you really need that many tokens in your test run, one of the better solutions might be to create an endpoint that each VU can call inside its iteration to fetch a token.
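A sketch of that idea, assuming you host a hypothetical token service yourself (the endpoint and URLs below are placeholders, not anything k6 provides):

```javascript
import http from 'k6/http';

// Hypothetical token service you would host yourself.
const TOKEN_ENDPOINT = 'https://tokens.example.com/next-token';

export default function () {
  // Fetch one pre-generated token per iteration instead of
  // shipping the 650 MB file to k6 Cloud at all.
  // Trade-off: this adds one extra request's latency per iteration.
  const token = http.get(TOKEN_ENDPOINT).body;
  http.get('https://test.example.com/api', {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```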

Support also mentioned another approach that might be better, since it would not add extra latency to every iteration. I haven’t tried it yet.


How did you eventually solve the problem of uploading 650 MB of tokens to k6 Cloud?

We have a similar problem.