Reusability of different binFile types & sizes

Hi team,

Currently we are trying to create a performance test for an upload functionality. We want the test to upload files of different sizes and types — for example 1mb, 10mb, and 30mb — simultaneously, targeting the same upload. I have managed to do it for separate files and that works fine, but for a real-world scenario, doing it simultaneously would be more realistic.

The functionality has a couple of endpoints that need to be targeted with different file sizes:

```javascript
var content_type = 'image/jpg';
var filename = '../images/1mb';
const binFile = open(filename, 'b');

var assetId;
var presignedUrl;
var processId;
var params;

// Step 1:
http.get(`${__ENV.URL}/upload?filesize=${binFile.byteLength}&filename=${filename}&contenttype=${content_type}`, params);
```

This returns a presigned URL for step 2.

```javascript
// Step 2:
http.put(presignedUrl, binFile, params);

// Step 3:
http.put(`${__ENV.URL}/finishupload/${processId}`, {}, params);
```
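Putting the three steps together, a minimal k6 sketch for a single file might look like this. The response field names (`presignedUrl`, `processId`) are assumptions about what the `/upload` endpoint returns — adjust them to your actual API:

```javascript
import http from 'k6/http';
import { check } from 'k6';

const content_type = 'image/jpg';
const filename = '../images/1mb';
// open() only works in the init context, so the file is loaded here.
const binFile = open(filename, 'b');

export default function () {
  const params = { headers: { 'Content-Type': content_type } };

  // Step 1: ask the backend for a presigned URL.
  const res = http.get(
    `${__ENV.URL}/upload?filesize=${binFile.byteLength}&filename=${filename}&contenttype=${content_type}`,
    params
  );
  // Assumed response shape:
  const { presignedUrl, processId } = res.json();

  // Step 2: PUT the binary payload to the presigned URL.
  http.put(presignedUrl, binFile, params);

  // Step 3: tell the backend the upload is finished.
  const finish = http.put(`${__ENV.URL}/finishupload/${processId}`, {}, params);
  check(finish, { 'upload finished': (r) => r.status === 200 });
}
```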

After some experimenting on my side, I am running into issues with the init context. If I create a function so I can re-use these variables for different file sizes and content types in my test, k6 complains about the init context, because you can't call `open()` from any other context.
I tried without assigning a value to `filename`, and I also tried loading the strings `../images/1mb`, `../images/10mb`, etc. via a SharedArray, but no luck. I also checked some of the suggestions here on the forums.

How should I approach reusability in my tests for different file types, sizes, and content types?
And what is the best practice for this use case?

Thanks in advance!

Hi @rodyb,

I would recommend reading Unique file upload per iteration - #3 by mstoykov. While it is not the same as your case, the current general advice on "how to upload a lot of different files" is somewhere between:

  1. don’t
  2. try to generate them when needed instead of loading them
  3. figure some other way to load test that doesn’t involve it
  4. combine multiple hacks to make it more viable/possible
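Point 2 can be sketched in plain JavaScript: instead of `open()`ing real files in the init context, build a binary payload of whatever size the iteration needs. This is only viable when the server doesn't validate the actual file contents — the byte pattern below is arbitrary filler:

```javascript
// Build an ArrayBuffer of the requested size on the fly, as a stand-in
// for a real file. k6's http.put() accepts an ArrayBuffer as a body.
function makePayload(sizeInBytes) {
  const buf = new Uint8Array(sizeInBytes);
  for (let i = 0; i < sizeInBytes; i++) {
    buf[i] = i % 256; // cheap filler pattern, not a valid image
  }
  return buf.buffer;
}

// e.g. a 1 MB payload generated per iteration instead of a preloaded file:
const oneMb = makePayload(1024 * 1024);
```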

Arguably you can figure out some way that works for you, but in general:

  1. you need to load files in the init context with pure k6. You can use xk6 and build an extension to circumvent this - but that likely won’t help you much - see point 3.
  2. Unless you only load some files in some VUs - essentially sharding the files across VUs - you will need at least as much memory as the combined size of the files multiplied by the number of VUs. In reality it will be 2-3x that due to inefficiencies.
  3. As explained in detail in this issue, the problems are many and can’t really be fixed (IMO) with simple patches to the k6/http API. Again, depending on your needs and willingness to work around it, you can probably write a lot of this as an xk6 extension, although that extension will need to emit http_req* metrics in order to be at least somewhat useful, IMO.
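As a rough illustration of point 2, the memory arithmetic can be sketched like this (the 3x overhead factor is the ballpark figure from above, not an exact number):

```javascript
// Rough memory estimate for loading files in the init context:
// every VU gets its own copy of every file it open()s, and in practice
// real usage is a few times higher due to copies made per request.
function estimateMemoryBytes(fileSizesBytes, vus, overheadFactor = 3) {
  const totalPerVu = fileSizesBytes.reduce((sum, s) => sum + s, 0);
  return totalPerVu * vus * overheadFactor;
}

const MB = 1024 * 1024;
// 1mb + 10mb + 30mb files across 200 VUs with 3x overhead:
const estimate = estimateMemoryBytes([1 * MB, 10 * MB, 30 * MB], 200);
```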

Depending on all of your circumstances … you might just be okay with loading 10 files totalling up to 200mb and load testing with 20 VUs. Or maybe you can load test with 200 VUs while sharding the files between them in some tricky way, so that it works for you and you still don’t need hundreds of GBs of memory.
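A sketch of the sharding idea: pick one file path per VU, so each VU only `open()`s a single file instead of all of them. This relies on `__VU` being usable in the init context — it is 0 during the initial parse, so the helper clamps it to a valid index:

```javascript
// Assign one file path per VU. __VU is 1-based per VU and 0 during
// the initial parse, so clamp it before taking the modulo.
const filePaths = ['../images/1mb', '../images/10mb', '../images/30mb'];

function pathForVu(vuId, paths) {
  const idx = Math.max(vuId - 1, 0) % paths.length;
  return paths[idx];
}

// In a k6 script the init context would then only load that one file:
//   const binFile = open(pathForVu(__VU, filePaths), 'b');
```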

The exact specifics will depend solely on your exact situation.

Hope this helps you, and anyone who has done something similar is welcome to give their input.