How big is the .csv file? If it’s too big for every VU to hold a copy in memory, then, unfortunately, there is currently no efficient way to do this. You should probably use curl to download the file before your k6 run, and then use open() and a SharedArray in the init context so that all VUs efficiently share a single copy.
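A minimal sketch of that workflow, assuming the file has been downloaded as `data.csv` beforehand and parsed with the papaparse jslib (file name, bucket URL, and column names are illustrative). This runs under the k6 runtime, not plain Node:

```javascript
// test.js -- run with: k6 run test.js
// Beforehand, e.g.: curl -o data.csv "https://my-bucket.s3.amazonaws.com/data.csv"
import { SharedArray } from 'k6/data';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// The SharedArray callback runs only once; every VU then reads the same
// read-only copy instead of each holding its own parsed array in memory.
const rows = new SharedArray('csv rows', function () {
  // open() is only available in the init context.
  return papaparse.parse(open('./data.csv'), { header: true }).data;
});

export default function () {
  // Pick one row per iteration (illustrative selection strategy).
  const row = rows[Math.floor(Math.random() * rows.length)];
  console.log(JSON.stringify(row));
}
```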
The CSV file is not big; however, our automated process requires reading the data from a file in S3.
We mimicked SharedArray-like behavior, i.e. converting the CSV to a JSON array, and for the iterations we send the information from Lambda to CodeBuild as a parameter. scenario.iterationInTest helped us read all of the data.
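The CSV-to-JSON-array conversion step described above can be sketched in plain JavaScript (the function name and the assumption of a simple header row with no quoted/embedded commas are mine, not from the original post):

```javascript
// Convert CSV text into a JSON array of row objects, assuming the first
// line is a header and no field contains embedded commas or quotes.
function csvToJsonArray(csvText) {
  const lines = csvText.trim().split(/\r?\n/);
  const headers = lines[0].split(',');
  return lines.slice(1).map((line) => {
    const values = line.split(',');
    const row = {};
    headers.forEach((h, i) => { row[h] = values[i]; });
    return row;
  });
}

// Illustrative input:
const csv = 'user,password\nalice,s3cret\nbob,hunter2';
console.log(JSON.stringify(csvToJsonArray(csv)));
// → [{"user":"alice","password":"s3cret"},{"user":"bob","password":"hunter2"}]
```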
When it comes to downloading files directly from S3, you might find our AWS extension useful. As @ned mentioned, just be sure to avoid downloading/copying big files outside of the init context, as it might bloat the k6 process’ memory consumption.
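A hedged sketch of fetching an object with the k6 AWS jslib from VU code; the import URL, library version, and exact getObject signature are assumptions here, so check the extension’s documentation for your version (bucket and key names are illustrative):

```javascript
import { AWSConfig, S3Client } from 'https://jslib.k6.io/aws/0.11.0/s3.js';

// Credentials/region taken from environment variables (assumed setup).
const awsConfig = new AWSConfig({
  region: __ENV.AWS_REGION,
  accessKeyId: __ENV.AWS_ACCESS_KEY_ID,
  secretAccessKey: __ENV.AWS_SECRET_ACCESS_KEY,
});
const s3 = new S3Client(awsConfig);

export default async function () {
  // Runs in VU code, not the init context, since the client makes
  // HTTP requests under the hood.
  const obj = await s3.getObject('my-bucket', 'data.csv');
  console.log(`fetched ${String(obj.data).length} bytes`);
}
```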
May I ask how to download big files within the init context? When I try to use the AWS extension to populate a SharedArray (which is allowed only in the init context), I get a “Making http requests in the init context is not supported” error.