Getting error "unexpected HTTP error from https://ingest.k6.io/v1/archive-upload: 413 Request Entity Too Large" when running test on k6 cloud with json files and 160 users


It works fine locally: using SharedArray and the open() method, I read the JSON file data and send it in the API body.

Hi @mittal

Welcome to the community forum :wave:

Most likely you are hitting a limit on the maximum body size of the k6 cloud ingest service. We currently accept 100 MB. I don’t think this is documented and we’ll fix that: Document k6 cloud file size limits · Issue #1057 · grafana/k6-docs · GitHub. Thanks for pointing this out.

Some alternatives, depending on your case, could be:

  • If you are not using all the data in the JSON files, trim them to include only the fields you actually use.
  • Generate some of the data dynamically from the script itself. One option is faker.js: Data Parameterization (see the sketch after this list).
  • Store the data somewhere else and read it from the script in a buffered mode, to avoid exhausting the k6 agent’s memory.
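For the second bullet, here is a minimal sketch of generating a record per VU at runtime instead of opening it from a file; the field names and values are purely illustrative and not tied to any real data model:

import encoding from 'k6/encoding';
import exec from 'k6/execution';

// Illustrative only: build one synthetic record per VU instead of loading a JSON file per user.
function buildRecord() {
  const vuId = exec.vu.idInTest; // unique per VU across the whole test
  return {
    serialNumber: `SN-${100000 + vuId}`,
    modelNumber: `MODEL-${(vuId % 10) + 1}`,
    // Stand-in for real file contents: base64-encode some generated bytes.
    base64String: encoding.b64encode(`generated-payload-for-vu-${vuId}`),
  };
}

Each VU can then build its payload inside the iteration instead of loading it in the init context, which keeps the uploaded archive small.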

Is this your case, that the JSON file is bigger than 100 MB? If you can share more details of your case (or a sanitized script), we can probably advise further on the options.

Cheers!

Hi @eyeveebe,

Thanks for the reply. The JSON files are not bigger than 100 MB; each one is only 4818 KB, but there are 160 JSON files, one for each of the 160 users. All the data in the JSON files is required, so there is nothing extra I can remove. Also, faker.js will not be of much use, as this data is tied to the DB.

I have 160 JSON files, each 4818 KB, one per user.

Here is the script:

// Imports reconstructed from the calls used below (the original post omits them).
import { group } from 'k6';
import { SharedArray } from 'k6/data';
import { Options } from 'k6/options';
import exec from 'k6/execution';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';
import { uuidv4 } from 'https://jslib.k6.io/k6-utils/1.4.0/index.js';

const dataFilesList = papaparse.parse(open('../DataFiles/data_files.csv')).data;
const dataFiles = new SharedArray('pump data', () => {
  const data: any[] = [];
  dataFilesList.forEach((fileName: any) => {
    const filePath = `../DataFiles/${fileName}`;
    data.push(JSON.parse(open(filePath)));
  });
  return data;
});

export const options: Options = {
  ext: {
    loadimpact: {
      name: '*****',
      projectID: ****,
      distribution: {
        loadZoneUS: {
          loadZone: '******',
          percent: 100
        }
      }
    }
  },
  scenarios: {
    default: {
      executor: 'per-vu-iterations',
      vus: dataFiles.length, // VU count must not exceed the number of unique data files in dataFilesList
      iterations: 1,
      maxDuration: '1h'
    }
  }
};

export default function uploadData(accessToken: string): void {

  // Pick one file per user; exec.vu.idInTest is unique per VU across the test and starts at 1.
  const fileObject = dataFiles[exec.vu.idInTest - 1];
  const serialNumber = fileObject?.serialNumber;
  const modelNumber = fileObject?.modelNumber;
  const base64String = fileObject?.base64String;
  const Id = uuidv4();

  group('web_upload', () => {
    // Passing the above variables in the API body.
  });
}
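(Aside for readers: a hypothetical, self-contained sketch of what such an upload call could look like. The endpoint URL, headers, and checks below are illustrative assumptions, not the poster’s actual API; only the variable names come from the script above.)

import http from 'k6/http';
import { check } from 'k6';

// Hypothetical helper: POST one record as JSON with a bearer token.
// URL, headers, and expected status codes are placeholders, not the real API.
function postRecord(accessToken: string, id: string, serialNumber: string, modelNumber: string, base64String: string): void {
  const res = http.post(
    'https://example.com/api/upload', // placeholder endpoint
    JSON.stringify({ id, serialNumber, modelNumber, base64String }),
    { headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${accessToken}` } }
  );
  check(res, { 'upload accepted': (r) => r.status === 200 || r.status === 201 });
}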

Hi @mittal

Can you run k6 archive script.js and let us know how big the resulting tar file is? We suspect the final file being uploaded is more than 100 MB.
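For reference, k6 archive bundles the script plus every file it open()s into a single tar (named archive.tar by default), so you can check its size locally:

# Build the archive and inspect its size
k6 archive script.js
ls -lh archive.tar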

Thanks!

@eyeveebe
Its size is 770,946 KB, which is roughly 753 MB. Is there any way to increase this limit?


Thanks @mittal. We are discussing this with the k6 core and cloud teams, and I’ll keep you posted.


Hi @mittal

There are some issues you would face even if we bypassed the file size limit. Some are described here: Problems and deficiencies around uploading big files with k6 · Issue #2311 · grafana/k6 · GitHub. We’ve also seen that, with 10 times fewer VUs, your test was already using 10-12% of memory, so we might hit memory issues running in the cloud.

The best option at this point is to contact our k6 support team directly; they can elaborate further and advise.

Cheers!

Thank you @eyeveebe. I will talk with the support team.
