Parameterizing data

I’m trying to use K6 instead of JMeter to carry out our performance tests, but I’m having problems specifically with the use of .csv files in the k6 calls.

Currently we need to use ‘shared-iterations’ for all our requests to control throughputs.

Each request has a separate .csv file with the inputs that will be used.

With exec.scenario.iterationInTest I can capture the overall test iteration, but I cannot capture the specific iteration count of each request.

Is there any way to control this counter?
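
For reference, this is roughly how I am reading that counter today (just a minimal sketch):

import exec from 'k6/execution';

export default function () {
  // This gives me the overall iteration number of the scenario...
  console.log(`iteration in test: ${exec.scenario.iterationInTest}`);
  // ...but I have not found an equivalent counter for each individual request.
}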

Hi @vitormarinheiro16, welcome to the community forum :tada: !

“but I cannot capture the specific iteration of each request.”

I am not getting what you are trying to say here. Can you provide an example script and explain what you want to happen with it?


Sorry, let me try to explain better.

Currently, we are using JMeter to perform our performance tests, and we are migrating this architecture to use K6.

We have a .jmx file containing a Thread Group with a fixed number of Threads (80), and below it several Throughput Controllers, each configured with an execution percentage relative to the Thread Group.

Within each Throughput Controller, we have a request that will be executed at a certain percentage relative to the Thread Group, and each request uses a .csv data file as input. With each execution of this request, it will use the next line of the CSV data.

To replicate this architecture in K6, we are creating a class for each request with its information, similar to the one below:

import { check } from 'k6';
import http from 'k6/http';
import Utils from "../../utils/utils.js";

export default class req_test_extrac_01 {

  constructor() {
    this.params = Utils.baseHeaders();
  }
  
  exec() {

    // NOTE: dataRefer is hardcoded below; this is the value we want to take from the request's CSV
    let bodyData = `
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:aut="http://test.com.br">
      <soapenv:Header/>
      <soapenv:Body>
        <aut:req_test_extrac>
            <aut:dataRefer>06/09/2023</aut:dataRefer>
        </aut:req_test_extrac>
      </soapenv:Body>
    </soapenv:Envelope>`;

    let response = http.post(`${Utils.getServerAPI()}${Utils.getPathCC()}`, bodyData, this.params);

    check(response, { 'status was 200': (r) => r.status == 200 });
  }
}
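
For illustration, what we would like is for each execution of this class to receive the next line of its own CSV, roughly like this (the csvRow parameter and the dataRefer column name are just placeholders for the idea, not working code for our setup):

import { check } from 'k6';
import http from 'k6/http';
import Utils from "../../utils/utils.js";

export default class req_test_extrac_01 {

  constructor() {
    this.params = Utils.baseHeaders();
  }

  // csvRow would be the line of req_test_extrac_01.csv that corresponds to the
  // current execution of *this* request; obtaining that index is the missing piece.
  exec(csvRow) {

    let bodyData = `
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:aut="http://test.com.br">
      <soapenv:Header/>
      <soapenv:Body>
        <aut:req_test_extrac>
            <aut:dataRefer>${csvRow.dataRefer}</aut:dataRefer>
        </aut:req_test_extrac>
      </soapenv:Body>
    </soapenv:Envelope>`;

    let response = http.post(`${Utils.getServerAPI()}${Utils.getPathCC()}`, bodyData, this.params);

    check(response, { 'status was 200': (r) => r.status == 200 });
  }
}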

With this class created, we use a FullLoad.test.js file to control all executions following the pattern below:

First, we define the execution percentage of each request relative to the total:

let groups = [
    { name: 'req_test_extrac_01', target: 0.10269602 },
    { name: 'req_test_extrac_02', target: 0.5 },
    { name: 'req_test_extrac_03', target: 0.5 },
    { name: 'req_test_extrac_04', target: 0.02613226 },
    { name: 'req_test_extrac_05', target: 0.005804558 },
    { name: 'req_test_extrac_06', target: 0.31 },
    { name: 'req_test_extrac_07', target: 0.5 },
    { name: 'req_test_extrac_08', target: 0.5 },
];

Then, we normalize them so that the sum of all targets equals 1:

const totalTarget = groups.reduce((acc, group) => acc + group.target, 0);
groups.forEach(group => {
    group.target /= totalTarget;
});

We create a function to initialize instances of all request classes:

function createGroupInstance(groupName) {
    switch (groupName) {
        case 'req_test_extrac_01':
            return new req_test_extrac_01();
        case 'req_test_extrac_02':
            return new req_test_extrac_02();
        case 'req_test_extrac_03':
            return new req_test_extrac_03();
        case 'req_test_extrac_04':
            return new req_test_extrac_04();
        case 'req_test_extrac_05':
            return new req_test_extrac_05();
        case 'req_test_extrac_06':
            return new req_test_extrac_06();
        case 'req_test_extrac_07':
            return new req_test_extrac_07();
        case 'req_test_extrac_08':
            return new req_test_extrac_08();
        default:
            return null;
    }
}

We create the execution options using the shared-iterations executor with vus = 10 and an effectively unlimited number of iterations (the idea is for our tests to run as many iterations as possible within a fixed period of time):

export let options = {
    scenarios: {
        contacts: {
            executor: 'shared-iterations',
            vus: 10,
            iterations: 999999999999999,
            maxDuration: '10m',
        }
    }
};

Finally, we use the default function to execute the exec() method of each class according to its execution percentage of the total iterations:

export default function() {

    let random = Math.random();
    let selectedGroup = null;

    // Capture the group to be executed
    for (const groupInfo of groups) {
        if (random <= groupInfo.target) {
            selectedGroup = groupInfo.name;
            break;
        } else {
            random -= groupInfo.target;
        }
    }

    // Create an instance of the group to be executed
    const groupInstance = createGroupInstance(selectedGroup);

    if (groupInstance) {

        // `group` is imported from the 'k6' module at the top of FullLoad.test.js.
        // The request classes don't define a `name` property, so we label the
        // group with the selected group's name instead.
        group(selectedGroup, () => {
            groupInstance.exec();
        });
    }
}

Now, consider that we have one .csv file for each request (req_test_extrac_01, req_test_extrac_02, req_test_extrac_03, etc.). When we attempt to read these .csv files using SharedArray and pass the data to each request, we don’t know which index each individual request is currently at, so we can’t read the corresponding line from its .csv file. :melting_face:

I’ve tried using exec.scenario.iterationInTest, but it returns the total iteration counter, and I need a way to count the runs of each individual class.exec().
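
For illustration, this is roughly what we attempted in FullLoad.test.js (only the relevant parts are shown; the papaparse jslib import and the CSV path are placeholders):

import { SharedArray } from 'k6/data';
import exec from 'k6/execution';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// One SharedArray per request, parsed once in the init context
const dataExtrac01 = new SharedArray('req_test_extrac_01_data', function () {
  return papaparse.parse(open('../data/req_test_extrac_01.csv'), { header: true }).data;
});

export default function () {
  // ... select the group and create groupInstance exactly as before ...

  // Problem: exec.scenario.iterationInTest counts the iterations of *all* groups
  // together, so it is not the index of this particular request into its own .csv
  const row = dataExtrac01[exec.scenario.iterationInTest % dataExtrac01.length];
  // groupInstance.exec(row);  <-- this is where we would like to pass the correct line
}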

tl;dr: I think the concepts don’t map that well between the two tools. I would recommend making each of the groups its own scenario and using exec.scenario.iterationInTest from k6/execution, doing only one request (of that specific group/class) per iteration.

I am still not certain I got everything right, but currently you are using one scenario that randomly switches between the groups it is going to run.

While this works for a lot of cases, and I would not classify it as a bad idea most of the time, scenarios have been available for a while now (they were added 3 years ago) and it is preferable to just use them in cases like this, unless having a single scenario is beneficial in some way (which it rarely is).

That way you can use per-scenario counters (which is what you need here), and you also avoid code that switches between the different “scenarios” you want to run; you just have separate ones.

In your particular case you seem to have some weights which, if dynamic, will likely still require some code that generates the specific scenarios before setting them in the options:

export const options = {
  scenarios: calculateScenarios(),
  // rest of options
}
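
For example, something roughly along these lines, completely untested, where calculateScenarios just maps each of your groups to its own scenario (the vus/iterations split below is only a stand-in for whatever weighting you actually need):

import exec from 'k6/execution';

const groups = [
  { name: 'req_test_extrac_01', target: 0.10269602 },
  { name: 'req_test_extrac_02', target: 0.5 },
  // ... the rest of your groups ...
];

function calculateScenarios() {
  const total = groups.reduce((acc, g) => acc + g.target, 0);
  const scenarios = {};
  for (const g of groups) {
    scenarios[g.name] = {
      executor: 'shared-iterations',
      exec: g.name,                                  // one exported function per group
      vus: Math.max(1, Math.round(10 * (g.target / total))),
      iterations: 1000000,
      maxDuration: '10m',
    };
  }
  return scenarios;
}

export const options = {
  scenarios: calculateScenarios(),
};

// One exported function per scenario.
export function req_test_extrac_01() {
  // This counter is per scenario, so it can be used directly as the index
  // into this request's own CSV file.
  const index = exec.scenario.iterationInTest;
  // do exactly one req_test_extrac_01 request here, using `index` to pick the CSV line
}

export function req_test_extrac_02() {
  const index = exec.scenario.iterationInTest;
  // same idea for the second group, and so on
}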

I am not going to try to figure out the exact math you will need here, as even now I don’t know the exact reasons for vus: 10 and iterations: many 9s :slight_smile:

Hope this helps you!

Sorry for the delay in my response. I spent some time studying and trying to find a way to do this architectural migration to K6, but I still haven’t managed it.

I think I didn’t make my objective very clear with this test.

Let me try to clarify things a little:

Imagine that today I have around 100 endpoints that I need to test at the same time, each with a different throughput, to validate that they compete with each other harmoniously. In other words, I can’t just test endpoint 1, then test endpoint 2, and so on. I need all 100 endpoints to be tested simultaneously.

Each endpoint has a corresponding .csv file with several lines, which will be used to complement the body of that endpoint.

For example:

EndPoint1 has a .csv file named EndPoint1.csv containing 100 lines, each line holding a value that one execution of this endpoint will use to complement its request body. In other words, I need to know exactly what the execution index of that endpoint is in order to read the corresponding line from its .csv.
EndPoint2 also has a .csv file named EndPoint2.csv containing 100 lines that will be used to complement its body.

Keep in mind that the endpoints will have different throughputs, so the number of requests to EndPoint1 will be different from EndPoint2. Therefore, I cannot use a single counter for all executions of all endpoints, as that way I would not be able to read the correct line of each .csv for each request.

I know it’s a very difficult scenario to replicate, so I’m evaluating whether K6 can really replicate this JMeter architecture.
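
Based on the per-scenario suggestion above, is something like the sketch below the right direction? (The endpoint names, URLs, CSV paths and rates here are placeholders I made up; I haven’t validated them against our real throughputs.)

import http from 'k6/http';
import exec from 'k6/execution';
import { SharedArray } from 'k6/data';
import papaparse from 'https://jslib.k6.io/papaparse/5.1.1/index.js';

// One CSV per endpoint, parsed once in the init context and shared between VUs
const endPoint1Data = new SharedArray('EndPoint1', function () {
  return papaparse.parse(open('./data/EndPoint1.csv'), { header: true }).data;
});
const endPoint2Data = new SharedArray('EndPoint2', function () {
  return papaparse.parse(open('./data/EndPoint2.csv'), { header: true }).data;
});

export const options = {
  scenarios: {
    // Each endpoint gets its own scenario, so their throughputs stay independent
    endPoint1: {
      executor: 'constant-arrival-rate',
      rate: 20,                  // placeholder throughput: 20 iterations per second
      timeUnit: '1s',
      duration: '10m',
      preAllocatedVUs: 10,
      exec: 'endPoint1',
    },
    endPoint2: {
      executor: 'constant-arrival-rate',
      rate: 5,                   // placeholder throughput: 5 iterations per second
      timeUnit: '1s',
      duration: '10m',
      preAllocatedVUs: 5,
      exec: 'endPoint2',
    },
  },
};

export function endPoint1() {
  // The counter is per scenario, so each execution of EndPoint1 maps to one line of EndPoint1.csv
  const row = endPoint1Data[exec.scenario.iterationInTest % endPoint1Data.length];
  http.post('https://example.com/EndPoint1', JSON.stringify(row));
}

export function endPoint2() {
  const row = endPoint2Data[exec.scenario.iterationInTest % endPoint2Data.length];
  http.post('https://example.com/EndPoint2', JSON.stringify(row));
}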