How do VUs and iterations work when running a scenario?

Hello,
Can someone please help me understand how VUs and iterations work when running a single scenario? I have a CSV data file with 5 rows of data. My first use case is below:

export const options = {
  scenarios: {
    login: {
      executor: 'per-vu-iterations',
      vus: data.length,
      iterations: 1,
      maxDuration: '5s',
    },
  },
};

When I log the individual response time for each POST request to the API, I see that the later requests have much higher response times.

INFO[0002] Individual API Response Time (ms): HTN MostRecent 99.7327 source=console
INFO[0003] Individual API Response Time (ms): HTN OnThiazide 1501.4011 source=console
INFO[0003] Individual API Response Time (ms): HTN PriorBlood 1618.4642 source=console
INFO[0004] Individual API Response Time (ms): HTN HasEssential 1875.9909 source=console
INFO[0004] Individual API Response Time (ms): HTN HasOsteo 1934.0077 source=console

Any idea why the response time for the fourth request is almost 2000 ms, compared to 99 ms for the first one? The API is the same in both requests.
My second use case:

export const options = {
  scenarios: {
    login: {
      executor: 'per-vu-iterations',
      vus: data.length,
      iterations: data.length,
      maxDuration: '5s',
    },
  },
};

Below are the response times:

INFO[0002] Individual API Response Time (ms): HTN MostRecentWeight 111.5304 source=console
INFO[0003] Individual API Response Time (ms): HTN OnThiazideOrThiazideTypeDiuretic 1259.0893 source=console
INFO[0003] Individual API Response Time (ms): HTN MostRecentWeight 66.1254 source=console
INFO[0003] Individual API Response Time (ms): HTN HasEssentialTremor 1616.3457 source=console
INFO[0003] Individual API Response Time (ms): HTN PriorBlood 1702.0189 source=console
INFO[0004] Individual API Response Time (ms): HTN HasOsteo 2019.2183 source=console
INFO[0004] Individual API Response Time (ms): HTN MostRecent 61.5539 source=console
INFO[0004] Individual API Response Time (ms): HTN OnThiazide 229.8114 source=console
INFO[0005] Individual API Response Time (ms): HTN HasEssential 462.7554 source=console
INFO[0005] Individual API Response Time (ms): HTN HasOsteo 280.4483 source=console
INFO[0005] Individual API Response Time (ms): HTN PriorBlood 713.5124 source=console
INFO[0005] Individual API Response Time (ms): HTN MostRecent 67.969 source=console
INFO[0006] Individual API Response Time (ms): HTN OnThiazide 193.0096 source=console
INFO[0006] Individual API Response Time (ms): HTN HasEssential 449.4907 source=console
INFO[0007] Individual API Response Time (ms): HTN HasOsteo 480.4304 source=console
INFO[0007] Individual API Response Time (ms): HTN PriorBlood 479.7807 source=console

As you can see, the later requests have low response times, but since the iteration count matches the data length, my report has multiple response times for the same data set.
How can I modify the second use case so that my report records the response time only once, with the low response time?
Thanks in advance.

Hello,
Would anyone be able to take a look at this? I appreciate the help in advance.

Thanks

Hi @alex977 !

Let me try to help you!

Can someone please help me understand how VUs and iterations work when running a single scenario? I have a CSV data file with 5 rows of data.

It’s important to note that how VUs and iterations work depends on the type of executor. For your options it will work like this:

export const options = {
  scenarios: {
    login: {
      executor: 'per-vu-iterations',
      vus: data.length,
      iterations: 1,
      maxDuration: '5s',
    },
  },
};

Using these options, 5 (data.length) VUs will each execute 1 iteration, for a total of 5 iterations, with a maximum duration of 5 seconds.

export const options = {
  scenarios: {
    login: {
      executor: 'per-vu-iterations',
      vus: data.length,
      iterations: data.length,
      maxDuration: '5s',
    },
  },
};

Using these options, 5 (data.length) VUs will each execute 5 (data.length) iterations, for a total of 25 iterations, with a maximum duration of 5 seconds.
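In other words, for the per-vu-iterations executor the total number of started iterations is simply vus × iterations (per VU). A quick plain-JavaScript sketch of that arithmetic, using data.length = 5 as in the question:

```javascript
// For the 'per-vu-iterations' executor, each of the `vus` virtual users
// runs `iterations` iterations on its own, so the total is the product.
function totalIterations(vus, iterationsPerVU) {
  return vus * iterationsPerVU;
}

const dataLength = 5; // 5 rows in the CSV file

// First use case: vus = data.length, iterations = 1
console.log(totalIterations(dataLength, 1)); // 5

// Second use case: vus = data.length, iterations = data.length
console.log(totalIterations(dataLength, dataLength)); // 25
```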

A full explanation is in the documentation for that executor.

Any idea why the response time for the fourth request is almost 2000 ms, compared to 99 ms for the first one? The API is the same in both requests.

It’s hard to make an assumption without knowing the specifics of the request (payload, etc.), but a general assumption is that the API becomes slower when it serves concurrent requests :thinking:
Can I ask what exactly you are logging?

How can I modify the second use case so that my report records the response time only once, with the low response time?

If you want to have this in the logs (since I see the INFO[0004] log records), that’s not possible: logs do no aggregation (they don’t know what is slower or faster).

Can I ask what you want to achieve? Why do you want to record the best response time? It’s also important to know when and how the system becomes slower, so you know your limits.
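That said, if the goal is only to keep the lowest recorded duration per request name, that aggregation would have to happen outside k6, for example in a small post-processing step over the logged values. A plain-JavaScript sketch (the sample values below are illustrative, loosely taken from the logs above):

```javascript
// Group logged durations by request name and keep the minimum per group.
const samples = [
  { name: 'HTN MostRecent', ms: 111.5 },
  { name: 'HTN MostRecent', ms: 66.1 },
  { name: 'HTN OnThiazide', ms: 1259.1 },
  { name: 'HTN OnThiazide', ms: 229.8 },
];

function minByName(samples) {
  const best = {};
  for (const { name, ms } of samples) {
    if (!(name in best) || ms < best[name]) best[name] = ms;
  }
  return best;
}

console.log(minByName(samples));
// { 'HTN MostRecent': 66.1, 'HTN OnThiazide': 229.8 }
```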

Hello @olegbespalov ,
Thank you for taking time to respond to the thread.
Here is a little description of what I am trying to achieve.

  • I am sending a POST request with a combination of two IDs from the CSV data file as the payload.
  • I want to be able to capture the response time when each of those ID combinations is sent as the payload.
  • I have used the [vu.idInTest - 1] value so that each VU picks only one row of data as the payload. Is there a better approach to picking unique data sequentially from the CSV file and then iterating through it in a loop?
  • When I run the POST request individually in a separate test, I get the correct response times I am seeking. Please see the attached screenshot (api_timing_1 pic attached).
  • When I run the test with a scenario and the ‘per-vu-iterations’ executor, the individual response times from the API are very high (api_timing_2 pic attached).
  • I understand it might be because all the VUs are running concurrently, causing the server resources to spike and thus my response times to be high. Are there any workarounds to resolve this?
Please do let me know if this confused you enough :wink:
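As a side note on the [vu.idInTest - 1] pattern: it maps each VU's 1-based ID to a 0-based row index, so every VU reads a different row of the data. A minimal plain-JavaScript sketch of that indexing (the rows array is a stand-in for the parsed CSV):

```javascript
// In k6, vu.idInTest is 1-based; subtracting 1 gives a 0-based row index,
// so VU 1 reads row 0, VU 2 reads row 1, and so on.
const rows = [
  { id1: 'a1', id2: 'b1' },
  { id1: 'a2', id2: 'b2' },
  { id1: 'a3', id2: 'b3' },
];

function rowForVU(idInTest) {
  return rows[idInTest - 1];
}

console.log(rowForVU(1)); // { id1: 'a1', id2: 'b1' }
console.log(rowForVU(3)); // { id1: 'a3', id2: 'b3' }
```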

Well, the goal of load testing is to discover that under load the server responds slowly, and then fix it :smiley: So the proper fix is on your API side: find the bottleneck and fix it so that concurrent requests don’t slow the service down.

If you still want to get the “better” response times, I’m afraid the only way to do that is to make the requests sequentially, also giving the server some time to recover.

import { sleep } from 'k6';
import http from 'k6/http';

export const options = {
  scenarios: {
    login: {
      executor: 'shared-iterations',
      vus: 1,
      iterations: 1,
      maxDuration: '10m', // enough time to perform all requests
    },
  },
};

// demo.json
// [{"tag":"foo"},{"tag":"lorem"},{"tag":"ipsum"}]
const demo = JSON.parse(open('./demo.json'));

export default function () {
  for (let i = 0; i < demo.length; i++) {
    const myTag = demo[i].tag;

    const res = http.get('http://test.k6.io', { tags: { name: myTag } });

    console.log('a request ', myTag, ' took ', res.timings.duration);

    // give the server enough time to recover
    sleep(1);
  }
}

Is that what you’re looking for?

Thank you so much @olegbespalov.
This does solve my requirement, which I know is a little weird :slight_smile:

@alex977 no worries :smiley: happy to help!