Hi @syedrakeentest11,
Welcome to our community forums!
Is this correct, or should I update it? Can anyone help me?
Although I can’t fully see your script (perhaps for privacy/confidentiality reasons), I think your approach is correct: with 100 VUs and 100 iterations, k6 should run those 100 iterations concurrently, simulating 100 concurrent users doing whatever is defined in your function.
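In case it helps, here’s a minimal sketch of how those options could look (assuming you’re using the plain `vus`/`iterations` shortcut rather than a `scenarios` block):

```javascript
export const options = {
  // 100 VUs sharing 100 iterations: each VU picks up roughly one iteration,
  // so ~100 of them run concurrently.
  vus: 100,
  iterations: 100,
};
```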
If you want to make sure that every iteration uses different user credentials, you’ll need an array with them (at least 100), and you can pick one per iteration by using exec.scenario.iterationInTest, as described here and here.
The only thing I’d like to suggest is to revisit the use of SharedArray, so you avoid allocating one copy of the entire JSON file in memory for each VU and instead share a single copy among all of them. If this is a small/short test and/or the file is small, it won’t be a problem, but in other scenarios the current approach might consume a huge amount of memory.
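Putting both points together, a rough sketch could look like this (the file name, endpoint, and payload shape are just placeholders, so adapt them to your script):

```javascript
import http from 'k6/http';
import exec from 'k6/execution';
import { SharedArray } from 'k6/data';

// A single copy of the file, shared among all VUs.
const users = new SharedArray('users', function () {
  return JSON.parse(open('./users.json')); // e.g. [{ "user": "...", "pass": "..." }, ...]
});

export const options = { vus: 100, iterations: 100 };

export default function () {
  // iterationInTest is unique across the whole test, so with >= 100 entries
  // every iteration gets different credentials.
  const user = users[exec.scenario.iterationInTest % users.length];

  http.post('https://test.example.com/login', JSON.stringify(user), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```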
Please take a look at the metrics shown in the image. I would like to know which metrics are important for evaluating the login results, specifically the minimum, maximum, and average values. Is this approach correct?
That said, let’s jump into the metrics discussion, although I’d say it really depends on the SUT (system under test) and on what each iteration looks like (how many steps are involved).
For instance, if the entire function performs the login workflow (regardless of how many HTTP requests that takes), and that’s what you want to evaluate, then iteration duration is likely the metric to pay attention to. If not, you may need to tag your requests and/or look at the SUT’s own metrics to understand how the different endpoints involved in the operation are being impacted and which one (if any) represents a bottleneck.
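For the tagging part, it could be something along these lines (the URLs and names are invented, just to illustrate the idea):

```javascript
import http from 'k6/http';

export default function () {
  // Give each step of the login workflow its own name tag, so the results
  // can be broken down per request instead of one aggregated number.
  http.get('https://test.example.com/login-page', { tags: { name: 'GetLoginPage' } });

  http.post('https://test.example.com/api/login', '{"user":"u","pass":"p"}', {
    headers: { 'Content-Type': 'application/json' },
    tags: { name: 'PostCredentials' },
  });
}
```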
Regarding min, max, avg, etc., I’d suggest revisiting what those terms mean statistically (especially percentiles) and looking for some docs around that, since I don’t think this is the right place to discuss it. Generally speaking, though, the near-end percentiles (p90, p95, or even p99) are a good starting point to look at, because they represent the behavior experienced by the vast majority of users in this case.
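If you want those percentiles to show up in the end-of-test summary, and/or to set pass/fail criteria on them, you can do something like this (the threshold values are arbitrary examples, not recommendations):

```javascript
export const options = {
  // Include the higher percentiles in the summary output.
  summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(95)', 'p(99)'],
  thresholds: {
    // Fail the test if the slowest 5% of all requests get too slow...
    http_req_duration: ['p(95)<800'],
    // ...or look at a single tagged step, e.g. the login request from the previous snippet.
    'http_req_duration{name:PostCredentials}': ['p(99)<1500'],
  },
};
```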
Additionally, for further scenarios, I need a token. I am creating a file for each scenario. How can I store the token for future use?
As far as I know, there’s currently no standard (built-in) solution for this, unless you really want to perform all the operations involved as part of each iteration.
What I’d suggest is to consider an external storage, like Redis (GitHub - grafana/xk6-redis: A k6 extension to test the performance of a Redis instance). I know that eventually GCk6 will have a solution that’s transparent to users, but it’s not there yet.
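As a rough sketch of how that could look with the built-in k6/experimental/redis module (which is based on that extension); the connection string, key name, endpoints, and credentials below are all made up:

```javascript
import http from 'k6/http';
import redis from 'k6/experimental/redis';

const client = new redis.Client('redis://localhost:6379');

export default async function () {
  // Try to reuse a token stored by a previous scenario/iteration.
  // get() rejects if the key doesn't exist yet, hence the catch.
  let token = await client.get('auth_token').catch(() => null);

  if (!token) {
    const res = http.post('https://test.example.com/login', JSON.stringify({ user: 'u', pass: 'p' }), {
      headers: { 'Content-Type': 'application/json' },
    });
    token = res.json('token');

    // Keep it around for one hour (expiration is in seconds).
    await client.set('auth_token', token, 3600);
  }

  http.get('https://test.example.com/protected', {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```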
Any help to clarify this confusion would be greatly appreciated!
I hope all those explanations help a bit. If you feel you still have some doubts, just tell me and I’ll try to resolve them.
Regards!