I have various groups of APIs from different companies (more than 500 APIs in total). My question is: how should I handle requesting these APIs? Should I use batch requests, or is there another approach I'm missing?
Batch requests are sent together, and k6 waits for all of them to finish. I tried batch requests, but they gave me skewed waiting times. I switched to ungrouped requests, kept together in a scenario/iteration, and that gave me better results. You could also use groups: one group per company's APIs, or groups based on the utility of those APIs. You need to define the objective of the test before you can choose the direction you want to go in.
Hi @nafiseh, welcome to the community forum!
Is that 500 APIs with multiple endpoints or 500 API endpoints?
I would argue that if it's the former, you should break them into different scenarios.
You might want to use the `name` tag as well, so that they don't show up as 500+ truly different URLs, as that will likely be a problem for most outputs.
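To illustrate, here is a minimal sketch of tagging parameterized URLs with a shared `name` (the host, path, and `id` are placeholders, not from the thread):

```javascript
import http from 'k6/http';

export default function () {
  // `id` varies per request, but the `name` tag keeps all of these
  // requests under a single metric series instead of one per URL.
  const id = Math.floor(Math.random() * 1000);
  http.get(`https://api.example.com/users/${id}`, {
    tags: { name: 'GET /users/{id}' },
  });
}
```

Without the tag, every distinct URL becomes its own time series, which can overwhelm metric outputs.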
As @aakash.gupta mentions, batch means that you make all the requests at the same time, but for various reasons there are limits:
- `batch` - how many requests from a single batch a VU will start simultaneously; the rest are started as earlier ones finish. This defaults to 20.
- `batchPerHost` - the same, but per host. Defaults to 6.
This is mostly to align with what browsers do. Also, it's possible that some servers won't be okay with 500+ simultaneous requests from the same client.
You can increase those limits, but that likely won't match what a real user would be doing.
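For reference, a sketch of raising those limits in the k6 options object (the values here are illustrative, not recommendations):

```javascript
export const options = {
  // Max parallel requests started per http.batch() call per VU (default 20).
  batch: 50,
  // Max parallel requests per host within a batch (default 6,
  // mirroring typical browser per-host connection limits).
  batchPerHost: 10,
};
```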
I would argue that what you use depends heavily on what the real situation is - after all, that is what you want to test for ;).
@aakash.gupta what kind of skewed waiting time are you talking about? AFAIK `batch` and `rps` do not indicate how long a request waited before being started.
Thanks for your answer, I think I have found the solution.
When I used the batch approach, the average waiting time I got was 40 seconds. When I switched to groups, the average waiting time went down to the 15-second range. This made me realize that batch works differently, and now I don't use it anymore.
@aakash.gupta this seems to me like the SUT has trouble once you start making 6 concurrent requests per VU. Whether that is a problem depends on whether that is how the real system will be used. If the client is a browser, the browser does make concurrent requests; 6 is actually what a browser will do (or at least did at some point in time).
The number of failed requests wasn't different between the two approaches, only the average waiting time. Maybe I should create 2 scenarios, one batched and one not, and compare the results.
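A sketch of what that comparison could look like, assuming two scenarios that hit the same URLs batched and sequentially (the URLs and executor settings are placeholders):

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    batched: {
      executor: 'constant-vus',
      vus: 10,
      duration: '1m',
      exec: 'batched',
    },
    sequential: {
      executor: 'constant-vus',
      vus: 10,
      duration: '1m',
      exec: 'sequential',
    },
  },
};

// Placeholder endpoints; replace with the real API URLs.
const urls = ['https://test.k6.io/', 'https://test.k6.io/news.php'];

export function batched() {
  // All requests in one batch, started concurrently up to the batch limits.
  http.batch(urls.map((u) => ['GET', u]));
}

export function sequential() {
  // One request at a time per iteration.
  for (const u of urls) {
    http.get(u);
  }
}
```

Filtering the results by the built-in `scenario` tag would then let you compare waiting times between the two approaches directly.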