LoadImpact randomly aborts the same test

Hi all. We are continuing to test our platform and identify bottlenecks in our system, but one thing keeps breaking our monitoring and comparisons.

Prerequisites:
Stages:

stages: [
    { "duration": "10m", "target": 300 },
    { "duration": "10m", "target": 300 },
    { "duration": "5m", "target": 10 },
]

Locations: 2

Groups:

I’m sharing our 4 latest test runs with you:
The 1st run
/k6/runs/657058 - ABORTED (BY SYSTEM)

  1. It should have run 300 VUs for 10 minutes, but it dropped to 150 VUs 2 minutes into stage 2 and stayed like that for another 3 minutes until the system aborted it.
    The abort message says the system under test is overloaded, resulting in higher response times,
    but on the graph the response time is flat…

The 2nd run
/k6/runs/657167 – passed the full scenario (25 min)
Seeing the first run, we thought the reason stage 2 failed might be our 3rd-party endpoints, which had failed 12K times, so we commented them out, leaving only 1 failing endpoint. Now the test passes and gives us a chance to analyze the results.

The 3rd run
/k6/runs/657220 – aborted by limit

Your test is creating too many requests and metrics.
This is an anti-pattern when load testing, where you want many data points per URL and then look for trends in the data.

This issue is usually a result of query parameters that vary per request (tokens, resource or session IDs etc).
If you want to group multiple HTTP requests, we suggest you use the URL grouping feature of k6 to aggregate data into a single URL metric.

P.S. We don’t understand why this error appeared and interrupted the test session, since we only have a few groups (5 in total), and only 1 of them contains endpoints with parametrized arguments in the URL.

The 4th run
/k6/runs/657258 – passed successfully again
P.S. We decided to uncomment the failing endpoints and run again with all endpoints enabled, and to our surprise everything passed again.

Could you please help us sort this out? Thanks

Hi again @Alexander!

We took a quick look at the scripts. We see you are making batch requests and you aren’t handling the responses, so we suggest setting the option discardResponseBodies: true within the options object. In cases where you do need something from a response body, you can send a responseType param with that request. Test 657058 reaches very high memory utilization; you can see this by clicking “Analysis → add metric → Utilization”. It’s good practice to make sure you are not over-utilizing the load gens.
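For reference, a minimal sketch of what that could look like (the URLs here are just illustrative, not from your script):

```javascript
import http from "k6/http";

export let options = {
    // Drop response bodies by default to save load-generator memory.
    discardResponseBodies: true,
};

export default function () {
    // Most requests don't need the body at all, so it is discarded.
    http.get("http://test.loadimpact.com/");

    // When a specific body is needed (e.g. to extract a token),
    // opt back in for just that one request via responseType.
    let res = http.get("http://test.loadimpact.com/token", {
        responseType: "text",
    });
}
```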

As for the aborted by limit, that’s due to the unique URLs. Each unique URL is going to produce 7 individual metrics. It will also make the HTTP tab a bit hard to parse / derive trends from. Test 657220 has a bunch of requests with a count of 2. You should use tags, specifically the name tag to group those similar URLs together. This will make the min/max/percentile metrics much more meaningful for those “same” endpoints requested.

Example of grouping with the name tag. This would produce “unique” urls, but they are all the same endpoint. Do note the tag must be name in order for grouping to work.

import http from "k6/http";

for (let id = 1; id <= 600; id++) {
    // The name tag groups all 600 unique URLs under a single URL metric.
    http.get(`http://test.loadimpact.com/?ts=${id}`, { tags: { name: "test.loadimpact.com?ts" } });
}

I think addressing these two issues will clear up most things on your end.

Finally - you are of course welcome to post things here, but you can also use the “help” button within the web app to start a chat or send a ticket into our support queue. The CS team is a little bit quicker to answer those.

@mark Thanks for all your explanations here. They are very useful for us and have already made our test results much more meaningful, which helps us build ever more realistic scenarios, interpret the results correctly, and deliver a clearly wrapped-up report to management.

  1. discardResponseBodies - done. Resolved

  2. aborted by limit - I thought that enclosing requests in group() would combine them into a single unique URL. I’ve fixed this with the name tag as well, and it seems to work nicely now.

Summary
group("IdentityServer UI login endpoint", function () {
    let req2 = [{
        "method": "post",
        "url": `/Services/IdentityServer/core/login?signin=${signInId[1]}`,
        "body": {
        },
        "params": {
            "tags": { "name": "TAG - IdentityServer UI login endpoint" },
            "headers": {
            }
        }
    }];
    let res2 = http.batch(req2);
});
About where to ask questions: I thought this is the best place to ask you guys, for at least two reasons:
  1. I can provide an extensive question here, neatly organized etc.
  2. There is a tracked history of questions visible to other users, so if a similar question appears, someone will find the answer here instead of asking you the same question over and over again.
    Isn’t it so?

@Alexander Great - glad everything seems to be in order then! This isn’t the first time there has been some confusion between group() and url grouping with a name tag. I think we need to be a little bit clearer in our docs.

It’s not a problem to ask here in the open, that’s fine with us. The only reason I mention it is that the CS team I manage actively monitors those other channels. Here it could take longer for us to reply (but also maybe someone else would reply :thinking:). No big deal - send us questions any way you like. :smile:
