I hope someone can help with this issue which is driving me crazy.
While testing some newly implemented caching on an export endpoint here (locally), we ran into a weird problem with the k6 tests we use to verify the new and hopefully faster response times.
After some troubleshooting we have simplified the test as much as possible, and are now sending a single HTTP GET request (receiving about 23 MB of JSON). We have set discardResponseBodies to true. The script is about as minimal as it can get.
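For reference, the whole script is essentially just this (the URL is a placeholder for our local export endpoint):

```javascript
import http from 'k6/http';

export const options = {
  vus: 1,
  iterations: 1,
  // don't keep the ~23 MB body in memory; we only care about timings
  discardResponseBodies: true,
};

export default function () {
  // placeholder URL for the local export endpoint
  http.get('http://localhost:5000/api/export');
}
```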
It seems like k6 adds about 20 seconds of overhead to the response times. Using other tools (we have tried Postman, Insomnia, ThunderClient, curl and finally Gatling) we measured response times of 300-500 ms with a populated cache, while k6 consistently shows ~20 seconds.
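With curl, for instance, something along these lines reports the total request time (-o NUL discards the body on Windows; use -o /dev/null elsewhere, and substitute your own URL):

```
curl -s -o NUL -w "time_total: %{time_total}s\n" http://localhost:5000/api/export
```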
Requests to other endpoints in the same API are showing fast response times, so we figure it must have something to do with k6 and the 20+ megabyte response, somehow.
Any idea what could be causing this?
edit:
Here is an example of a report showing an http_req_receiving time of 19.4s:
The version of k6 might also be relevant, I guess, as well as anything else about the request. You might try --http-debug and copy the output here after some cleanup.
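For example, assuming your test file is called script.js:

```
k6 run --http-debug script.js
```

(--http-debug=full would also dump the bodies, which you probably want to avoid with a 23 MB response.)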
Do you have a proxy of some kind in between? Maybe it is configured to let Postman and co. through without throttling, but not k6?
Thanks for replying and trying to reproduce the situation.
Using the URL you provided, I can’t reproduce it either.
I have upgraded k6 to the latest version available (0.46.0) and tried again locally, and I get the same behaviour.
When I ran the test with the --http-debug flag, I saw the HTTP 200 OK come back almost immediately; it seems like it’s the data transfer stage that takes the time (although this was already indicated by the report in my OP). I don’t understand why this would take so long.
I don’t have a proxy in between that would differentiate between user agents, though the target API is run in a Docker container. This doesn’t seem to affect any of the other tools, so I’m not sure if it’s relevant.
I do see in my report that I only get about 1 MB/s transfer speed, which lines up: roughly 23 MB at ~1 MB/s accounts for the ~20 seconds of http_req_receiving.
To test this further, I will deploy the solution to an actual environment and do some further investigation.
I’m not sure how I could make a reproducible example for you, I’m afraid.
update:
I do not get the same behaviour if I serve the same JSON straight up as a static file, so something must be happening when the request goes through the ASP.NET Core API logic.
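Any trivial static file server works for that comparison; for example, a throwaway Go one, with export.json standing in for the cached payload:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// serve the current directory (containing export.json) on :8080,
	// bypassing the ASP.NET Core pipeline entirely
	http.Handle("/", http.FileServer(http.Dir(".")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```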
With you saying the words “docker” and “asp.net” … I now wonder whether you are on Windows and running some of the things through WSL and some not through it, and … there being strange performance implications.
I agree, it’s super weird and it now gets even weirder.
I am on Windows (10) and using Docker Desktop (which runs in WSL 2) to run the API container, so your assumptions are correct.
I have recreated the same export functionality (with caching) in an API written in Go, and when I run that in the same way (in a container) I do not get the same performance issue!
The first request takes ~17 seconds to fetch the data and fill the cache. I got these results (the first run is the one populating the cache):
```
C:\dev\source\flr-go> go run .
HTTP/2.0
200
23.7698516s

C:\dev\source\flr-go> go run .
HTTP/2.0
200
111.2603ms

C:\dev\source\flr-go> go run .
HTTP/2.0
200
50.9962ms

C:\dev\source\flr-go> go run .
HTTP/2.0
200
260.4256ms

C:\dev\source\flr-go> go run .
HTTP/2.0
200
43.1727ms
```
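For context, the numbers above come from a small Go client roughly like this (the URL is a placeholder, and it assumes the local dev certificate is trusted; the body is drained so the timing covers the full transfer, comparable to k6’s http_req_receiving):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// placeholder URL; Go's default client negotiates HTTP/2 over TLS,
	// which matches the HTTP/2.0 lines in the output above
	const url = "https://localhost:5001/api/export"

	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// drain the body so the elapsed time covers the full transfer
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		panic(err)
	}

	fmt.Println(resp.Proto)
	fmt.Println(resp.StatusCode)
	fmt.Println(time.Since(start))
}
```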
Then I did the same call a couple of times using k6 (with a filled cache) and got this:
At this point this seems strange, and as someone who is not an ASP.NET or Windows user I doubt I will be able to continue debugging it, but hopefully someone else on the team will be able to reproduce it and track it down to something more concrete.