Strange k6 behaviour with HTTP2

Hello community!

I am new to k6 and tested an API by requesting a large dataset (in a local environment).
Script:

import http from 'k6/http';
import { check } from "k6";


export let options = { maxRedirects: 4, iterations: 1, vus:1, duration: "120m" };
const params = {
  headers: {
    'Content-Type': 'application/json',
  },
};


export default function() {
  let res = http.get('https://localhost:5001/data', params);
  check(res,{    
    'is status 200': (r) => r.status === 200,
    'is HTTP/2.0': (r) => r.proto === "HTTP/2.0",
  });
}

Summary:

....
data_received: 21MB,
data_sent: 35KB
....

I was a bit confused by the size of the sent data, so I inspected the connection with Wireshark:
(the screenshot shows the TCP timeline: k6 → server)

Does anyone know why k6 sends so much application data while it is receiving the response?
(Because of TLS I cannot see inside the data.)

I also tested the exact same request with k6 over HTTP/1.1 and with Chrome (HTTP/2), and there are no such strange packets.

Thank you!

Hi @phil,

From what I can see experimenting with Wireshark locally, all four of:

  • k6
  • firefox
  • curl
  • chromium

have the same application data frames, but they come from the server.

Are you sure that you actually applied the filter above? If you don’t hit enter, Wireshark keeps the old filter, so maybe you were just looking at all packets for the connection before :man_shrugging: .

I also used a modified version of feature: add KeyLogWriter runtime option and envVar by chtnnh · Pull Request #2210 · grafana/k6 · GitHub to get an SSL key log working with k6. I tested curl with an SSL key log as well and I am getting the same behaviour, unless I am missing something big.
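
For reference, the hook that PR builds on is the standard crypto/tls KeyLogWriter. A minimal plain-Go sketch (not k6’s actual code; the file name, URL, and InsecureSkipVerify are just placeholders for a local test setup):

package main

import (
	"crypto/tls"
	"net/http"
	"os"
)

func main() {
	// KeyLogWriter makes the TLS stack append NSS-style session secrets
	// to this file, which Wireshark can then use to decrypt the capture.
	keyLog, err := os.OpenFile("sslkeys.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o600)
	if err != nil {
		panic(err)
	}
	defer keyLog.Close()

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				KeyLogWriter:       keyLog,
				InsecureSkipVerify: true, // only for a local dev certificate
			},
			// Needed so the custom TLS config does not silently disable HTTP/2.
			ForceAttemptHTTP2: true,
		},
	}

	resp, err := client.Get("https://localhost:5001/data")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}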

Can you also give more info about the type of connection you have? I tested with HTTP/1.1 and “https://httpbin.test.k6.io/drip?duration=5&numbytes=10000&code=200&delay=2”, so maybe there are more variables to this :person_shrugging:

Hi @mstoykov,
thank you for the reply!

More info about the env:

  • .NET Core 3.1 web service (tested with Kestrel and IIS) on a Windows machine
  • the packets only appear with HTTP/2 over TLS:
    the Wireshark filter is applied, and you can see in the description that the source port belongs to k6 (61797) and the destination port to the server (5001)

Steps already done to reproduce:

  • tested it on a blank .NET Core project with the default example (WeatherForecast) → same packets appear
  • tested it on a second Windows machine (HTTP/2 + TLS) → same packets appear
  • tested it with HTTP/1.1 + TLS → packets DO NOT appear
  • called the REST URI in Chrome (HTTP/2 + TLS) and Firefox (HTTP/2 + TLS) and checked the connection in Wireshark → packets DO NOT appear

So after these tests it looks like these packets only appear when using HTTP/2 (TLS) with k6.

Hi @phil, sorry for the really slow reply, but this took a bit more playing around to see what is happening.

After some amount of debugging, this seems to come from an HTTP/2 WINDOW_UPDATE frame being sent each time Go (the language k6 is written in) reads from the connection.

Relevant stack trace:

runtime/debug.PrintStack()
        runtime/debug/stack.go:16 +0x19
go.k6.io/k6/lib/netext.(*Conn).Write(0xc000574000, {0xc0005310e0, 0xc0005310e5, 0x4746ce})
        go.k6.io/k6/lib/netext/dialer.go:270 +0xa5
crypto/tls.(*Conn).write(0xc001ccc000, {0xc0005310e0, 0x5, 0x5d})
        crypto/tls/conn.go:912 +0x108
crypto/tls.(*Conn).writeRecordLocked(0xc001ccc000, 0x17, {0xc0004bb000, 0xd, 0x1000})
        crypto/tls/conn.go:980 +0x351
crypto/tls.(*Conn).Write(0xc0026060c8, {0xc0004bb000, 0xd, 0x1000})
        crypto/tls/conn.go:1151 +0x405
golang.org/x/net/http2.stickyErrWriter.Write({{0x7f8318315780, 0xc001ccc000}, 0xc0023402b8}, {0xc0004bb000, 0xc002606120, 0xba177a})
        golang.org/x/net@v0.0.0-20211209100829-84cba5454caf/http2/transport.go:398 +0x4f
bufio.(*Writer).Flush(0xc001a52240)
        bufio/bufio.go:607 +0x62
golang.org/x/net/http2.transportResponseBody.Read({0xc00135cc00}, {0xc000a49e00, 0x7f8318315af0, 0xc002606210})
        golang.org/x/net@v0.0.0-20211209100829-84cba5454caf/http2/transport.go:2308 +0x46c
bytes.(*Buffer).ReadFrom(0xc00135cc00, {0x7f8318315ab0, 0xc002340300})
        bytes/buffer.go:204 +0x98
io.copyBuffer({0x12f4d20, 0xc00135cc00}, {0x7f8318315ab0, 0xc002340300}, {0x0, 0x0, 0x0})
        io/io.go:409 +0x14b
io.Copy(...)
        io/io.go:382
go.k6.io/k6/lib/netext/httpext.readResponseBody(0xc00235c000, 0x0, 0xc0024be090, {0x0, 0x0})
        go.k6.io/k6/lib/netext/httpext/compression.go:197 +0x6b3
go.k6.io/k6/lib/netext/httpext.MakeRequest({0x13105a8, 0xc0024d65a0}, 0xc00235c000, 0xc0024be480)
        go.k6.io/k6/lib/netext/httpext/request.go:299 +0x1625

This results in 42 bytes written back (before the additional bytes for other layers such as TCP and IP).
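
As a rough cross-check on the 42 bytes (my own back-of-the-envelope, not from the capture): a single WINDOW_UPDATE frame is 13 bytes on the wire, and the remainder is TLS record overhead, for example about 29 bytes for a TLS 1.2 AES-GCM record. A small standalone Go sketch:

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	fr := http2.NewFramer(&buf, nil)

	// A WINDOW_UPDATE frame is a 9-byte frame header plus a 4-byte
	// window-size increment; stream ID and increment are arbitrary here.
	if err := fr.WriteWindowUpdate(1, 4096); err != nil {
		panic(err)
	}
	fmt.Println(buf.Len()) // 13

	// Wrapped in a TLS 1.2 AES-GCM record (5-byte header + 8-byte explicit
	// nonce + 16-byte tag = 29 bytes of overhead) that is 13 + 29 = 42 bytes.
	// The cipher-suite details are an assumption on my part.
}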

At k6/lib/netext/httpext/compression.go:197 k6 is reading the response.

In the Go http2 transport source (link to code) we can see that we flush back after calling WriteWindowUpdate.

This seems way too aggressive on Go’s side. Browsers (and curl) seem to just not update the window at all (after the beginning), at least with the small requests I tried.

It seems like this has already been reported in x/net/http2: WINDOW_UPDATE sent rate too high and can't be configured · Issue #28732 · golang/go · GitHub, but nothing has been done.

As confirmation that these are the same packets: if we log from k6 to an SSL key log file and use that in Wireshark, those same packets decode as HTTP/2 WINDOW_UPDATE and the payload is exactly 42 bytes :tada:

This probably isn’t really a problem in most cases, and there is really nothing we can do apart from maybe trying to fix it upstream, which seems a bit daunting. Looking at the code, it won’t be super easy to just skip the update given some condition. If anything, parts of it look like they are trying to prevent exactly this but … don’t. But all of this code is also 6+ years old, so :person_shrugging:

There is a Stack Overflow question with some info around “how often should WINDOW_UPDATE be issued?”

There is some more info here which in practice confirms my suspicion that the WINDOW_UPDATE is sent way too often. There are also some claims about how much slower this makes things, which is relevant for k6.

But the numbers given there are quite big, especially as k6 usually has to do a lot more than just move data around, and usually not over a local network.

Given that we haven’t noticed any problems so far, it seems to me that this isn’t that big of an issue for our use case, but it is still nice to keep an eye on it; hopefully it will be “fixed” soon :crossed_fingers:

Bumping transportDefaultStreamMinRefresh from 4KB (to 65KB in my case) in practice removes the WINDOW_UPDATE calls. I do think that 4KB is really low, especially as the window size is 4MB, so we are updating it on a 0.1% change :frowning: .
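
To illustrate the shape of that logic, here is a very simplified paraphrase (plain Go, not the actual x/net/http2 source; the function, chunk size, and body size are made up for the example): the reader only sends and flushes a WINDOW_UPDATE once the unacknowledged bytes cross the minimum-refresh threshold, so raising that threshold directly reduces how many of these frames go out.

package main

import "fmt"

const minRefresh = 4 << 10 // 4 KB, the threshold being discussed above

// consume models the body reader: it adds n freshly read bytes to the
// unacknowledged counter and "sends" a WINDOW_UPDATE (plus a flush) as
// soon as that counter crosses minRefresh.
func consume(unacked, n int, send func(incr int)) int {
	unacked += n
	if unacked < minRefresh {
		return unacked // below the threshold: no frame, no flush
	}
	send(unacked) // WINDOW_UPDATE carrying the full refund, then flush
	return 0
}

func main() {
	updates, unacked := 0, 0
	// Read a 21 MB body in 8 KB chunks, roughly like io.Copy would.
	for read := 0; read < 21<<20; read += 8 << 10 {
		unacked = consume(unacked, 8<<10, func(int) { updates++ })
	}
	fmt.Println("WINDOW_UPDATE frames sent:", updates)
}

In this toy model a 4KB threshold means every 8KB read triggers a frame, while a 64KB threshold makes the frame much rarer and, for small responses like the drip test above, eliminates it entirely.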

I guess there are some downsides to just making it bigger :person_shrugging: - I will try to figure out some way to communicate this to the Go team and see if there is any desire for most of those constants to be bumped.

Hey @mstoykov ,

thank you for the detailed analysis!
Very interesting to see where the problem happens…

For me it is totally fine because I switched to HTTP/1.1 and the test results look good. It was just strange, I could not explain this behaviour, and that’s why I asked for help here.

I agree with you that in most cases it is not that relevant, and the fact that there were no questions about it should prove it…

Thank you very much for your help and the insights!
