Performance of requests with larger payloads: HTTP/1.1 vs HTTP/2.0

Hello k6 Community,

I have a question about request performance when using k6 with its default HTTP/2.0 support compared to HTTP/1.1. Sending the same payload (a 519 KB XML file, flattened to a single line) via a regular k6 run takes around 2-2.5 seconds:

INFO[0004] [pwxtt8dvb2] [Thu Feb 15 2024 16:30:53 GMT+0100 (CET)] [TP] SUBMISSION STOP, took: 1801  source=console
INFO[0004] pwxtt8dvb2 blocked 0                          source=console
INFO[0004] pwxtt8dvb2 connecting 0                       source=console
INFO[0004] pwxtt8dvb2 tls-hs 0                           source=console
INFO[0004] pwxtt8dvb2 sending 1560.2153                  source=console
INFO[0004] pwxtt8dvb2 waiting 134.8165                   source=console
INFO[0004] pwxtt8dvb2 receiving 96.1651                  source=console
INFO[0004] pwxtt8dvb2 duration 1791.1969                 source=console

Whereas on HTTP/1.1, when running with GODEBUG=http2client=0,http2server=0:

INFO[0203] [75a7j6wqhr] [Thu Feb 15 2024 16:27:18 GMT+0100 (CET)] [API] SUBMISSION STOP, took: 304 ms  source=console
INFO[0203] 75a7j6wqhr blocked 0                          source=console
INFO[0203] 75a7j6wqhr connecting 0                       source=console
INFO[0203] 75a7j6wqhr tls-hs 0                           source=console
INFO[0203] 75a7j6wqhr sending 7.6531                     source=console
INFO[0203] 75a7j6wqhr waiting 294.6727                   source=console
INFO[0203] 75a7j6wqhr receiving 0.9378                   source=console
INFO[0203] 75a7j6wqhr duration 303.2636                  source=console

My scenario is as follows:

  execution: local
     script: ./k6/performance-test.js
     output: Prometheus remote write (http://localhost:9090/api/v1/write)

  scenarios: (100.00%) 4 scenarios, 60 max VUs, 3m50s max duration (incl. graceful stop):
           * type2: 0.50 iterations/s for 3m20s (maxVUs: 5-15, exec: sendType2Declaration, gracefulStop: 30s)
           * type1: 0.50 iterations/s for 3m20s (maxVUs: 5-15, exec: sendType1Declaration, gracefulStop: 30s)
           * type3: 0.15 iterations/s for 3m20s (maxVUs: 5-15, exec: sendType3Declaration, gracefulStop: 30s)
           * type4Large: 0.05 iterations/s for 3m20s (maxVUs: 5-15, exec: sendLargeType4Declaration, gracefulStop: 30s)

The only options that I am passing to k6 are:

k6 run ${K6_FILE} \
    --config ${SCENARIO} \
    -o experimental-prometheus-rw \
    --tag runId=${RUN_ID} \
    -e RUN_ID=${RUN_ID} \
    -e PAYLOADS_DIRECTORY="${PAYLOADS_DIRECTORY}" \
    -e HOST="${HOST_URL}" \
    --no-vu-connection-reuse
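
For reference, the HTTP/1.1 run is the same invocation with the GODEBUG variable prepended (k6 is a Go binary, so it honors Go's runtime HTTP/2 switches); something like:

```shell
# Same invocation, but with Go's HTTP/2 client support disabled,
# so k6 falls back to HTTP/1.1 for its requests.
GODEBUG=http2client=0,http2server=0 k6 run ${K6_FILE} \
    --config ${SCENARIO} \
    -o experimental-prometheus-rw \
    --tag runId=${RUN_ID} \
    -e RUN_ID=${RUN_ID} \
    -e PAYLOADS_DIRECTORY="${PAYLOADS_DIRECTORY}" \
    -e HOST="${HOST_URL}" \
    --no-vu-connection-reuse
```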

JMeter, which does not use HTTP/2.0 by default, gets a response from the endpoint in around 600 ms. The same goes for a Postman collection, which does not use HTTP/2.0 either, as it's not yet supported there as far as I know.

Why does my endpoint respond around 6-7 times more slowly over HTTP/2.0 than over HTTP/1.1?

Thank you for your responses. :slight_smile:

Hello @ncgcz,

Welcome back to the community forums! And apologies for the delayed answer :pray:

To be honest, I’m trying to understand what your problem is, but I don’t fully get it. Am I wrong if I say that you successfully managed to disable HTTP/2.0 in k6? And that the results with k6 and HTTP/1.1 are comparable to the rest of the tools?

Why does my endpoint respond around 6-7 times more slowly over HTTP/2.0 than over HTTP/1.1?

That’s a good question, and many variables (beyond k6’s scope) can be involved.
Have you tried to reproduce the same behavior with another HTTP/2.0 client?
Have you measured the latency from the client or from the server?
Do you observe different latencies between k6 and other clients when both use HTTP/2.0?
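
As a sketch of that first check, you could take k6 out of the picture and compare the two protocols with curl (here `$HOST` and `payload.xml` are placeholders for your endpoint and test file):

```shell
# Upload the same payload over each protocol and print curl's
# standard write-out timing variables for comparison.
curl --http1.1 -s -o /dev/null -X POST --data-binary @payload.xml \
  -w 'http/1.1: total=%{time_total}s starttransfer=%{time_starttransfer}s\n' "$HOST"
curl --http2 -s -o /dev/null -X POST --data-binary @payload.xml \
  -w 'http/2:   total=%{time_total}s starttransfer=%{time_starttransfer}s\n' "$HOST"
```

If curl shows the same gap, the cause is likely on the server or network side rather than in k6.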

Please try to provide a bit more information (or a reproducible scenario) and I’ll be happy to help.
So far, the scope of this scenario feels too broad to determine the reasons behind the behavior you observed.

Thanks! :bowing_man: