From wrk to k6: equivalent parameters and testing methodology

We are migrating our performance testing repository from wrk to k6, but we want to keep the testing methodology of our previous setup.

In our current configuration, we handle large files containing approximately 300,000 requests. These requests are read line by line in a Lua script and concatenated for use by wrk. The command currently in use is:

./wrk -d 1m -t 8 -c 256 --timeout 30s -R 16000 -q -s pipeline.lua http://localhost:8080 ./requests.reqs

Referencing the wrk documentation:

-c, --connections: total number of HTTP connections to keep open with
                   each thread handling N = connections/threads

-d, --duration:    duration of the test, e.g. 2s, 2m, 2h

-t, --threads:     total number of threads to use

-R, --rate:        work rate (throughput) in requests/sec (total)

My question is about the equivalent parameters in k6. Can these be specified in the ‘options’ object of a k6 test?

The -c concept in wrk appears similar to the ‘vus’ concept in k6, but the documentation is somewhat ambiguous. The k6 documentation contains related options such as ‘batch’, ‘rps’, and ‘iterations’, but none of them exactly matches the wrk parameters.

I would greatly appreciate it if someone could provide an example of a complete k6 test that mimics the wrk methodology.

Disclaimer: I know k6 is not focused on achieving the highest concurrent-user or RPS performance of tools like wrk.

Hi @omerratsaby

Welcome to the community forum :wave:

I’m not familiar with wrk. We saw this was also asked at load - From wrk to k6: equivalent parameters and testing methodology - Stack Overflow, which contains an example from the community. Maybe someone in the community will be able to help more :raised_hands:

First, we should point out that there isn’t a direct translation between wrk concepts and those in k6. That is not to say it isn’t possible to perform a similar test in k6 to the one you have in wrk.

Regarding wrk equivalents in k6:

  • The k6 concept of a Virtual User (VU) most closely matches what wrk calls a thread. Each VU is essentially its own thread of execution that acts independently of the other VUs/threads, similar to JMeter, which also calls its virtual users threads.
  • As for -R (requests per second), note that some executors let you define that. E.g. the constant-arrival-rate executor has a rate option, which lets you set a target number of iterations per second (or minute, or hour) to achieve across all of your VUs/threads.
  • connections does not have a direct translation. The number of open connections will correspond to the number of VUs running at a given point in time to reach the rate you specify.

Assuming you use the constant-arrival-rate executor and your script (more specifically, the export default function, or the function named by the scenario’s exec property) consists of a single HTTP request, then iterations/sec will equal requests/sec, and requests/sec should correspond to the number of active HTTP connections.
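As a rough sketch, your wrk invocation might translate to a scenario like the one below. The rate and duration are taken directly from your -R and -d flags; maxVUs of 256 loosely mirrors -c, but note that in k6 it is a cap on VUs rather than a fixed connection count:

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      rate: 16000,          // target iterations per timeUnit, like wrk's -R 16000
      timeUnit: '1s',
      duration: '1m',       // like wrk's -d 1m
      preAllocatedVUs: 100, // VUs to allocate up front before the test starts
      maxVUs: 256,          // upper bound on VUs, loosely mirroring wrk's -c 256
    },
  },
};

export default function () {
  // One request per iteration, so iterations/sec == requests/sec.
  http.get('http://localhost:8080');
}
```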

If you feed a list of URLs into the function so that requests go to a random endpoint (e.g. using a SharedArray), the only thing you won’t have control over is the number of VUs/threads used to make the requests; you can cap it with maxVUs. Note, though, that if the server’s latency increases under load, at some point k6 may be unable to reach the required rate: by default, k6 sends requests synchronously, so it is at the mercy of how quickly the server responds and of the maxVUs it can run.
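A sketch of that SharedArray approach, under the assumption (since we don’t know the actual format of your file) that requests.reqs contains one request path per line:

```javascript
import http from 'k6/http';
import { SharedArray } from 'k6/data';

// Assumption: requests.reqs holds one request path per line.
// The file is read once in the init context and shared across all VUs.
const requests = new SharedArray('requests', function () {
  return open('./requests.reqs')
    .split('\n')
    .filter((line) => line.length > 0);
});

export default function () {
  // Pick a random line per iteration; the arrival-rate executor sets the pace.
  const path = requests[Math.floor(Math.random() * requests.length)];
  http.get(`http://localhost:8080${path}`);
}
```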

That said, you probably won’t care how many threads/VUs are used, so you can focus on the requests per unit of time you want. This should be doable with the constant-arrival-rate executor or similar executors (e.g. constant-vus).

I also recommend you look at Concepts | Grafana k6 documentation to check the different scenarios and the type of load they can generate.

What could also mimic your wrk usage more closely is using http.batch with a sizable input array of requests, and then the batchPerHost option to control the throughput/number of active connections.
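A minimal sketch of the http.batch variant; the URL list here is a hypothetical placeholder (in practice it could be loaded from your requests file as shown earlier):

```javascript
import http from 'k6/http';

export const options = {
  // batchPerHost caps the number of parallel connections per host
  // within a single http.batch() call.
  batchPerHost: 256,
  vus: 8,
  duration: '1m',
};

// Hypothetical list of endpoints for illustration.
const urls = [
  'http://localhost:8080/a',
  'http://localhost:8080/b',
  'http://localhost:8080/c',
];

export default function () {
  // Issues the whole array of requests in parallel,
  // at most batchPerHost in flight per host at a time.
  http.batch(urls);
}
```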

Finally, you can usually parametrize options as well.
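For instance, k6 exposes environment variables via __ENV, so the numbers in the scenario could be supplied on the command line (k6 run -e RATE=16000 -e DURATION=1m script.js). A sketch of such a parametrized options fragment:

```javascript
export const options = {
  scenarios: {
    constant_request_rate: {
      executor: 'constant-arrival-rate',
      // Fall back to the wrk-equivalent defaults when no -e flags are passed.
      rate: Number(__ENV.RATE || 16000),
      timeUnit: '1s',
      duration: __ENV.DURATION || '1m',
      preAllocatedVUs: 100,
      maxVUs: Number(__ENV.MAX_VUS || 256),
    },
  },
};
```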

I hope this helps :bowing_woman: