I’m trying to interpret the results from a test session, and it appears that my API backend received more requests than the iterations metric indicates. My test uses a large number of VUs (1000), and I’m assuming that iterations may not capture the total number of calls made to the backend, since a VU is terminated as soon as the test duration completes (i.e. there could be many VUs that have made a request but have not completed the iteration, and as such are not included in the stats).
Is this the case?
If so, is there a way to ensure that all VUs complete gracefully, so that I can make a valid reconciliation between the API logs and the k6 stats?
As an example, I configured k6 with the following options:
export let options = {
  // Ramp traffic up from 1 to 1000 users over 30 secs,
  // then ramp down to 0 and hold for 30 secs.
  stages: [
    { duration: '30s', target: 1000 },
    { duration: '30s', target: 0 },
    { duration: '30s', target: 0 },
  ],
};
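From reading the docs, I suspect the scenarios API (k6 v0.27+) may be the intended fix here, since the `ramping-vus` executor has `gracefulRampDown` and `gracefulStop` options that give in-flight iterations time to finish before their VUs are stopped. A sketch of what I think the equivalent configuration would look like (the `30s` grace periods are my own guesses, not values taken from my test):

```javascript
// Same load shape, expressed via the scenarios API (k6 v0.27+).
// gracefulRampDown / gracefulStop let in-flight iterations finish
// (and emit their metrics) instead of being interrupted.
export let options = {
  scenarios: {
    ramping: {
      executor: 'ramping-vus',
      startVUs: 1,
      stages: [
        { duration: '30s', target: 1000 },
        { duration: '30s', target: 0 },
        { duration: '30s', target: 0 },
      ],
      gracefulRampDown: '30s', // grace period when scaling VUs down
      gracefulStop: '30s',     // grace period at the end of the test
    },
  },
};
```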
My backend logs tell me that 9615 requests were made, yet the k6 stats tell me the following:
execution: local
output: -
script: ./test/load_test/k6test.js
duration: -, iterations: -
vus: 1, max: 1000
done [==========================================================] 1m30s / 1m30s
✓ post successful
202........................: 9170 101.888701/s
Initiated..................: 10170 112.999791/s
checks.....................: 100.00% ✓ 9170 ✗ 0
data_received..............: 7.8 MB 86 kB/s
data_sent..................: 11 MB 122 kB/s
http_req_blocked...........: avg=24.69ms min=0s med=1µs max=5.65s p(90)=1µs p(95)=60.6ms
http_req_connecting........: avg=9.84ms min=0s med=0s max=1.57s p(90)=0s p(95)=16.54ms
http_req_duration..........: avg=1.53s min=92.77ms med=206.69ms max=46.99s p(90)=3s p(95)=11.29s
http_req_receiving.........: avg=159.42ms min=60µs med=344µs max=33.13s p(90)=81.21ms p(95)=163.99ms
http_req_sending...........: avg=40.38ms min=53µs med=241µs max=1.48s p(90)=111.94ms p(95)=347.02ms
http_req_tls_handshaking...: avg=11.18ms min=0s med=0s max=4.51s p(90)=0s p(95)=41.57ms
http_req_waiting...........: avg=1.33s min=0s med=198.19ms max=34.64s p(90)=2.86s p(95)=10.29s
http_reqs..................: 9170 101.888701/s
iteration_duration.........: avg=1.55s min=93.31ms med=213.34ms max=46.99s p(90)=3.25s p(95)=11.3s
iterations.................: 9170 101.888701/s
vus........................: 0 min=0 max=998
vus_max....................: 1000 min=1000 max=1000
So I can see that 10170 requests were ‘initiated’ (that is, they got as far as the call to the test function), while 9615 requests arrived at the API. In other words, 10170 - 9615 = 555 initiated iterations never made their request.
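For context, the `Initiated` and `202` lines in the output are custom Counters in my script: `Initiated` is incremented at the top of the VU function, before the request is made, and `202` only after an accepted response. Roughly like this (names simplified; the real endpoint URL is replaced by a placeholder):

```javascript
import http from 'k6/http';
import { check } from 'k6';
import { Counter } from 'k6/metrics';

// Custom counters: one bumped before the request, one after a 202 response.
let initiated = new Counter('Initiated');
let accepted = new Counter('202');

export default function () {
  initiated.add(1); // counts iterations that reached the test function

  // 'https://example.com/api' stands in for the real endpoint.
  let res = http.post('https://example.com/api', JSON.stringify({}));

  check(res, { 'post successful': (r) => r.status === 202 });
  if (res.status === 202) {
    accepted.add(1);
  }
}
```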
Is it possible to ensure that all VUs complete gracefully and record their stats/metrics before they are terminated?