If you mean the end-of-test summary that k6 emits, then unfortunately you can’t easily discard or ignore those metrics there…
[Add explicit tracking and ignoring of metrics and sub-metrics · Issue #1321 · grafana/k6](https://github.com/grafana/k6/issues/1321) is a proposal for a way to do that, but it’s not implemented yet. It (or something like it) is high on our priority list, but I can’t give you an ETA yet, sorry…
If you use an external output like InfluxDB, k6 Cloud, etc., you can filter out the setup() and teardown() metrics by their `group` or `scenario` tags.
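You can also attach your own tags to the requests in setup() and teardown() and filter on those instead. Just a sketch — the `stage` tag name here is arbitrary, not something k6 sets for you:

```javascript
import http from 'k6/http';

export function setup() {
    // The custom `stage` tag is attached to every metric sample this request
    // emits, so an external output can filter these samples out (or select
    // only them).
    http.get('https://httpbin.test.k6.io/delay/5', { tags: { stage: 'setup' } });
}
```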
Hmm, actually, you can use that and a few other tricks to filter setup() and teardown() metrics out of the end-of-test summary even now! That’s because you can (1) define thresholds on tags, (2) these threshold definitions generate sub-metrics, which are displayed in the end-of-test summary, and (3) as of v0.27.0, the metrics from each k6 scenario are automatically tagged with the scenario name (and you can add custom per-scenario tags too)!
So, something like this:
```javascript
import http from 'k6/http';

export let options = {
    vus: 5,
    duration: '10s',
    thresholds: {
        // We're using 'scenario:default' because that's the internal k6
        // scenario name when we haven't actually specified `options.scenarios`
        // explicitly and are just using the old execution shortcuts instead...
        'http_req_duration{scenario:default}': [
            // Some dummy threshold that's always going to pass. You don't even
            // need to have something here, I tried it and just this
            // `'http_req_duration{scenario:default}': []` is enough to trick
            // k6, but that's undefined behavior I can't promise we won't break
            // in the future...
            'max>=0',
        ],
    },
};

export function setup() {
    http.get('https://httpbin.test.k6.io/delay/5?stage=setup');
}

export default function () {
    http.get('http://test.k6.io/?where=default');
    http.get('https://httpbin.test.k6.io/delay/1?stage=default');
}

export function teardown() {
    http.get('https://httpbin.test.k6.io/delay/7?stage=teardown');
}
```
will result in an end-of-test summary like this:
```
data_received..............: 497 kB 21 kB/s
data_sent..................: 11 kB  442 B/s
http_req_blocked...........: avg=47.02ms  min=2.48µs   med=5.49µs   max=541.85ms p(90)=173.73ms p(95)=407.58ms
http_req_connecting........: avg=19.31ms  min=0s       med=0s       max=132.97ms p(90)=131.37ms p(95)=132.16ms
http_req_duration..........: avg=775.35ms min=136.49ms med=1.13s    max=7.13s    p(90)=1.13s    p(95)=1.14s
✓ { scenario:default }.....: avg=641.27ms min=136.49ms med=644.4ms  max=1.15s    p(90)=1.13s    p(95)=1.14s
http_req_receiving.........: avg=3.76ms   min=61.65µs  med=896.96µs max=17.83ms  p(90)=11.89ms  p(95)=14.19ms
http_req_sending...........: avg=56.48µs  min=14.18µs  med=44.96µs  max=331.36µs p(90)=108.14µs p(95)=127.76µs
http_req_tls_handshaking...: avg=24.36ms  min=0s       med=0s       max=341.86ms p(90)=0s       p(95)=276.07ms
http_req_waiting...........: avg=771.52ms min=133.11ms med=1.13s    max=7.13s    p(90)=1.13s    p(95)=1.14s
http_reqs..................: 82     3.402872/s
iteration_duration.........: avg=1.46s    min=1.27s    med=1.28s    max=5.67s    p(90)=1.87s    p(95)=1.87s
iterations.................: 40     1.659938/s
vus........................: 0      min=0 max=5
vus_max....................: 5      min=5 max=5
```
Notice how the ✓ { scenario:default } row doesn’t contain the long setup() and teardown() times. You can do this even if you have multiple scenarios, but then you’d have to add a dummy threshold for each scenario’s tag. On the flip side, that gives you even more flexibility: you’d be able to get any number of cross-sections for each sub-metric.
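For example, with two scenarios it could look something like this (just a sketch — the `browse` and `checkout` scenario names are made up):

```javascript
export let options = {
    scenarios: { /* definitions of the `browse` and `checkout` scenarios */ },
    thresholds: {
        // One always-passing dummy threshold per scenario, so that each one
        // gets its own sub-metric row in the end-of-test summary:
        'http_req_duration{scenario:browse}': ['max>=0'],
        'http_req_duration{scenario:checkout}': ['max>=0'],
    },
};
```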
See the last example from the “Advanced examples” section in the scenarios docs (sorry, can’t link directly to it), the one with:
```javascript
export let options = {
    // ...
    thresholds: {
        // We can set different thresholds for the different scenarios because
        // of the extra metric tags we set!
        'http_req_duration{test_type:api}': ['p(95)<250', 'p(99)<350'],
        'http_req_duration{test_type:website}': ['p(99)<500'],
        // We can reference the scenario names as well:
        'http_req_duration{scenario:my_api_test_2}': ['p(99)<300'],
    },
};
```
If you run that, the end-of-test summary will look somewhat like this:
```
...
http_req_duration..............: avg=140.2ms  min=131.85ms med=136.63ms max=233.94ms p(90)=149.76ms p(95)=154.92ms
✓ { scenario:my_api_test_2 }...: avg=148.57ms min=139.63ms med=145.59ms max=233.94ms p(90)=157.21ms p(95)=167.33ms
✓ { test_type:api }............: avg=148.95ms min=139.63ms med=145.69ms max=233.94ms p(90)=158.46ms p(95)=168.87ms
✓ { test_type:website }........: avg=135.07ms min=131.85ms med=134.01ms max=160.74ms p(90)=139.29ms p(95)=140.97ms
...
```
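For context, the `// ...` in that excerpt hides the `scenarios` definition. It’s roughly along these lines (a sketch, not the exact docs code — the executor settings here are illustrative):

```javascript
export let options = {
    scenarios: {
        my_website_test: {
            executor: 'constant-vus',
            vus: 10,
            duration: '1m',
            // This custom tag is what the `test_type:website` threshold matches.
            tags: { test_type: 'website' },
        },
        my_api_test_2: {
            executor: 'constant-arrival-rate',
            rate: 50,
            timeUnit: '1s',
            duration: '1m',
            preAllocatedVUs: 20,
            // Custom tag for the `test_type:api` threshold; the
            // `scenario:my_api_test_2` tag is added automatically by k6.
            tags: { test_type: 'api' },
        },
    },
    thresholds: {
        // ... as shown in the excerpt above
    },
};
```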