I have multiple tests running sequentially in a CI job. They all have different init, setup, and teardown contexts, so I don’t think I can import their default functions and run them from one ‘master’ script.
The problem is that if one of them fails the thresholds that are set, the whole CI job fails without running the rest of the performance scripts. Is it possible to ensure that all the scripts run sequentially, and only have the CI job fail at the end of the run if one of the k6 scripts has failed thresholds?
Hi @larchie ,
This is generally down to the CI itself: the moment a command exits with a non-zero exit code, the job aborts. You can work around that by appending `|| true` to every command whose failure should not abort the job. Be careful, though: this hides the failure entirely, which is risky if something gets deployed at the end of the pipeline.
```shell
k6 run script.js          # aborts the job if k6 exits non-zero, e.g. because a threshold failed
k6 run script2.js || true # never aborts the job, even if thresholds fail
```
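If you want the job to run every script but still fail at the end when any of them failed, you can record each exit status in a flag instead of discarding it with `|| true`. A minimal sketch of that pattern (the script names are placeholders; `true` and `false` stand in here for a passing and a threshold-failing `k6 run`):

```shell
#!/bin/sh
# Run every script in sequence, remember whether any failed,
# and only fail the CI job at the very end.
failed=0

step() {
    # Run one command; on failure, set the flag instead of aborting.
    "$@" || failed=1
}

# In the real job each line would be:  step k6 run some-script.js
step true    # stand-in for a passing k6 run
step false   # stand-in for a k6 run that fails its thresholds

echo "failed=$failed"
# The job would then finish with:  exit "$failed"
```

Because `step` only ever sets the flag (never clears it), one failing script anywhere in the sequence is enough to make the final `exit` non-zero.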
Another way you could do it, depending on the capabilities of your CI platform, is to set each test up as a separate job in the pipeline. We use GitLab CI at my company, and all of the test jobs I want to run as part of a scheduled pipeline belong to one resource group, which makes them execute one at a time. The test jobs are all in the same stage and run regardless of whether any of them fail.
The only caveat with my setup is that the jobs run in a random order. I could probably adjust that with some additional configuration if I needed a specific order, but for my use case the order was not important.
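For reference, a GitLab setup along those lines might look roughly like this; the stage, job, script, and resource group names here are all made up for illustration:

```yaml
# Hypothetical .gitlab-ci.yml fragment.
stages:
  - perf

.perf-job:                  # shared settings for every k6 test job
  stage: perf
  resource_group: k6-perf   # serializes the jobs: only one runs at a time

run-login-test:
  extends: .perf-job
  script: k6 run login-test.js

run-checkout-test:
  extends: .perf-job
  script: k6 run checkout-test.js
```

Because both jobs are in the same stage, a threshold failure in one does not stop the other from running, and the pipeline as a whole is marked failed at the end if any job failed.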