K6-Operator Test Cleanup

I have GitHub Actions triggering k6 tests. Sometimes the kubectl delete step doesn't get run in the GitHub Actions workflow run. How can I clean up all the k6 tests at the end of the day (deleting pods, jobs, and configmaps) without the original YAML file used to deploy them with the kubectl apply command? Would it be as simple as deleting the k6 job?

Hi @megaman, welcome to the community forum :wave:

Could you please explain a bit more about how you noticed this problem and how it impacts you? Do you by any chance use self-hosted runners, where the job for some reason fails to execute to the end? If so, what would that reason be?

AFAIU, GitHub-hosted runners shouldn't impact each other's execution, i.e. it shouldn't matter whether pods are destroyed or not (unless that's what you're trying to check in the workflow?). According to the GitHub docs:

This is not an issue with GitHub-hosted runners because each GitHub-hosted runner is always a clean isolated virtual machine, and it is destroyed at the end of the job execution.


I figured out my solution already. I'm still learning Kubernetes: the k6-operator deploys a custom resource definition, k6s.k6.io. I can get a list of the k6 resources deployed in a namespace with this command.

kubectl get k6s.k6.io
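
If your tests live in a particular namespace, the usual kubectl namespace flags apply; the namespace name below is just a placeholder:

kubectl get k6s.k6.io -n <your-namespace>
kubectl get k6s.k6.io --all-namespaces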

You can also describe or delete a k6 resource by running the commands below.

kubectl describe k6s.k6.io/<your-k6-test-run-name-here>
kubectl delete k6s.k6.io/<your-k6-test-run-name-here>
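
For the end-of-day cleanup from the original question, a rough sketch (the namespace and configmap names are placeholders) is to delete every k6 resource in the namespace at once; the operator should then clean up the jobs and pods it created for each test run, though configmaps you applied yourself may need a separate delete:

kubectl delete k6s.k6.io --all -n <your-namespace>
kubectl delete configmap <your-test-script-configmap> -n <your-namespace>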