K6 distributed - starter and runner jobs are not created

I created the config map using:
kubectl create configmap scenarios-test --from-file=archive.tar
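
(For reference, an archive like this is typically produced with the k6 CLI beforehand; script.js here stands in for the actual test script name:)

# k6 archive bundles the script and its dependencies; it writes archive.tar by default
k6 archive script.js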

I am trying to schedule a test run using that config map, as shown below. I tested the scenario locally and there was no issue.

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-test
spec:
  parallelism: 4
  script:
    configMap:
      name: scenarios-test
      file: archive.tar

Only the k6-test-initializer-rx5j8 pod is created. k6 is not creating the starter or runner jobs.

Can you please provide any suggestions or documentation to debug this issue?
Thank you!

I am facing a similar issue.
NAME                          READY   STATUS      RESTARTS   AGE
k6-sample-initializer-rp9qf   0/1     Completed   0          2m33s

Only the initializer job started; there is no starter job or k6-sample jobs running the test.

Hi @gunjanvmirchandani!

Were you able to verify your test script is working by running it outside of the operator? Often, an invalid script is the cause. You should be able to get more information about the issue by checking the logs of the initializer job.

If everything is still good and there is still an issue, please provide more details so we can better understand what may be happening.
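
For example, something like this should surface any script errors from the initializer (the pod name below is taken from your listing and will differ on each run):

kubectl logs k6-sample-initializer-rp9qf
# or address the job directly instead of the generated pod name:
kubectl logs job/k6-sample-initializer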


Thanks @javaducky, I think the issue was with parallelism. I set it from 4 to 1 and now I see all pods coming up, including the starter and k6-sample pods. However, they are erroring out:

sample-starter:
Error from server: Get "https://10.240.169.89:10250/containerLogs/k6-demo/k6-sample-starter-shlhn/k6-curl": dial tcp 10.240.169.89:10250: connect: connection refused

kubectl get all -n k6-demo
NAME                              READY   STATUS                   RESTARTS   AGE
pod/k6-sample-1-8g69d             0/1     ContainerStatusUnknown   1          5m21s
pod/k6-sample-initializer-w8b6j   0/1     Completed                0          5m35s
pod/k6-sample-starter-m7hdq       0/1     Error                    0          3m9s
pod/k6-sample-starter-sflgv       0/1     Error                    1          4m12s
pod/k6-sample-starter-shlhn       0/1     Error                    0          4m18s
pod/k6-sample-starter-tvdvx       0/1     Error                    0          2m44s
pod/k6-sample-starter-zrh7f       0/1     Error                    0          86s

This happens when you try to run the test script with a higher parallelism than the maximum number of VUs in the script. There is no error in the initializer pod, but you can see the error message in the manager container of the k6-operator-controller-manager pod:

ERROR	controllers.K6	Parallelism argument cannot be larger than maximum VUs in the script
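
To read those logs yourself, something along these lines should work (a sketch; k6-operator-system is the default namespace for a bundle install and may differ in your cluster):

kubectl logs -n k6-operator-system deploy/k6-operator-controller-manager -c manager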

@bandorko, that is right. I was able to solve that problem by adjusting the parallelism. I can now see all the pods coming up, but they are erroring out with the same error and pod statuses as in my previous post. Do you have any suggestions for investigating the issue?

@gunjanvmirchandani

I think this error message is not produced by k6; rather, it comes from Kubernetes itself, because it cannot reach the kubelet on that node (port 10250) to fetch the container logs.

I would check the logs of the initializer and starter pods for any errors, as well as the manager container of the k6-operator-controller-manager pod. I would also check the output of kubectl get pod k6-sample-1-8g69d -o yaml.
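
Roughly like this (a sketch; pod names are taken from your listing above and will differ per run, and I am assuming the operator lives in the default k6-operator-system namespace):

# logs of the initializer and one of the failing starter pods
kubectl logs -n k6-demo k6-sample-initializer-w8b6j
kubectl logs -n k6-demo k6-sample-starter-shlhn
# logs of the operator itself
kubectl logs -n k6-operator-system deploy/k6-operator-controller-manager -c manager
# full status of the failed runner pod
kubectl get pod k6-sample-1-8g69d -n k6-demo -o yaml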
