Reconcile(); stage =  {"namespace": "default", "name": "k6-sample", "reconcileID": "68b7bcb6-bd99-4f31-8967-aad21fdb879c"}
2024-09-30T13:30:45Z INFO controllers.TestRun Initialize test {"namespace": "default", "name": "k6-sample", "reconcileID": "68b7bcb6-bd99-4f31-8967-aad21fdb879c"}
Initializer job is ready to start with image ghcr.io/grafana/k6-operator:latest-runner and command `[sh -c mkdir -p $(dirname /tmp/k6_script_notifySessionFlow.js.archived.tar) && k6 archive /test/k6_script_notifySes…`
2024-09-30T13:30:45Z INFO controllers.TestRun Reconcile(); stage = initialization {"namespace": "default", "name": "k6-sample", "reconcileID": "81bcb0bd-fb2e-4585-800f-b4737815384c"}
No initializing pod found yet {"namespace": "default", "name": "k6-sample", "reconcileID": "81bcb0bd-fb2e-4585-800f-b4737815384c"}
YAML file:

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-sample
  namespace: default
spec:
  parallelism: 1
  script:
    configMap:
      name: k6-test
      file: k6_script_notifySessionFlow.js
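For reference, the ConfigMap referenced here is typically created from the script file, along these lines (assuming the script is in the current directory):

kubectl create configmap k6-test -n default --from-file=k6_script_notifySessionFlow.js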
When running `kubectl get testrun`, it shows the TestRun below stuck in initialization forever:

NAME        STAGE            AGE   TESTRUNID
k6-sample   initialization   10s
kubectl describe testrun k6-sample
Conditions:
  Last Transition Time:  2024-09-30T14:51:39Z
  Message:
  Reason:                TestRunTypeUnknown
  Status:                Unknown
  Type:                  CloudTestRun
  Last Transition Time:  2024-09-30T14:51:39Z
  Message:
  Reason:                TestRunPreparation
  Status:                Unknown
  Type:                  TestRunRunning
  Last Transition Time:  2024-09-30T14:51:39Z
  Message:
  Reason:                TeardownExecutedFalse
  Status:                False
  Type:                  TeardownExecuted
  Last Transition Time:  2024-09-30T14:51:39Z
  Message:
  Reason:                CloudTestRunAbortedFalse
  Status:                False
  Type:                  CloudTestRunAborted
  Last Transition Time:  2024-09-30T14:51:39Z
  Message:
  Reason:                CloudPLZTestRunFalse
  Status:                False
  Type:                  CloudPLZTestRun
Stage: initialization
Events:
Logs:
Creation of initializer pod takes too long: your configuration might be off. Check if initializer job and pod were created successfully.
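To check that, you can inspect the initializer job and its pods directly; for example (namespace and names follow the TestRun in this thread, adjust for your setup):

kubectl get job k6-sample-initializer -n default
kubectl describe job k6-sample-initializer -n default
kubectl get pods -n default | grep k6-sample-initializer

If an admission policy engine such as Kyverno rejects the pod, the denial usually shows up as a FailedCreate event in the job's describe output.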
Okay, that was an issue with a Kyverno policy violation. Now the initializer job has been created, as seen from the command below:

kubectl describe job k6-sample-initializer
  Normal  SuccessfulCreate  4m51s  job-controller  Created pod: k6-sample-initializer-mw2bw

But the test is still not starting. This pod is running, but the other two pods responsible for running the tests have not started yet. How should I debug this?
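For anyone debugging the same situation, the initializer pod and the operator's own logs are usually the quickest things to check. The deployment, namespace, and container names below assume a default bundle install of the operator and may differ in your cluster:

kubectl get pods -n default | grep k6-sample
kubectl describe pod k6-sample-initializer-mw2bw -n default
kubectl logs -n k6-operator-system deploy/k6-operator-controller-manager -c manager --tail=50

The operator log shows which stage the reconciler is waiting on.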
When I delete the pod manually using

kubectl delete pod k6-sample-initializer-mw2bw

I can see k6-sample-1 and k6-sample-starter running and the test gets executed. Can anyone please help me debug why it doesn't start on its own but works fine after I delete the k6-sample-initializer pod?
I have looked at the k6-operator source code and I can see that the reason it is not spinning up the test run is the condition below, since in my case we have a linkerd-proxy container that is in a running state all the time.

if podList.Items[0].Status.Phase != corev1.PodSucceeded {
    log.Info("Waiting for initializing pod to finish")
    return
}

I am still searching for a workaround for this.
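One workaround sometimes used for short-lived job pods in a Linkerd mesh is to opt the k6 pods out of sidecar injection so the initializer can actually reach the Succeeded phase. The snippet below is only a sketch: it assumes your k6-operator version exposes pod metadata for these pods (check the TestRun CRD for spec.runner, spec.starter, and spec.initializer fields), it assumes namespace policies allow the linkerd.io/inject: disabled annotation, and it means the test traffic will bypass the mesh.

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-sample
  namespace: default
spec:
  parallelism: 1
  script:
    configMap:
      name: k6-test
      file: k6_script_notifySessionFlow.js
  runner:
    metadata:
      annotations:
        linkerd.io/inject: disabled   # assumption: metadata field supported by your operator version
  starter:
    metadata:
      annotations:
        linkerd.io/inject: disabled
  initializer:
    metadata:
      annotations:
        linkerd.io/inject: disabled   # lets the initializer pod terminate and reach Succeeded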
Hi @regearpit93, welcome to the forum
kubectl describe job k6-sample-initializer
Normal SuccessfulCreate 4m51s job-controller Created pod: k6-sample-initializer-mw2bw
Here, how did the initializer pod finish? It is a very short-running pod and it's very unlikely to get stuck if it was started. Could you check the status and the logs of the initializer?
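For example (pod name taken from the job events above; --all-containers also covers any injected sidecars):

kubectl get pod k6-sample-initializer-mw2bw -n default
kubectl logs k6-sample-initializer-mw2bw -n default --all-containers
kubectl describe pod k6-sample-initializer-mw2bw -n default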
Sorry for the delay. This pod is in a Running state and is not finishing, since in my cluster the linkerd-proxy container is always running and I cannot get rid of it due to namespace policies.
I saw in the k6-operator source code that unless the initializer pod reaches a finished/completed (Succeeded) state, it will not run any test. Currently the initializer pod is in a Running state, but no other pod spins up and the tests do not run.
if podList.Items[0].Status.Phase != corev1.PodSucceeded {
log.Info("Waiting for initializing pod to finish")
return
}
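One way to see which container is keeping the pod out of the Succeeded phase (a pod only reaches Succeeded once every container in it has terminated; pod name as above):

kubectl get pod k6-sample-initializer-mw2bw -n default \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'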
(screenshot attached)
Please let me know if you have any solution for this.