localFile usage throws error about mount volumes

When running with the following config:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: one-load-test
spec:
  parallelism: 1
  arguments: --out statsd
  script:
    localFile: /app/tests/load/scriptv1.js
  runner:
    image: some.custom.repo:latest
    serviceAccountName: k6-runner

I am presented with the following error:

manager 2022-06-27T23:31:47.730Z    ERROR    controllers.K6    Failed to launch k6 test    {"k6": "default/one-load-test", "error": "Job.batch \"one-load-test-1\" is invalid: [spec.template.spec.volumes[0].name: Required value, spec.template.spec.containers[0].volumeMounts[0].name: Required value, spec.template.spec.containers[0].volumeMounts[0].name: Not found: \"\", spec.template.spec.containers[0].volumeMounts[0].mountPath: Required value]"}
manager github.com/go-logr/zapr.(*zapLogger).Error
manager     /go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
manager github.com/grafana/k6-operator/controllers.launchTest
manager     /workspace/controllers/k6_create.go:80
manager github.com/grafana/k6-operator/controllers.createJobSpecs
manager     /workspace/controllers/k6_create.go:51
manager github.com/grafana/k6-operator/controllers.CreateJobs
manager     /workspace/controllers/k6_create.go:24
manager github.com/grafana/k6-operator/controllers.(*K6Reconciler).Reconcile
manager     /workspace/controllers/k6_controller.go:70
manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:235
manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:209
manager sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
manager     /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.2/pkg/internal/controller/controller.go:188
manager k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
manager     /go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:155
manager k8s.io/apimachinery/pkg/util/wait.BackoffUntil
manager     /go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:156
manager k8s.io/apimachinery/pkg/util/wait.JitterUntil
manager     /go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:133
manager k8s.io/apimachinery/pkg/util/wait.Until
manager     /go/pkg/mod/k8s.io/apimachinery@v0.18.6/pkg/util/wait/wait.go:90
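For context, the API server is rejecting the Job because the operator generated a volume and a volumeMount with empty names and no mountPath. A valid Job pod spec needs those fields filled in, along these lines (an illustrative fragment only — the names here are hypothetical, not what the operator actually emits):

```yaml
# Illustrative Job pod spec fragment; volume/mount names are made up.
spec:
  template:
    spec:
      volumes:
        - name: k6-test-volume        # name: Required value
          emptyDir: {}
      containers:
        - name: k6
          volumeMounts:
            - name: k6-test-volume    # must match a declared volume name
              mountPath: /test        # mountPath: Required value
```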

This seems like it may be a regression of grafana/k6-operator issue #111 ("localFile feature is broken").

@one-miles, how exactly are you using the k6-operator? From what I can see, this probably isn't a regression; it just seems like the localFile fix hasn't made it into a stable release yet :disappointed:

So you probably have to either pull and build the latest version of the k6-operator repo locally, or use one of the Docker images built from recent commits on main: https://github.com/grafana/k6-operator/pkgs/container/operator
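If you go the build-from-source route, a rough sketch of the steps might look like this (the `make deploy` target is shown in the command later in this thread, but check the repo's README for your version — treat this as an outline, not an exact recipe):

```shell
# Sketch: deploy the k6-operator from the latest main.
# Assumes kubectl is already configured against your target cluster.
git clone https://github.com/grafana/k6-operator.git
cd k6-operator
make deploy

# Alternatively, point the deploy at one of the pre-built GHCR images:
# IMG=ghcr.io/grafana/operator:latest make deploy
```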

I'll ping the k6-operator maintainer next week about cutting a new release; they are currently on vacation, and I'm not sure how to do that myself or whether there are any blockers for it :sweat_smile: Sorry about that, and I hope this advice helps.

And if the latest main version of the k6-operator still results in the same problems, please comment here or in the issue and we’ll reopen it.

Hey @ned thanks for the response!

I am pulling (and building) from the latest main. FWIW, there is another report of this in the k6 #community-discussion slack channel as well.

Thanks, I’ve reopened the issue, but I don’t understand the problem well enough to fix it on my own quickly, sorry.

Hi @one-miles,
I was unable to reproduce this error with the latest from main: could it be that the image just wasn't being pulled, or that some cache was involved? Just in case, I've built a new latest image and also pushed an RC release, so you can deploy it like this now:

IMG=ghcr.io/grafana/operator:controller-v0.0.8rc1 make deploy

Please try it out and let me know if the issue persists.