Running k6 in a Kubernetes environment

Given that the ENTRYPOINT of the k6 Docker image is the k6 binary itself, I’m wondering whether there have been any documented efforts on how best to run k6 from a k8s cluster. I have a few ideas, but I’m finding nothing on t3h g00gles when I search for anyone else who has gone through this.

Ideally this will become part of our CI/CD pipeline, meaning that pull requests in our team’s codebase will trigger specific tests to be run in an environment that mirrors production from an infrastructure resource capacity standpoint (for us, that’s our UAT environment). Given that, here are my ideas:

  • Deploying k6 as a Job. This approach requires some mechanism for getting the test script onto the Job pod’s local filesystem for k6 to access.
  • Deploying k6 as a Deployment/ReplicaSet. This approach requires overriding the ENTRYPOINT of the k6 Docker image via the command: ["some-command"] syntax in the deployment YAML. Test scripts could be stored as ConfigMaps or Secrets and mounted into the pod as volumes. Once deployed, I would exec into the container and manually run k6 against the mounted scripts. This approach is definitely the most “hacky”.
  • Copying a pre-built k6 binary onto our running Jenkins instance and incorporating it into the build script for our UAT deployments. This might be the best solution, but it would definitely require me to beef up the resources on our Jenkins container to handle load tests with higher VU counts. It also adds some extra complexity around auto-scaling resources, since we don’t want a huge amount of RAM allocated 100% of the time (huge $$$).
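For the first idea, one way to get the test script onto the Job pod’s filesystem is to ship it as a ConfigMap and mount it as a volume. This is only a sketch; the resource names, the script, and the target URL are all hypothetical:

```yaml
# Hypothetical sketch: a k6 test script shipped as a ConfigMap,
# mounted into a Job whose container image has k6 as its ENTRYPOINT.
apiVersion: v1
kind: ConfigMap
metadata:
  name: k6-test-script          # assumed name
data:
  index.js: |
    import http from "k6/http";
    export default function () {
      http.get("https://test.k6.io/");  // assumed target
    }
---
apiVersion: batch/v1
kind: Job
metadata:
  name: k6-load-test            # assumed name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: k6
          image: loadimpact/k6:latest
          # ENTRYPOINT is already k6, so args are passed straight to it
          args: ["run", "/scripts/index.js"]
          volumeMounts:
            - name: scripts
              mountPath: /scripts
      volumes:
        - name: scripts
          configMap:
            name: k6-test-script
```

Because the script lives in a ConfigMap, updating a test is a `kubectl apply` away, with no image rebuild needed.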

Anyway, would love to hear everyone else’s thoughts on these approaches, and of their own approaches. Thanks!


I haven’t used Kubernetes in over a year, so I don’t feel very qualified to give advice here; I probably don’t know or remember a lot of the nice tricks. That said, here are some more ideas:

  • k6 can run or import scripts from HTTPS URLs: you can pass a URL directly to k6 run instead of a local file path. There are even shortcuts for GitHub and cdnjs URLs that expand to the same full raw-file URLs.
  • Instead of running your tests in Jenkins, you can use it to build custom Docker images based on k6’s image (i.e. FROM loadimpact/k6:latest) that also contain your load testing scripts. Then you can deploy those as Jobs in the k8s environment and execute them there.
  • Finally, instead of building a new Docker image every time you change your load testing scripts, you can build a single custom Docker image, again based on the official k6 one, with an ENTRYPOINT that is a simple shell script. That script can clone the git repo with your load testing scripts and then launch k6. The credentials for the git repo can be passed as a k8s Secret, and the image will only have to be updated when the base k6 image is updated.
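The last idea could be sketched like this; everything here is an assumption for illustration (the REPO_URL and SCRIPT environment variables are hypothetical, expected to be supplied via a k8s Secret or the pod spec):

```dockerfile
# Hypothetical sketch: custom image whose entrypoint clones the
# test-script repo at container start, then hands off to k6.
FROM loadimpact/k6:latest
RUN apk add --no-cache git
# Write a tiny entrypoint script that clones the repo and runs k6.
RUN printf '%s\n' \
      '#!/bin/sh' \
      'set -e' \
      'git clone --depth 1 "$REPO_URL" /k6-tests' \
      'exec k6 run "/k6-tests/${SCRIPT:-index.js}"' \
      > /entrypoint.sh && chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

The trade-off is a clone on every run, in exchange for never rebuilding the image when only scripts change.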

Thanks for your advice @ned! Here’s what I ended up doing (which I think is more or less your 2nd idea, and my 1st one above):

  • A Docker image for k6 is built in nearly the same way the loadimpact/k6 image is built: starting FROM golang:1-alpine, installing git, and go get-ing the loadimpact/k6 GitHub repo. Then go install is run, and that about does it. The built image gets tagged with the current date plus a counter (YYYYMMDD_00x) and pushed to my registry as k6. (I manually increment the counter each time I build it throughout the day.)

The above is basically a one-time thing. Once it’s done, I only need to repeat it if k6 updates and I want to grab the latest version. Next up…
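The date-plus-counter tagging scheme above could be scripted like so. REGISTRY is a hypothetical placeholder, and the build/push commands are commented out since they depend on your environment:

```shell
# Sketch of the YYYYMMDD_00x tagging scheme (names are assumptions).
REGISTRY="registry.example.com"   # hypothetical private registry
COUNTER="001"                     # bumped by hand for each build that day
TAG="$(date +%Y%m%d)_${COUNTER}"
echo "${REGISTRY}/k6:${TAG}"
# docker build -t "${REGISTRY}/k6:${TAG}" .
# docker push "${REGISTRY}/k6:${TAG}"
```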

  • A GitHub repo called k6-tests houses a /src folder containing any and all k6 tests, a Dockerfile, and a Jenkinsfile.

The Dockerfile in this repo uses the k6 image built above as its base (via FROM), and does the things in the 2nd part of the loadimpact/k6 Dockerfile, as well as adding the test scripts:

RUN apk add --no-cache ca-certificates
RUN cp /go/bin/k6 /usr/bin/k6 && \
    mkdir /k6-tests
ADD src/ /k6-tests/
WORKDIR /k6-tests
CMD ["run", "index.js"]

That’s the bulk of the work there. The Jenkinsfile is basically only responsible for building this image, tagging it, and pushing it to my private registry as k6-tests:<git sha> where <git sha> is the truncated Git commit hash.

And that’s pretty much it. In Jenkins I specify the git sha, the number of VUs (overridden by the test script, if it specifies VUs), and the path to the test script to run. I also have it set up so I can specify the CPU/RAM requests/limits for that specific Job run, in case I want to run a test with a lot of VUs. By default, though, it uses a pretty small amount (200Mi/200M). If I go above 5 VUs, or write a particularly heavy load test, I bump it up a bit.

Now, whenever I want to run an existing test or write a new one, I just throw the git sha of the commit into Jenkins, specify the VU count if necessary, adjust the resources, specify the load test to run, and click Deploy. It gets deployed as a Job, and the ENTRYPOINT is still just k6, with the args set to ["run","/k6-tests/<SpecifiedTest>.js"]. It outputs to an InfluxDB instance I have running in the same k8s namespace, which is then read by a Grafana instance running in my monitoring namespace. It’s working quite nicely. :slight_smile:
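A Job spec along those lines might look roughly like this. The image tag, test name, VU count, resource figures, and the InfluxDB service URL are all assumptions for illustration:

```yaml
# Hypothetical sketch of the Job that Jenkins templates and deploys.
apiVersion: batch/v1
kind: Job
metadata:
  name: k6-tests
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: k6-tests
          image: registry.example.com/k6-tests:abc1234   # <git sha> tag, hypothetical
          # ENTRYPOINT is still k6; these args are handed straight to it.
          args:
            - run
            - --vus=5                                    # VU count passed in from Jenkins
            - --out=influxdb=http://influxdb:8086/k6     # assumed in-namespace service
            - /k6-tests/SpecifiedTest.js                 # hypothetical test name
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
            limits:
              cpu: 200m
              memory: 200Mi
```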

So, thank you to the k6 team. You guys are doing awesome work. :+1: :smile:


Hey @ohheyitsbrian and @ned, this is what my Dockerfile looks like to build the Docker image for k6:

FROM golang:1-alpine as builder
ADD . .
RUN apk --no-cache add --virtual .build-deps git make build-base && \
    go get . && CGO_ENABLED=0 go install -a -ldflags '-s -w'

I did a docker build on this, but I get the following error:

can't load package: package no Go files in /go/src/
The command '/bin/sh -c apk --no-cache add --virtual .build-deps git make build-base && go get . && CGO_ENABLED=0 go install -a -ldflags '-s -w'' returned a non-zero code: 1

What changes did you make? Thanks!


Hey @Sharath, here’s what my Dockerfile looks like for building the k6 image:

FROM golang:1-alpine AS builder
RUN apk --no-cache add --virtual .build-deps git make build-base
RUN go get "github.com/loadimpact/k6"
WORKDIR $GOPATH/src/github.com/loadimpact/k6
ADD . .
RUN go get . && CGO_ENABLED=0 go install -a -ldflags '-s -w'

FROM alpine:3.9
COPY --from=builder /go/bin/k6 /usr/bin/k6

Thanks @ohheyitsbrian. The k6 image works now, but in the second Dockerfile, for k6-tests, the line RUN cp /go/bin/k6 /usr/bin/k6 still fails because the path does not exist. Also, why do we need to push the k6 image to our own public/private registry? I was wondering if we could use the loadimpact/k6 base image directly, like this:

FROM loadimpact/k6
RUN apk add --no-cache ca-certificates
RUN cp /go/bin/k6 /usr/bin/k6 && \   # FAILS in either case
    mkdir /k6-tests
ADD src/ /k6-tests/
WORKDIR /k6-tests
CMD ["run", "index.js"]

What do you think? Or am I doing something wrong here?

@Sharath, If you’re using the loadimpact/k6 image as your base, you just need to make a slight change:

Change the following line:
RUN cp /go/bin/k6 /usr/bin/k6 && mkdir /k6-tests
to remove the cp command entirely, so all you’re left with is:
RUN mkdir /k6-tests
You can do this because, if you look at the loadimpact/k6 image, you’ll notice it already copies the k6 binary to /usr/bin/k6 for you.
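Putting that together, a k6-tests Dockerfile based on the official image would look something like this sketch (the official image is Alpine-based and already ships the k6 binary at /usr/bin/k6, so no cp is needed):

```dockerfile
# Sketch: k6-tests image built directly on the official k6 image.
FROM loadimpact/k6
RUN apk add --no-cache ca-certificates
RUN mkdir /k6-tests
ADD src/ /k6-tests/
WORKDIR /k6-tests
# ENTRYPOINT is inherited (k6), so this runs `k6 run index.js` by default.
CMD ["run", "index.js"]
```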

To answer your other question, I push my k6 image to my private repo only because I’m using it as my base image instead of using loadimpact’s image, and so I need a place to save it.

You might ask why I’m building it myself instead of just using loadimpact’s image.

Short answer: I like having control :smile:
Slightly longer answer: I like to have control over as much of the build process as I can. Obviously I still need to import their source code, but outside of that I prefer to take over from that point on. This allows me to do things like making sure the image is using the latest version of alpine (you might notice I’m using alpine 3.9, rather than 3.7), and have the ability to change how the k6 binary is built if I ever need to do so.


Hey @ohheyitsbrian, makes total sense. Thanks a lot for the detailed answer. I’ve now adapted it to my own requirements too :smiley:


Hey @Sharath, could you share the deployment files you used to run k6 on Kubernetes? What ports did you open for the worker pods? Any help would be appreciated, thanks!

Hey @ohheyitsbrian, your example is perfect, thanks for that. I am not an expert with Go, so posting this question here… When I tried the above, line 3 (RUN go get "github.com/loadimpact/k6") is failing for me…
details below:
=> ERROR [builder 3/6] RUN go get "github.com/loadimpact/k6"    2.0s

[builder 3/6] RUN go get "github.com/loadimpact/k6":
#8 0.616 go: downloading github.com/loadimpact/k6 v0.36.0
#8 2.010 go get: parsing go.mod:
#8 2.010 	module declares its path as: go.k6.io/k6
#8 2.010 	but was required as: github.com/loadimpact/k6

executor failed running [/bin/sh -c go get "github.com/loadimpact/k6"]: exit code: 1

Any idea why?

Hi @Kesavan , that’s an easy one :slight_smile:

Since this thread was started a few years ago, k6 has changed where it keeps its Go module. It’s no longer at github.com/loadimpact/k6; it’s now at go.k6.io/k6.

If you change this line:
RUN go get "github.com/loadimpact/k6"
to this:
RUN go get "go.k6.io/k6"

You’ll also need to change the next line to reference the new folder path of the module in your filesystem. It should be something like WORKDIR $GOPATH/src/go.k6.io/k6
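With both changes applied, the builder stage would look roughly like this. This is a sketch, not a guaranteed build: newer k6 versions are module-based and may need a newer Go toolchain or a `go install`-style invocation instead of GOPATH-mode `go get`:

```dockerfile
# Sketch: builder stage updated for the go.k6.io/k6 module path.
FROM golang:1-alpine AS builder
RUN apk --no-cache add --virtual .build-deps git make build-base
RUN go get "go.k6.io/k6"
WORKDIR $GOPATH/src/go.k6.io/k6
RUN go get . && CGO_ENABLED=0 go install -a -ldflags '-s -w'

FROM alpine:3.9
COPY --from=builder /go/bin/k6 /usr/bin/k6
```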

I hope this helps!


Hi @ohheyitsbrian, thanks so much. It helped!