Passing environment variables in init code on the k6 operator

I’m using the k6-operator on a Kubernetes cluster. My script uses environment variables to know how many VUs to use.

import { SharedArray } from 'k6/data';

const VUS = __ENV.TEST_VUS || 10000;

export const options = {
    vus: VUS,
    duration: __ENV.TEST_DURATION || '300s',
    setupTimeout: '600s'
};

const sharedData = new SharedArray("user IDs and event IDs", function () {
    const arr = new Array(VUS);
    // ...
    return arr;
});

My config is this:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: <...>
spec:
  parallelism: 4
  runner:
    # ...
  script:
    configMap:
      name: <...>
      file: archive.tar
  arguments: --include-system-env-vars --env TEST_VUS=10000 --env TEST_DURATION=300s

I generated the k8s ConfigMap with

k6 archive --include-system-env-vars <my-file>

Whatever I do, though, I cannot get __ENV populated during the init stage on k8s. It works fine locally.
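
For reference, a probe like this at the top of the script (init context, i.e. before any exported function runs) prints the expected values locally, but not on k8s:

// Init-context probe; TEST_VUS and TEST_DURATION should come from the environment:
console.log(`TEST_VUS=${__ENV.TEST_VUS}, TEST_DURATION=${__ENV.TEST_DURATION}`);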

Any suggestions? :upside_down_face:

Hi @whatyouhide

Welcome to the community forum :wave:

I haven’t tested this exact case. However, have you tried passing environment variables the Kubernetes way? E.g.

# k6-resource-with-extensions.yml

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample-with-extensions
spec:
  parallelism: 4
  runner:
    env:
      - name: TEST_VUS
        value: "10000"

Let me know if that does not work and I’ll dig into it.

Cheers!

Hiya @eyeveebe :wave: Yep, sorry, I should have mentioned that. I tried the env: approach (a la k8s) as well, but no luck there either.

Hi @whatyouhide

Thanks for clarifying. The person I want to talk to will be back this week, and we’ll get back to you.

Cheers!

Hi @whatyouhide,

The example posted by @eyeveebe above should work actually, with minor adjustments. I can access env vars in init context with the following:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 1
  script:
    configMap:
      name: "env-test"
      file: "archive.tar"
  runner:
    env:
      - name: TEST_VUS
        value: "4"

The script is:

import http from 'k6/http';
import { check } from 'k6';

const VUS = __ENV.TEST_VUS || 10000;

export let options = {
  stages: [
    { target: 200, duration: '30s' },
    { target: 0, duration: '30s' },
  ],
};

export default function () {
  console.log("VUS is", VUS)
  const result = http.get('https://test-api.k6.io/public/crocodiles/');
  check(result, {
    'http response status code is 200': (r) => r.status === 200,
  });
}

And the ConfigMap was generated with k6 archive --include-system-env-vars test.js. Could you please try the same and describe the result?

Regarding your example, it works because parallelism is 1. However, if it is higher, the VUs are split into execution segments across the replicas, so the number of VUs per replica is the total divided by the parallelism.

Is there any way to get the number of VUs that are supposed to run on each replica (total VUs / replicas) without a new variable that copies the parallelism field in the CR (and without the two values getting out of sync)?

Hi @rgordill,

Is there any way to get the number of VUs that are supposed to run on each replica (total VUs / replicas)

Could you please clarify where you want to get them. In the script?

Now that I look at it, it seems there were two examples in this thread actually…

Just in case, I’d like to clarify this part:

Regarding your example, it works because parallelism is 1

Not exactly. In this case, there will be a difference in behaviour, depending on whether one uses .js or .tar.

Assuming I have a test.js with the following options:

const VUS = __ENV.TEST_VUS || 10000;

export const options = {
    vus: VUS,
    duration: '300s',
    setupTimeout: '600s'
};
1. If the ConfigMap has the plain test.js and the TestRun is:
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-sample
spec:
  parallelism: 2
  script:
    configMap:
      name: "env-test"
      file: "env-test.js"
  runner:
    env:
      - name: TEST_VUS
        value: "42"

Then there will be 42 VUs in total, equally split between the 2 runners. (This can be confirmed with the execution API; see the probe sketch below.)

2. If the ConfigMap has an archive.tar which was created with the following command:
TEST_VUS=4 k6 archive --include-system-env-vars test.js

And TestRun is:

apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: k6-sample
spec:
  parallelism: 2
  script:
    configMap:
      name: "env-test"
      file: "archive.tar"
  runner:
    env:
      - name: TEST_VUS
        value: "42"

Then there will be 4 VUs in total, split between the 2 runners, while at the same time the TEST_VUS env variable inside the script will be equal to 42.

3. If the ConfigMap has an archive.tar which was created with the following command:
TEST_VUS=4 k6 archive test.js

And the same TestRun as in case 2, then there will be 10000 VUs in total, split between the 2 runners. And TEST_VUS inside the script will be 42 again. (Without --include-system-env-vars, the archive step does not see TEST_VUS, so the default of 10000 got baked in.)

This is not behaviour defined by the k6-operator: the operator only passes the env vars to the pods. But if one uses k6 archive to create the script, the VUs are fixed at that step. This difference between configuration for k6 archive and k6 run is described in the docs here and here.
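
To observe all of this from inside the script, a small probe can help. This is a hedged sketch using only documented k6/execution properties; the numbers you see depend on which of the cases above you run:

import exec from 'k6/execution';

export default function () {
    // What this runner pod received at run time via spec.runner.env:
    console.log(`TEST_VUS env var: ${__ENV.TEST_VUS}`);
    // The consolidated vus option; for an archive.tar this was fixed
    // at `k6 archive` time, not at run time:
    console.log(`options.vus: ${exec.test.options.vus}`);
    // How many VUs this particular instance initialized (total / parallelism):
    console.log(`VUs on this instance: ${exec.instance.vusInitialized}`);
}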

Let me know if that helps or if additional info is needed!

The issue is related to extracting the number of VUs per instance. If I want 10000 VUs and parallelism is 4, when the operator splits them into segments there will be 2500 VUs per instance. In the init script, there is no way to know that 2500 figure unless you pass either it, or the number of replicas, in another variable.

That duplicates information and could lead to errors if the VUs or the parallelism are updated without recalculating the other value.

It would be great if some internal variable were passed, so the test could infer how many VUs that particular instance is supposed to run.

You can access that info from the script with the execution API.

For example, there’s a script:

import exec from 'k6/execution';

export const options = {
    vus: 4,
    iterations: 4,
};

export default function () {
    console.log(exec.instance.vusInitialized);
}

If this script is run with parallelism: 2, the value of vusInitialized would be 2 (i.e. 4 / 2).

init script

I’m not certain what “init script” means :thinking: If you mean the initializer pod, then it’s not executing the script at all; it just looks at the options. Normally, I’d say one shouldn’t need to care about the initializer pod.

If it’s the setup function, then it can also access the execution API.
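
For example, a minimal sketch of that (using only the documented exec.instance property; note that with the operator each runner executes its own setup, so each pod logs its own share):

import exec from 'k6/execution';

export function setup() {
    // Reports how many VUs this instance has initialized:
    console.log(`VUs on this instance: ${exec.instance.vusInitialized}`);
}

export default function () {
    // ... actual test logic ...
}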

I have a suspicion that I don’t fully understand the use case you’re describing. If you could please provide a code example, I’d appreciate that!

Hi.

Let’s use the following example:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-mqtt
spec:
  parallelism: 5
  script:
    configMap:
      name: mqtt-producer-1
      file: test.js
  runner:
    env:
      - name: REPLICAS
        value: "5"
...

If I want to init each VU in a replica, with something like:

export const options = {
    scenarios: {
        // This is the per-VU setup/init equivalent:
        vu_setup: {
            executor: 'per-vu-iterations',
            vus: VUsTotalCount,
            iterations: 1,
            maxDuration: `${vuInitTimeoutSecs}s`,
            gracefulStop: '0s',

            exec: 'vuSetup',
        },

Then I cannot use VUsTotalCount in the threshold that waits until all the VUs are initialised:

    thresholds: {
        // Make sure all of the VUs finished their setup successfully, so we can
        // ensure that the load test won't continue with broken VU "setup" data
        'vu_setups_done': [{
            threshold: `count==${VUsCount}`,
            abortOnFail: true,
            delayAbortEval: `${vuInitTimeoutSecs}s`,
        }],

and I have to do something like

const replicas = __ENV.REPLICAS;
const VUsTotalCount = __ENV.VUS;
const VUsCount = Math.floor(VUsTotalCount / replicas);

So I needed to add a REPLICAS env var with the same value as parallelism.
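
Put together, the workaround looks roughly like this (a sketch; parseInt is my addition, and REPLICAS has to be kept in sync with spec.parallelism by hand, which is exactly the pain point):

import { Counter } from 'k6/metrics';

const vuSetupsDone = new Counter('vu_setups_done');

// REPLICAS duplicates spec.parallelism and must be kept in sync manually:
const replicas = parseInt(__ENV.REPLICAS, 10);
const VUsTotalCount = parseInt(__ENV.VUS, 10);
const VUsCount = Math.floor(VUsTotalCount / replicas);

export const options = {
    scenarios: {
        vu_setup: {
            executor: 'per-vu-iterations',
            vus: VUsTotalCount,
            iterations: 1,
            exec: 'vuSetup',
        },
    },
    thresholds: {
        // Each runner only sees its own share of the VUs:
        'vu_setups_done': [`count==${VUsCount}`],
    },
};

export function vuSetup() {
    // ... per-VU initialization work ...
    vuSetupsDone.add(1);
}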

Hello @rgordill

I think I understood your use case and have tried to reproduce it myself. TBH, I couldn’t make it work reliably even with k6 standalone (k6 run test.js): it seems there’s something special going on with threshold evaluation that doesn’t allow abortOnFail in the first place (at least in one case of total failure). I.e. there’s another problem, unrelated to the number of VUs, when one tries to define a threshold like threshold: `count==${VUsCount}`. We’ll try to follow up on that with the k6 team.

What can work is the following setup:

    thresholds: {
        'vu_setup_failed{init:true}': [{
            threshold: `count==0`,
            abortOnFail: true,
            delayAbortEval: `${vuInitTimeoutSecs}s`,
        }],
    }

So instead of using a Counter of successful VU setups, define a Counter of failed VU setups. As a bonus, this threshold definition doesn’t depend on the number of VUs per runner: the expected count is always zero. So this approach should work both with standalone k6 and with the k6-operator.
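
For illustration, a minimal self-contained sketch of that idea, reusing the vu_setup scenario shape from your earlier snippet (delayAbortEval omitted for brevity):

import { Counter } from 'k6/metrics';

const vuSetupFailed = new Counter('vu_setup_failed');

export const options = {
    scenarios: {
        vu_setup: {
            executor: 'per-vu-iterations',
            vus: 4,
            iterations: 1,
            exec: 'vuSetup',
        },
    },
    thresholds: {
        'vu_setup_failed{init:true}': [{
            threshold: 'count==0',
            abortOnFail: true,
        }],
    },
};

export function vuSetup() {
    try {
        // ... per-VU initialization work goes here ...
    } catch (err) {
        vuSetupFailed.add(1, { init: 'true' });
        throw err;
    }
}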

Let me know what you think!

Hi, @olhayevtushenko.

I like the approach. However, I have another similar threshold for teardown:

        // Also make sure all of the VU teardown calls finished uninterrupted:
        'iterations{scenario:vu_teardown}': [`count==${VUsCount}`],

I got the code and the idea from Per-VU init lifecycle function · Issue #785 · grafana/k6 · GitHub

Thanks for the reference. This example was written quite some time ago, and there has been a major rewrite of thresholds since then, so maybe that’s why there’s the issue I mentioned :person_shrugging:

Yes, the built-in iterations metric is a “positive” Counter, so it needs the VU count to be used in this way. The workaround would be to define your own “negative” Counter for failures, or to pass the VU count from an external variable as you did initially.
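
Sketched for teardown, the same “negative” Counter pattern would look like this (assuming a per-vu-iterations scenario with exec: 'vuTeardown', analogous to the setup one):

import { Counter } from 'k6/metrics';

const vuTeardownFailed = new Counter('vu_teardown_failed');

export const options = {
    scenarios: {
        vu_teardown: {
            executor: 'per-vu-iterations',
            vus: 4,
            iterations: 1,
            exec: 'vuTeardown',
        },
    },
    thresholds: {
        // Zero expected failures, independent of VUs per runner:
        'vu_teardown_failed': ['count==0'],
    },
};

export function vuTeardown() {
    try {
        // ... per-VU cleanup work goes here ...
    } catch (err) {
        vuTeardownFailed.add(1);
        throw err;
    }
}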

I think this case could be handled better, in terms of UX. Grokking this led me to write up this feature request: Defining thresholds in `setup` · Issue #3424 · grafana/k6 · GitHub

IMHO we have circled back to the initial request: in that issue, what we needed was a per-VU setup/teardown function.

With that, defining a test would be much easier and wouldn’t need all this custom code to achieve such a simple feature.
