Not able to access environment variables in init phase (k6-operator)

As the title suggests, I am not able to access environment variables in the init phase when running distributed tests. It works fine locally.

Here is my CRD:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: perf-test
spec:
  parallelism: 2
  script:
    configMap:
      name: perf-test
      file: perf-test.js
  arguments: ' --tag test_run_id=Wed_Nov__1_08:53:42_PDT_2023'
  runner:
    env:
      - name: TEST_TYPE
        value: "soft"

Here is the init code:

import http from 'k6/http';
import { sleep, group, check } from 'k6';

// Stage profiles, selected via the TEST_TYPE environment variable
const stage_configs = {
    soft: [
        { duration: '30s', target: 10 },
        { duration: '14m', target: 10 },
        { duration: '30s', target: 0 }
    ],
    medium: [
        { duration: '30s', target: 160 },
        { duration: '14m', target: 160 },
        { duration: '30s', target: 0 }
    ],
    hard: [
        { duration: '30s', target: 2000 },
        { duration: '14m', target: 2000 },
        { duration: '30s', target: 0 }
    ]
};

export const options = {
    discardResponseBodies: true,
    stages: []
};

const test_type = __ENV.TEST_TYPE;
options.stages = stage_configs[test_type];

Thank you in advance!

Hi @rajmehta53 !

On my cluster, your example produces this error message in the k6-operator-controller-manager:

ERROR	controllers.K6	Parallelism argument cannot be larger than maximum VUs in the script	{"namespace": "default", "name": "perf-test", "reconcileID": "5c6d8c57-974f-400e-ad40-9d7d92f2f944", "maxVUs": 1, "parallelism": 2, "error": "number of instances > number of VUs"}

If I understand correctly, this is because the k6-operator runs an initializer job before the test run jobs. The initializer job is basically a k6 inspect run of your script. Although k6 inspect accepts environment variables (via the -e flag), the k6-operator doesn't pass that flag to k6 inspect (see the source here); I don't know whether that is intentional. Because of this, environment variables are unavailable in the initializer job. So when you write:

const test_type = __ENV.TEST_TYPE;
options.stages = stage_configs[test_type];

__ENV.TEST_TYPE is undefined, options.stages also becomes undefined, and the k6-operator doesn't even start the test, failing with the error message above.
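
Spelled out, this is roughly what the initializer ends up evaluating (a sketch with annotations; the maxVUs value comes from the error message above):

const test_type = __ENV.TEST_TYPE;          // undefined: no env vars reach the initializer job
options.stages = stage_configs[test_type];  // stage_configs[undefined] -> undefined
// With no stages (and no vus/duration) set, maxVUs ends up as 1,
// which is smaller than parallelism: 2, hence the error above.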

Workaround:
I don't know whether this workaround produces a fully correct result, but you can trick the initializer job with a fallback stages value that needs more VUs than the configured parallelism. Once the k6-operator starts the actual test run jobs, the environment variables are available.

Long story short, try this line:

const test_type = __ENV.TEST_TYPE || 'soft';

instead of this:

const test_type = __ENV.TEST_TYPE;
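
A slightly more defensive variant of the same idea (just a sketch, not something required by the k6-operator) also guards against a TEST_TYPE value that has no entry in stage_configs:

// Fall back to 'soft' when TEST_TYPE is unset (as in the initializer job)
// or set to a value that stage_configs doesn't know about. Whatever the
// fallback is, its targets must be at least as large as spec.parallelism,
// otherwise the initializer still fails the maxVUs check.
const test_type = __ENV.TEST_TYPE in stage_configs ? __ENV.TEST_TYPE : 'soft';
options.stages = stage_configs[test_type];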

Hey @bandorko !
Thank you for your response. I already tried the workaround you mentioned, but it always falls back to the 'soft' stages, since __ENV.TEST_TYPE is always empty.

This defeats the purpose of passing the test type as an environment variable. I wanted to decouple this so that we don’t need to change anything in the test script.

@rajmehta53

In my experience, with this workaround the initializer job thinks the test will run with 10 VUs, but the actual test run uses the higher VU count (2x80) if you run it with the medium TEST_TYPE.
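
A quick way to check this on your side (a sketch; this console.log call is not part of the original script) is to log the variable from the init context:

// Illustrative addition to the top of perf-test.js: the initializer pod
// should log "undefined", while the runner pods should log the value set
// in spec.runner.env.
console.log(`TEST_TYPE seen in init context: ${__ENV.TEST_TYPE}`);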

Interesting, but it didn't work for me. I set TEST_TYPE to medium and parallelism to 1. The end-of-test results on the pod still showed 10 VUs instead of the increased number for medium.

@rajmehta53
After a test run finished with the 'medium' TEST_TYPE, I have 4 pods:

NAME                          READY   STATUS      RESTARTS   AGE
perf-test-1-txp22             0/1     Completed   0          4m17s
perf-test-2-klr86             0/1     Completed   0          4m17s
perf-test-initializer-6kzq6   0/1     Completed   0          4m30s
perf-test-starter-66phw       0/1     Completed   0          3m55s

The logs:

> kubectl logs perf-test-1-txp22                                                                                                                                              

     data_received........: 0 B    0 B/s
     data_sent............: 0 B    0 B/s
     iteration_duration...: avg=256.99µs min=3.05µs med=3.87µs max=1.08s p(90)=7.53µs p(95)=9.25µs
     iterations...........: 272263 13555.7368/s
     vus..................: 28     min=0        max=80
     vus_max..............: 80     min=80       max=80
> kubectl logs perf-test-2-klr86                                                                                                                                              
     data_received........: 0 B    0 B/s
     data_sent............: 0 B    0 B/s
     iteration_duration...: avg=174.2µs min=2.86µs med=3.85µs max=1.09s p(90)=7.52µs p(95)=9.12µs
     iterations...........: 295271 14701.22307/s
     vus..................: 27     min=0         max=80
     vus_max..............: 80     min=80        max=80
> kubectl logs perf-test-starter-66phw
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   238  100   151  100    87  24257  13975 --:--:-- --:--:-- --:--:-- 30200
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   238  100   151  100    87  82739  47671 --:--:-- --:--:-- --:--:--  147k
{"data":{"type":"status","id":"default","attributes":{"status":4,"paused":false,"vus":0,"vus-max":80,"stopped":false,"running":false,"tainted":false}}}{"data":{"type":"status","id":"default","attributes":{"status":4,"paused":false,"vus":0,"vus-max":80,"stopped":false,"running":false,"tainted":false}}}
> kubectl logs perf-test-initializer-6kzq6
{
  "paused": null,
  "vus": null,
  "duration": null,
  "iterations": null,
  "stages": [
    {
      "duration": "3s",
      "target": 10
    },
    {
      "duration": "14s",
      "target": 10
    },
    {
      "duration": "3s",
      "target": 0
    }
  ],
  "scenarios": {
    "default": {
      "executor": "ramping-vus",
      "startTime": null,
      "gracefulStop": null,
      "env": null,
      "exec": null,
      "tags": null,
      "startVUs": null,
      "stages": [
        {
          "duration": "3s",
          "target": 10
        },
        {
          "duration": "14s",
          "target": 10
        },
        {
          "duration": "3s",
          "target": 0
        }
      ],
      "gracefulRampDown": null
    }
  },
  "executionSegment": null,
  "executionSegmentSequence": null,
  "noSetup": null,
  "setupTimeout": null,
  "noTeardown": null,
  "teardownTimeout": null,
  "rps": null,
  "dns": {
    "ttl": null,
    "select": null,
    "policy": null
  },
  "maxRedirects": null,
  "userAgent": null,
  "batch": null,
  "batchPerHost": null,
  "httpDebug": null,
  "insecureSkipTLSVerify": null,
  "tlsCipherSuites": null,
  "tlsVersion": null,
  "tlsAuth": null,
  "throw": null,
  "thresholds": null,
  "blacklistIPs": null,
  "blockHostnames": null,
  "hosts": null,
  "noConnectionReuse": null,
  "noVUConnectionReuse": null,
  "minIterationDuration": null,
  "ext": null,
  "summaryTrendStats": [
    "avg",
    "min",
    "med",
    "max",
    "p(90)",
    "p(95)"
  ],
  "summaryTimeUnit": null,
  "systemTags": [
    "check",
    "error",
    "error_code",
    "expected_response",
    "group",
    "method",
    "name",
    "proto",
    "scenario",
    "service",
    "status",
    "subproto",
    "tls_version",
    "url"
  ],
  "tags": {
    "test_run_id": "Wed_Nov__1_08:53:42_PDT_2023"
  },
  "metricSamplesBufferSize": null,
  "noCookiesReset": null,
  "discardResponseBodies": true,
  "totalDuration": "50s",
  "maxVUs": 10
}

So for me, the initializer pod shows 10 VUs, but the starter and the runners show the 160 VUs of the medium config (2x80).

Hey @bandorko !
That actually worked. Thank you so much for your help, really appreciate it.
