Hi,
I recently started looking into k6-operator as my company is moving our infrastructure to k8s. My test cases are stored in GitLab as the single source of truth, and until now they have been executed either on developers' local laptops or via a Jenkins job for automation.
Going forward with k8s, and following the documentation, the test scripts have to be added as a ConfigMap or placed inside a PersistentVolume.
Given that all my test cases are stored in a GitLab repo, and I do not plan to move them away from that repo as the single source of truth, what's the best way to transition my test cases so they can be executed on k8s via k6-operator? Also, the test case in the documentation is quite simple and only involves a single .js file, while in reality a test case can be complex and involve many scripts…
I am looking for some guidance on how to transition the existing test cases to k8s. Thanks.
Hi @zzhao2022 !
If I were you, I would create an archive from the test script with k6 archive <script.js>, maybe as part of a release process (see the Archive Command docs and the CI sketch below). After that you can put the created tar file into a ConfigMap:
kubectl create configmap loadtest --from-file archive.tar
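Since your scripts already live in GitLab, the archive could be built and published by a CI job. Here is a rough sketch of what that might look like in .gitlab-ci.yml; the job names, image tags, entry script path and the runner's kubectl access are all assumptions you would need to adapt:

# sketch only: assumes the CI runner can reach the target cluster with kubectl
build-archive:
  stage: build
  image:
    name: grafana/k6:latest
    entrypoint: [""]
  script:
    # bundles the entry script plus everything it imports/opens into archive.tar
    - k6 archive product/test-case-1/main.js
  artifacts:
    paths:
      - archive.tar

publish-archive:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  needs: ["build-archive"]
  script:
    # create or update the ConfigMap that the K6 resource points at
    - kubectl create configmap loadtest --from-file=archive.tar --dry-run=client -o yaml | kubectl apply -f -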
Then you can use it like this:
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: loadtest
spec:
  parallelism: 2
  script:
    configMap:
      name: loadtest
      file: archive.tar
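You can then start the test by applying the resource (the filename here is just an example):

kubectl apply -f k6-loadtest.yaml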
I hope it helps.
Hi @bandorko,
Thanks for your suggestions. I don't think just archiving the test script is going to work in my case. I have a standard structure for organizing my scripts, something like the following:
product
- test case 1
  - config - contains .json files to manage the test case configuration per environment, like thresholds, executor, scenarios, etc.
  - scenarios - contains .js wrapper scripts per scenario that call functions stored in the scripts folder below
  - scripts - contains functional scripts
  - data - contains testing data
  - main.js - the entry point
- test case 2
- Common - contains functions/modules that are shared across products
Here is an example of my main.js
import { authenticateCRM } from "../../../shared/modules-k6/authenticateCRM.js";
export { runAsyncSignatureRequest } from "./scenarios/asyncSignatureRequest.js";
export { baselineChannelCreation } from "./scenarios/baselineChannelCreation.js";
globalThis.crmBase = __ENV.crmBase;
globalThis.crmVersion = __ENV.crmVersion;
...
globalThis.configFile = __ENV.configFile || "conf-di.json";
const testConfig = JSON.parse(open(`./config/${globalThis.configFile}`));
// combine the above with options set directly
export const options = Object.assign(
  {
    insecureSkipTlsVerify: false,
  },
  testConfig
);

export function setup() {
  const [_, SFSession, SFEndpoint] = authenticateCRM(
    globalThis.username,
    globalThis.password
  );
  return {
    sfSession: SFSession,
    sfEndpoint: SFEndpoint,
  };
}

export default function () {
  console.log("No scenario in test.json. Executing default function...");
}
As per my understanding, when running on k8s the configuration is going to be handled by the k6-operator instead, so it requires code changes in my test case. I am trying to figure out the best way to keep the necessary changes to a minimum…
@zzhao2022
The k6 archive command collects all the imported scripts and all the data files used by the test script into the tar file, so I don't see why it wouldn't work for you.
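One more thing worth checking: your main.js relies on __ENV variables (crmBase, configFile, and so on). As far as I know, you can still pass those when running through the operator, for example via the runner's env section of the K6 resource; the values below are just placeholders:

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: loadtest
spec:
  parallelism: 2
  script:
    configMap:
      name: loadtest
      file: archive.tar
  runner:
    env:
      - name: crmBase
        value: "https://crm.example.com"
      - name: configFile
        value: "conf-di.json"

Alternatively, as far as I know, variables passed with -e when you run k6 archive are stored in the archive itself, so you could bake environment-specific configuration in at archive time instead.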