Writing logs to persistent storage

Hi all,
I am running the k6 operator in Argo Workflows with multiple instances. I want to write/pipe all the logs generated by the test pods to persistent storage so that they can later be ingested into Graylog.

Any suggestions on how to achieve this?

name: run-k6
inputs: {}
outputs: {}
metadata: {}
resource:
  action: create
  manifest: |
    apiVersion: k6.io/v1alpha1
    kind: TestRun
    metadata:
      name: load-test
    spec:
      parallelism: {{workflow.parameters.parallelism}}
      volumes:
        - name: k6-logs
          persistentVolumeClaim:
            claimName: k6-logs-pvc
      script:
        configMap:
          name: {{workflow.parameters.configMapName}}
          file: {{workflow.parameters.scriptFileName}}
      arguments: --tag testid={{workflow.parameters.testId}} --log-format raw --log-output=file=/var/log/kaarya/loadtest.log
      runner:
        image: {{workflow.parameters.k6Image}}
        imagePullSecrets:
          - name: dockercred
        imagePullPolicy: IfNotPresent
        env:
          - name: K6_OUT
            value: influxdb=<>

I need to mount /var/log/kaarya/loadtest.log to a host volume.

Hi @vipulkhullar

I’m not quite sure I got what the problem is, but if the issue is instructing k6 to write its log output to a file, you could use environment variables, e.g. the K6_LOG_OUTPUT option.
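For illustration, here is a minimal sketch of what that could look like in the TestRun manifest. It reuses the log path from your snippet; K6_LOG_OUTPUT is the environment-variable form of the --log-output CLI flag:

```yaml
spec:
  runner:
    env:
      # Equivalent to passing --log-output=file=/var/log/kaarya/loadtest.log
      - name: K6_LOG_OUTPUT
        value: file=/var/log/kaarya/loadtest.log
```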

Will this work for you?

Cheers!

Hi,
I am using this YAML in the Argo workflow to run the k6 test:

name: run-k6
inputs: {}
outputs: {}
metadata: {}
resource:
  action: create
  manifest: |
    apiVersion: k6.io/v1alpha1
    kind: TestRun
    metadata:
      name: load-test
    spec:
      parallelism: {{workflow.parameters.parallelism}}
      script:
        configMap:
          name: {{workflow.parameters.configMapName}}
          file: {{workflow.parameters.scriptFileName}}
      arguments: --tag testid={{workflow.parameters.testId}} --log-output=file=./k6.log
      runner:
        image: {{workflow.parameters.k6Image}}
        imagePullSecrets:
          - name: dockercred
        imagePullPolicy: IfNotPresent
        env:
          - name: K6_OUT
            value: influxdb=http://admin:admin@<hidden>

and I am getting the following error:

time="2024-07-30T11:51:08Z" level=error msg="failed to open logfile /home/k6/k6.log: open /home/k6/k6.log: permission denied"

Hi @vipulkhullar, in this case you’ll need to mount a volume for the logs, as is usually done in Kubernetes, with volumes and volumeMounts. In the k6-operator context, and for TestRun specifically, those fields go under spec.runner.volumes and spec.runner.volumeMounts respectively.
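As a sketch, the relevant part of the TestRun manifest could look like this, assuming the k6-logs-pvc claim from your earlier snippet and a log file under /var/log/kaarya:

```yaml
spec:
  runner:
    volumes:
      - name: k6-logs
        persistentVolumeClaim:
          claimName: k6-logs-pvc    # assumes this PVC already exists
    volumeMounts:
      - name: k6-logs
        mountPath: /var/log/kaarya  # directory the log file is written into
```

With the volume mounted, the runner pod has write access to /var/log/kaarya, so --log-output=file=/var/log/kaarya/loadtest.log can create the file.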

Hope that helps!


It worked, thanks!