k6-operator output to Grafana Prometheus

Hi, I’m trying to run some browser tests on k6-operator and output the results to Grafana Cloud Prometheus. Following the Prometheus remote write docs, the grafana/k6-operator GitHub README, and the Grafana Cloud Prometheus docs, I have the following resource file:

```yaml
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 1
  runner:
    image: wjcheng1/k6-prometheus-browser:latest
    resources:
      limits:
        memory: "1Gi"
        cpu: "250m"
    envFrom:
      - configMapRef:
          name: prometheus-config
      - secretRef:
          name: prometheus-secrets
  arguments: -o xk6-prometheus-rw
  script:
    configMap:
      name: test
      file: test.js
```

where prometheus-config contains


and prometheus-secrets contains


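(The actual contents were redacted above. For reference, a typical prometheus-config/prometheus-secrets pair for the xk6 Prometheus remote-write output would look roughly like the following sketch; the endpoint, username, and API key are placeholders, and the `K6_PROMETHEUS_RW_*` variable names come from the extension's docs:)

```yaml
# Hypothetical example; real values redacted.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  K6_PROMETHEUS_RW_SERVER_URL: https://<grafana-cloud-prometheus-host>/api/prom/push
---
apiVersion: v1
kind: Secret
metadata:
  name: prometheus-secrets
stringData:
  K6_PROMETHEUS_RW_USERNAME: "<instance-id>"
  K6_PROMETHEUS_RW_PASSWORD: <grafana-cloud-api-key>
```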
but I’m getting errors like:

```
time="2023-07-10T20:53:18Z" level=error msg="Failed to send the time series data to the endpoint" error="got status code: 429 instead expected a 2xx successful status code" output="Prometheus remote write"
```
Any idea what I’m doing wrong?

Hi @wjc

A 429 status code means “too many requests”: the Prometheus instance is hitting its limits and is pushing back, not accepting these write requests.

I discussed this topic internally with the k6 developer team and, at this point, we do not think it’s related to the operator since you are using parallelism of 1.

Can you share a bit more information so we can further dig into this?

  • Is it possible to see a (sanitized) version of the test.js script? We’re especially interested in things that generate metrics like “custom metrics”, requests, etc.
    • We can then check how many requests you are pushing and how many time series are being generated, to see if there are any improvements we can suggest.
    • If we don’t see anything in the script that can be improved, it might be time to look at the Prometheus side.
  • Is this Prometheus instance used only for k6, or is it shared with other services?
  • Are you testing this solely with Grafana Cloud, or is a local Prometheus instance also causing the same issues?
  • Have you tried using the built-in experimental module instead of the extension?
  • Can you tell us what version of k6 you are running? Is it one of the latest?
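For reference, switching to the built-in experimental output mentioned above (available since k6 v0.42, so no extension build is needed for the output) would just change the arguments in the K6 resource, e.g.:

```yaml
# In the K6 resource spec (assumes k6 >= v0.42 in the runner image):
arguments: -o experimental-prometheus-rw
```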

This issue is very interesting, as it might lead to us considering a retry mechanism for 429 errors, to ride out temporary situations or traffic spikes on the Prometheus side. However, it’s best to look into the root cause first and take it from there.


So the 429 errors seem to have gone away yesterday. I don’t think I changed anything, so maybe it was just a network issue a few days ago?

So the test.js script is something like this:

```javascript
import http from 'k6/http';
import { check } from 'k6';
import { chromium } from 'k6/experimental/browser';

export default async function () {
    let data = { machine_token: 'xxxxx', client: 'terminus' };
    let token = http.post('https://url-to-get-token', JSON.stringify(data), {
        headers: { 'Content-Type': 'application/json', 'User-Agent': 'Terminus/1.0' },
    });

    const browser = chromium.launch({
        args: ['no-sandbox'],
        headless: true,
        timeout: '120s',
    });
    const context = browser.newContext({ ignoreHTTPSErrors: true });
    context.setDefaultNavigationTimeout(15000); // 15s
    context.addCookies([{
        name: 'session',
        value: token.json().session,
        domain: 'test-url',
        path: '/',
        secure: true,
        httponly: true,
    }]);
    const page = context.newPage();

    try {
        const pg = await page.goto('https://test-url');

        check(page, {
            'Title visible': page.locator('h1.site-heading').isVisible(),
            'Title correct': page.locator('h1.site-heading').textContent() == 'Next Wp Prod 627',
            'Live history section': page.locator('div#live-build-history-table').isVisible(),
            'Multidev section': page.locator('div#multidev-branch-table').isVisible(),
            'PR section': page.locator('div#pull-request-table').isVisible(),
            'Live deployment URL correct': page.locator('div.mb-4 a#deployment-url').textContent() ==
                'test-url ',
        });
    } catch (e) {
        console.log('e--->: ' + e);
        page.screenshot({ path: 'er.png' });
    } finally {
        page.screenshot({ path: 'final.png' });
        page.close();
        browser.close();
    }
}
```
The Prometheus instance I’m writing to is the one from my free Grafana Cloud account. I’m using an image I built with the following Dockerfile, since I needed the browser installed as well:

```dockerfile
# Build the k6 binary with the extensions
FROM golang:1.19-bullseye AS builder

RUN go install go.k6.io/xk6/cmd/xk6@latest
RUN xk6 build --output /k6 \
    --with github.com/grafana/xk6-output-prometheus-remote@latest \
    --with github.com/grafana/xk6-browser

FROM debian:bullseye
RUN apt-get update && \
    apt-get install -y chromium

COPY --from=builder /k6 /usr/bin/k6
USER root
```



If there’s any issue with that, please do let me know.
