K6 Browser generated high unique time series data

Hello k6 team!

Thanks for always assisting on my inquiries.

I have encountered a new problem with my load test, where I’m getting this error:

WARN[3548] The test has generated metrics with 200007 unique time series, which is higher than the suggested limit of 100000 and could cause high memory usage. Consider not using high-cardinality values like unique IDs as metric tags or, if you need them in the URL, use the name metric tag or URL grouping. See https://k6.io/docs/using-k6/tags-and-groups for details.  component=metrics-engine-ingester

My test is solely k6 browser driven, but the page does heavy polling behind the scenes (each poll usually forms a unique URL), and there is a lot of marketing / analytics traffic on the network as well. I reckon this is why I’m hitting this error.

My test is also simple: it logs in via the browser and sits on the homepage for 3 hours to generate significant load (due to the very heavy polling behind the scenes).

My question is, how do I avoid this error? It uses up so much memory that my ECS tasks are hitting their memory limits, and as a result the k6 browser is crashing (a lot of Target Closed and Abnormal Closure errors). This has very quickly become an expensive trial-and-error exercise in ECS.

I’m using scenarios in my options, and I formed it like this:

export const myLoadScenario = {
  tags: {
    name: 'myDashboard'
  },
  discardResponseBodies: true,
  systemTags: ['proto', 'method', 'status', 'name', 'group', 'check'],
  scenarios: {
    myLoadScenario: {
      startTime: `${randomStartTime}s`,
      executor: 'ramping-vus',
      stages: [
        { duration: `5m`, target: 5 }, 
        { duration: `10m`, target: 10 },
        { duration: `30m`, target: 10 }
      ],
      gracefulStop: '4h', // this ensures that the VU script gets executed and is not interrupted
      tags: { name: 'myDashboard' },
      exec: 'myDashboard'
    }
  },
  thresholds: myCustomThreshold
};
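
The myDashboard function the scenario executes looks roughly like this (a simplified sketch using the k6/browser async API; the URL, selectors, and credentials are placeholders, and my real script has more setup):

import { browser } from 'k6/browser';
import { sleep } from 'k6';

export async function myDashboard() {
  const page = await browser.newPage();
  try {
    // Log in via the UI (placeholder URL and selectors).
    await page.goto('https://my-app.example.com/login');
    await page.locator('#username').type('loadtest-user');
    await page.locator('#password').type('********');
    await Promise.all([
      page.waitForNavigation(),
      page.locator('button[type="submit"]').click(),
    ]);

    // Sit on the homepage so the background polling keeps generating load.
    sleep(3 * 60 * 60);
  } finally {
    await page.close();
  }
}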

I have observed this error around the 30-40 minute mark, perhaps because that’s the point where the accumulated time series data becomes that large - but this also means that every failed experiment is slow and painful. :frowning:

Thanks in advance!

Hi @icedlatte :wave:

This happens because of how time series metrics are generated in k6. Because the URL of the request is used as a label for the time series, each different URL essentially creates a new time series. If, as you say, your test performs many requests with unique URLs, that will eventually lead to the memory problems you are experiencing.

In k6, the HTTP module has ways to fix this; see the http.url helper and URL grouping.
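For reference, in a protocol-level test that would look roughly like this (the /items/${id} endpoint is just an illustration):

import http from 'k6/http';

export default function () {
  const id = Math.floor(Math.random() * 100000);

  // Option 1: the http.url helper sets the "name" tag to the template string,
  // so all of these requests fall under a single time series.
  http.get(http.url`https://test.k6.io/items/${id}`);

  // Option 2: explicit URL grouping by setting the "name" tag yourself.
  http.get(`https://test.k6.io/items/${id}`, {
    tags: { name: 'https://test.k6.io/items/{id}' },
  });
}

Either approach collapses all of those unique URLs under a single name label on the http_req_* metrics.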
Unfortunately, in k6 browser this is an unsolved problem, as we don’t have a method that would allow grouping time series under a single label. This is a rare issue in browser tests, but nevertheless possible, so we will create an issue to try to address it.

You can track URL Grouping/Aggregation · Issue #371 · grafana/xk6-browser · GitHub for improvements on this.
Thank you.

Thanks for the response, @Daniel.J. I will track the k6 browser URL grouping issue and I’m looking forward to a solution for this. Cheers!