[StatsD] How to periodically emit and flush k6 data to CloudWatch

Hello k6 team,

Kindly guide us on how to regularly emit and flush k6 stats to CloudWatch at a 5-second interval. This would greatly help us reduce k6 memory usage per ECS container and sustain longer runs.

This is our statsd configuration:

{
  "metrics": {
    "namespace": "e2e-k6-dev",
    "force_flush_interval": 5,
    "metrics_collected": {
      "statsd": {
        "service_address": ":8125",
        "metrics_collection_interval": 5,
        "metrics_aggregation_interval": 5,
        "allowed_pending_messages": 100000
      }
    }
  }
}
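For context, the statsd listener configured above receives plain-text StatsD lines over UDP on service_address. The sketch below is only an illustration of that wire format (the metric name and the "k6." prefix are assumptions based on k6's default StatsD namespace, not taken from this thread):

```python
import socket

# A StatsD line is "<bucket>:<value>|<type>"; counters use "c", gauges "g",
# timers "ms". The bucket name below is illustrative.
def statsd_line(bucket: str, value: float, metric_type: str) -> str:
    return f"{bucket}:{value}|{metric_type}"

line = statsd_line("k6.http_reqs", 1, "c")
print(line)  # k6.http_reqs:1|c

# Fire-and-forget UDP send to the agent's listener on :8125; UDP does not
# raise an error even if nothing is listening on the port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode(), ("127.0.0.1", 8125))
sock.close()
```

Because the transport is UDP, the agent's allowed_pending_messages setting is what bounds how many of these lines it will queue before dropping.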

Thanks in advance!

Hi @icedlatte

Have you tried the option K6_STATSD_PUSH_INTERVAL? It defaults to 1 second; you can adjust it to 5 seconds if required.

Let me know if that is not what you were after or if it does not work :bowing_woman:

Cheers!

Thanks for your response @eyeveebee !

Do we use this with k6 run, like so:

K6_BROWSER_ENABLED=true K6_STATSD_PUSH_INTERVAL=5s k6 run dist/login.test.js

But if the default is already 1s, then we don't really need to configure this during the k6 run, since it's already emitting stats frequently. Is my assumption correct?

Also, does pushing data flush the k6 process memory? For example, if we push every 5s:

  1. at t – k6 memory is 10 MB
  2. at t+4s – k6 memory is 12 MB
  3. at t+5s – we push stats to CloudWatch – does this flush k6 memory?
  4. at t+6s – k6 memory is reset, say to less than it was at t+4s

Thank you!

Hi @icedlatte

Apologies for the delay; I was discussing this internally to better understand the memory consumption.

But if the default is already 1s then we don’t really need to configure this during the k6 run since it’s emitting stats frequently. Is my assumption correct?

Correct. If the default value works for you, there is no need to specify it.

does pushing data flush the k6 process memory?

This is difficult to predict, since memory allocation is handled by Go's garbage collector. What you describe is roughly what we expect to happen. However, since the test keeps generating new metrics as a stream, some new metrics are already buffered again right after the flush, so memory usage won't drop linearly.
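As a rough mental model (illustrative only, not k6's actual internals, and ignoring the garbage collector's own timing), each flush releases the buffered samples while new samples keep arriving, so memory dips on a push but never sits at zero:

```python
# Toy model of a metrics buffer that is emptied on each periodic flush.
buffer = []

def record(sample):
    """A sample produced by the running test is appended to the buffer."""
    buffer.append(sample)

def flush():
    """Hand all buffered samples to the output and release them."""
    sent = list(buffer)
    buffer.clear()
    return sent

for t in range(10):                 # samples arriving between flushes
    record(("http_req_duration", t))

sent = flush()
print(len(sent), len(buffer))       # 10 0 -- the push empties the buffer

record(("http_req_duration", 99))   # a new sample lands right after the flush
print(len(buffer))                  # 1 -- usage climbs again immediately
```

This is why the memory profile looks like a sawtooth rather than a flat line: the push interval bounds how much is buffered at once, but it cannot stop new samples from accumulating between pushes.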

If you see memory issues on an extended test run, have a look at Running large tests. Running with --no-thresholds and --no-summary will reduce the memory and CPU footprint, similar to what we recommend for the cloud output.

I hope this helps.

Cheers!