Metrics dimensions when running multiple replicas of k6

Hi,

We’re running k6 tests via AWS Batch jobs that execute in containers on ECS. We run a minimum of two replicas; the goal is to load test our application. We send the metrics to a statsd-exporter so they end up in Prometheus. The connection is all set up and working with no issues; we were able to create ~40k connections.

The problem we’re facing is the metrics. They have no dimensions, and all the containers send the same metric names, so if we receive k6.data_received with one value, the next container sends the same metric with a different value and overwrites it. We also can’t tell how many VUs we have globally; we can only see the last value persisted, from a random container.

We tried adding the K6_SYSTEM_TAGS env variable with ip and vu included, which didn’t change the metrics. We also tried adding custom tags with --tag BATCHJOBID=$AWS_BATCH_JOB_ID plus a second --tag, but that did not work either.
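
Roughly what we tried, for reference (a sketch from memory; the system-tag list is abbreviated and we currently use the plain statsd output):

# attempt 1: enable the ip and vu system tags on top of the usual defaults
export K6_SYSTEM_TAGS="proto,status,method,url,name,group,check,error,ip,vu"
k6 run --out statsd ./socket-test.js

# attempt 2: attach a per-container custom tag on the command line
k6 run --out statsd --tag BATCHJOBID=$AWS_BATCH_JOB_ID ./socket-test.js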

The best solution would be a unique identifier per k6 container on each metric received, e.g. k6.data_received.b5930e29-46b7-468c-bb43-9f8b2226f504, where b5930e29-46b7-468c-bb43-9f8b2226f504 is the container identifier inside ECS.

Any RTFM or suggestions welcome.

Hi @puck
Are you using the statsd output? That output doesn’t understand or send tags, as … well, there are no tags in statsd. As far as I can see, dogstatsd is supported by statsd-exporter, so maybe try the datadog output, which does send tags :wink:
This should fix the other problem too … hopefully. Good luck!
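
Something like this should do it (a rough sketch; the address env var and default port are from memory, so double-check the output docs for your k6 version):

# emit dogstatsd-style metrics (which carry tags) instead of plain statsd
K6_DATADOG_ADDR=my-statsd-exporter:9125 k6 run --out datadog ./socket-test.js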

Hi @mstoykov,
Yeah, I switched to the datadog output and things are looking much better; this seems to be the solution. I now have labels in Prometheus thanks to the statsd-exporter. I’ll provide a full working solution once it’s all working as we expect.
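
To make it concrete, this is roughly how the tags travel (the metric name and label casing here are assumptions based on statsd-exporter’s default mapping). The datadog output emits dogstatsd lines with the tags appended:

k6.vus:50|g|#BATCHJOBID:123,BATCHJOBATTEMPT:0

and statsd-exporter exposes that to Prometheus as something along the lines of:

k6_vus{BATCHJOBID="123",BATCHJOBATTEMPT="0"} 50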

Thanks for taking the time to reply.

The issue has been fixed by altering the Dockerfile we run with Batch jobs and switching to the Datadog output.

We had to add the two env variables as part of the Dockerfile build and then modify the entrypoint so the command is executed by a shell, which can do env variable substitution.

FROM loadimpact/k6

# placeholder values; AWS Batch overrides these at runtime
ENV AWS_BATCH_JOB_ID=1
ENV AWS_BATCH_JOB_ARRAY_INDEX=0

# the test scripts come from an earlier builder stage (not shown here)
COPY --from=builder /tmp/lib/*.js /tmp

WORKDIR /tmp

# run the command through a shell so the $AWS_BATCH_* variables get substituted
ENTRYPOINT ["sh", "-c"]
CMD ["k6 run --tag BATCHJOBID=$AWS_BATCH_JOB_ID --tag BATCHJOBATTEMPT=$AWS_BATCH_JOB_ARRAY_INDEX --compatibility-mode=base --no-thresholds --no-summary --include-system-env-vars=true ./socket-test.js"]

Switching to the datadog metrics output also worked: we run statsd-exporter on a UDP port and it handles the datadog-style metrics well. statsd-exporter turns the extra dimensions (tags) into labels and, behold, below is a screenshot of the VUs in Grafana with a Prometheus data source; we stacked them so it’s clear how many VUs are coming from each container running k6.
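
For reference, a sketch of the receiving side (the flags and ports are the prometheus/statsd_exporter defaults as I remember them, and the k6 env vars are assumptions; adjust to your setup):

# listen for dogstatsd-style metrics over UDP and expose them to Prometheus on :9102
statsd_exporter --statsd.listen-udp=:9125 --web.listen-address=:9102

# point k6 at the exporter and select the datadog output
export K6_DATADOG_ADDR=statsd-exporter:9125
export K6_OUT=datadog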
