Loss of metrics when outputting to InfluxDB

Hello. I am using InfluxDB OSS v2.7.4 to store k6 output metrics, written by a k6 v0.49.0 binary built with the latest version of the xk6-output-influxdb extension. I run everything locally.
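For reference, I start the runs roughly like this (the org, bucket, token, and script name below are placeholders, not my real values):

K6_INFLUXDB_ORGANIZATION="my-org" \
K6_INFLUXDB_BUCKET="k6" \
K6_INFLUXDB_TOKEN="my-token" \
./k6 run --out xk6-influxdb=http://localhost:8086 script.js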

When I compare the end-of-test summary from k6 with the data in InfluxDB (which I view using Grafana v10.2.2), I notice that some metrics are not saved in InfluxDB. Data points are missing for the http_req* metrics, checks, and especially iterations, and the loss is noticeably larger in tests that have failed requests and checks. The average request rate varies across my tests, from roughly 10 up to 70 requests per second, with some around 400 r/s. The loss ranges from about 1 to 13% for requests and from about 1 to 35% for iterations, but it mostly depends on how many failed requests a test has: the more failures, the greater the loss of requests, checks, and iterations. The loss of request and check metrics and the loss of failed requests and failed checks are usually parallel, i.e. roughly the same.

Based on this answer, I tried removing the vu and/or iter tags from the K6_INFLUXDB_TAGS_AS_FIELDS environment variable and adding them to the --system-tags flag instead. However, since that answer predates the introduction of “metadata”, I don’t think it does anything anymore: InfluxDB seems to ignore vu and iter, and there is no change in the metrics InfluxDB saves.
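For completeness, the variation I tried looked roughly like this (the exact tag list is only illustrative; the rest of the setup was unchanged, and I believe the default for K6_INFLUXDB_TAGS_AS_FIELDS is "vu:int,iter:int,url", so this removes vu and iter from it and adds them as system tags instead):

K6_INFLUXDB_TAGS_AS_FIELDS="url" \
./k6 run --out xk6-influxdb=http://localhost:8086 \
  --system-tags "proto,status,method,url,name,group,check,error,scenario,vu,iter" \
  script.js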

I have also tried increasing the K6_INFLUXDB_CONCURRENT_WRITES and lowering the K6_INFLUXDB_PUSH_INTERVAL environment variables for the extension, but that made no difference at all; the loss was the same. I also watched the --verbose k6 logs to guide the values for these variables, but none of the values I tried brought any improvement.
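One of the combinations I tried, for example (the defaults, as far as I can tell, are 4 concurrent writes and a 1s push interval):

K6_INFLUXDB_CONCURRENT_WRITES=10 \
K6_INFLUXDB_PUSH_INTERVAL=500ms \
./k6 run --out xk6-influxdb=http://localhost:8086 script.js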

Does anyone know why and how this happens?

Has anyone encountered this and maybe fixed it in any way?

Hi @marijamitevska!

Welcome to the community forum!

That does sound strange :thinking:

Are any errors captured during the execution, or any odd (say, high-latency) log entries when you run k6 with the --verbose flag?

Hello. Thank you for the reply, and sorry for getting back to you quite a bit later.

Firstly, to answer your question, I didn’t notice anything strange in the verbose logs.

However, I neglected to mention in my post that I was running the tests on Windows. I looked through some more posts on the forum here and found this comment about Go’s time precision being lower on Windows.

I looked at the JSON output example in the documentation, where the seconds have 9 decimal places, as the default nanosecond precision of k6 dictates, e.g. 2017-05-09T14:34:45.625742514+02:00. In the JSON outputs from tests I ran on Windows, the timestamps have only about microsecond precision (the seconds have either 6 or 7 decimal places), e.g. 2024-03-07T11:13:21.269543+01:00, so they are not at the default nanosecond precision. I then compared the timestamps of the metrics in the JSON output with the data in InfluxDB: for all the metric samples that share the same timestamp in the JSON output, only one of them is saved in InfluxDB. That is how InfluxDB handles duplicate data: for points with the same measurement, tags, and timestamp (duplicate timeseries), only the last one written is kept. The conclusion would be that Windows limits the default k6 timestamp precision, which in turn affects what gets saved in InfluxDB.
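The same behaviour can be reproduced outside of k6 by writing two points with the same measurement, tag set, and timestamp straight to the v2 write API (org, bucket, and token below are placeholders); only the second point remains in the bucket afterwards:

# two points with identical measurement/tags/timestamp -> InfluxDB keeps only the last one
curl -X POST "http://localhost:8086/api/v2/write?org=my-org&bucket=k6&precision=ns" \
  -H "Authorization: Token my-token" \
  --data-binary 'http_reqs,status=200 value=1 1709806401269543000
http_reqs,status=200 value=2 1709806401269543000'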

Adding a tag to every metric sample (to make every timeseries unique), for example by adding this to the VU code (the scenario function):

import execution from 'k6/execution';

// inside the scenario (default) function:
execution.vu.metrics.tags['trace_id'] = execution.scenario.iterationInTest;
// ... rest of the VU code ...
delete execution.vu.metrics.tags['trace_id'];

makes all the metrics get saved in InfluxDB, because there is no duplicate data that way. This increases the timeseries cardinality in the database, so it is not an optimal solution, but it does confirm that the metric loss happens because more samples end up with the same timestamp when the time precision is lower (microsecond instead of nanosecond).

EDIT: And the practical solution would be not to run the tests on Windows.
