Hello. Thank you for the reply, and sorry for the rather late response.
Firstly, to answer your question, I didn’t notice anything strange in the verbose logs.
However, I neglected to mention in my post that I was executing the tests on Windows. I looked through some more posts on this forum and found this comment about the difference in time precision between Go and Windows.
I looked at the JSON output example in the documentation, and there the fractional seconds have 9 digits, matching the default nanosecond precision of k6, e.g. 2017-05-09T14:34:45.625742514+02:00. In the JSON output from the tests I ran on Windows, the timestamps only have about microsecond precision (6 or 7 fractional digits), e.g. 2024-03-07T11:13:21.269543+01:00, so they are not at the default nanosecond precision (presumably because the Windows system clock ticks in 100-nanosecond units).

I then compared the timestamps of the metrics in the JSON output with the data saved in InfluxDB: for every group of metrics that share the same timestamp in the JSON output, only one of them ends up in InfluxDB. That matches how InfluxDB handles duplicate data: for points with the same measurement, tag set and timestamp (a duplicate timeseries point), only the last write is kept. The conclusion would be that Windows limits the timestamp precision k6 can get, which in turn affects what gets saved in InfluxDB.
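For illustration (the values and tags here are made up, and this is not necessarily the exact field layout the k6 InfluxDB output uses): if two samples end up with the same measurement, the same tag set and the same microsecond-truncated timestamp, the two writes would look like this in line protocol, and InfluxDB keeps only the second one:

```
http_req_duration,method=GET,status=200 value=12.3 1709806401269543000
http_req_duration,method=GET,status=200 value=45.6 1709806401269543000
```

Both lines resolve to the same point (same series key and same timestamp), so the first value is silently overwritten on write.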
A workaround is to add a tag to every metric so that every timeseries is unique, for example like this in the VU code (the scenario function):
```javascript
import execution from 'k6/execution';
export default function () {
  // tag every metric emitted in this iteration with a unique ID
  execution.vu.metrics.tags['trace_id'] = execution.scenario.iterationInTest;
  // ... rest of the VU code ...
  delete execution.vu.metrics.tags['trace_id'];
}
```
With this, there is no data duplication anymore and all the metrics get saved in InfluxDB. It does increase the timeseries cardinality in the database, so it is not an optimal solution, but it confirms that the metrics loss happens because more metrics share the same timestamp when the time precision is lower (microsecond instead of nanosecond).
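As a side note, the number of affected samples can also be estimated straight from the JSON output. Below is a rough Node.js sketch; the file name results.json and the dedupe logic are my own assumptions, and it relies on the documented JSON output format where each "Point" line carries metric, data.time and data.tags:

```javascript
// dedupe-check.js - rough sketch, not part of k6.
// Counts samples in a k6 JSON output file that share the same
// metric name + tag set + timestamp, i.e. points InfluxDB would collapse.
// Assumption: the output file is called results.json (one JSON object per line).
const fs = require('fs');

const seen = new Map();
let points = 0;

for (const line of fs.readFileSync('results.json', 'utf8').split('\n')) {
  if (!line.trim()) continue;
  const sample = JSON.parse(line);
  if (sample.type !== 'Point') continue; // skip the "Metric" declaration lines
  points++;
  const key = `${sample.metric}|${JSON.stringify(sample.data.tags)}|${sample.data.time}`;
  seen.set(key, (seen.get(key) || 0) + 1);
}

// every extra sample beyond the first in a (metric, tags, time) group is lost on write
const dropped = [...seen.values()].filter((n) => n > 1).reduce((sum, n) => sum + (n - 1), 0);
console.log(`${points} points total, ~${dropped} would be overwritten as duplicates in InfluxDB`);
```

Running it with node dedupe-check.js next to the output file should report a duplicate count that roughly matches the number of points missing from InfluxDB.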
EDIT: And for now, the solution would be to simply not execute the tests on Windows.