I recently set up a k6 script to run every 15 minutes, and we consistently see the following two error events in each run:
level=error msg="Log output truncated at 261120 bytes"
level=error msg="Metrics output truncated at 102400 bytes"
I wasn't able to locate any documentation about these events, so it would be useful to have more insight into a few things:
Is there any configuration available to increase the byte limit for both the log and metric output generated by this synthetic monitor, so that no information is truncated?
I'm also not clear on the impact of having this data truncated. Does it mean that if/when a group fails, it may not have any events saved, particularly if it occurs in the latter portion of the workflow?
This truncation is a safety limit that protects the public probe infrastructure from overload, and it triggers when the metrics or log output exceeds the thresholds indicated in the log lines. Our understanding is that these limits are well above typical Synthetic Monitoring usage for monitoring a few endpoints or simulating a couple of user journeys. Can you elaborate a bit on what your use case is?
Regarding your questions:
This limit is unfortunately not configurable at this time.
For logs, that means the lines above the byte limit will not appear in Grafana Cloud (i.e. they are discarded). For metrics, the time series that push the output above the limit will be discarded and similarly will not appear in Grafana Cloud. The script itself will still run in its entirety (up to the allowed timeout); it's just the output that won't be fully visible.
As for how to prevent this from happening, I'd suggest trying a few things:
Simplify the script; if it's testing too many things, splitting it into a few separate checks might be a good idea.
Run the script locally with k6 and inspect the log output, looking for log lines that are too big or too verbose. We've seen the log limit breached a couple of times due to HTTP body dumps being logged as a leftover from debugging, for example.
If you're using custom metrics, try to reduce the number of tags, since each distinct tag value creates an additional time series, and those add up toward the limit. The sketch below illustrates the last two points.
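To make those last two suggestions concrete, here is a minimal sketch of a k6 script (the endpoint, metric name, and tag values are hypothetical) showing the kind of leftover body-dump logging that is worth removing and a custom metric kept to a small, fixed set of tags:

```javascript
import http from 'k6/http';
import { check } from 'k6';
import { Trend } from 'k6/metrics';

// Hypothetical custom metric: each distinct combination of tag values
// creates a separate time series, which counts toward the metrics limit.
const loginDuration = new Trend('login_duration');

export const options = {
  vus: 1,
  iterations: 1,
};

export default function () {
  const res = http.get('https://example.com/login'); // hypothetical endpoint

  // Leftover debugging like the line below is a common reason the log limit
  // is hit, since it dumps the whole response body on every iteration:
  // console.log(res.body);

  // If some logging is genuinely needed, keep it small, e.g. status only:
  if (res.status !== 200) {
    console.log(`login returned status ${res.status}`);
  }

  check(res, { 'login status is 200': (r) => r.status === 200 });

  // Prefer a small, fixed set of tag values; high-cardinality tags such as
  // user IDs or timestamps multiply the number of time series.
  loginDuration.add(res.timings.duration, { step: 'login' });
}
```

Running this locally with `k6 run script.js` lets you inspect the console output and the end-of-test summary for oversized log lines or unexpected metrics before the check runs on the probes.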