[fatal error] k6 memory leak

Dear K6 support,

I am running k6 on a VM with 4 CPU cores and 16 GB of RAM, with 160 concurrent users. After running for about 2 hours, it throws an OOM error. k6 version: v0.24.0.

Please feel free to let me know if you need additional info, thanks!

init [----------------------------------------------------------] starting
time="2019-08-01T16:13:23Z" level=info msg=Running i=6341 t=945.091641ms

time="2019-08-01T16:13:24Z" level=info msg=Running i=13561 t=1.945092543s

time="2019-08-01T18:10:03Z" level=info msg=Running i=50540007 t=1h56m39.57512173s
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0xde89df, 0x16)
/usr/lib/go/src/runtime/panic.go:617 +0x72
runtime.sysMap(0xc39c000000, 0x20000000, 0x18a5eb8)
/usr/lib/go/src/runtime/mem_linux.go:170 +0xc7
runtime.(*mheap).sysAlloc(0x188ce40, 0x1e1fc000, 0x188ce50, 0xf0fe)
/usr/lib/go/src/runtime/malloc.go:633 +0x1cd
runtime.(*mheap).grow(0x188ce40, 0xf0fe, 0x0)
/usr/lib/go/src/runtime/mheap.go:1232 +0x42
runtime.(*mheap).allocSpanLocked(0x188ce40, 0xf0fe, 0x18a5ec8, 0x0)
/usr/lib/go/src/runtime/mheap.go:1150 +0x3a7
runtime.(*mheap).alloc_m(0x188ce40, 0xf0fe, 0x101, 0x0)
/usr/lib/go/src/runtime/mheap.go:977 +0xc2
/usr/lib/go/src/runtime/mheap.go:1048 +0x4c
runtime.(*mheap).alloc(0x188ce40, 0xf0fe, 0x7fc91d000101, 0x4282b0)
/usr/lib/go/src/runtime/mheap.go:1047 +0x8a
runtime.largeAlloc(0x1e1fc000, 0x450100, 0xc37c000000)
/usr/lib/go/src/runtime/malloc.go:1055 +0x99
/usr/lib/go/src/runtime/malloc.go:950 +0x46
/usr/lib/go/src/runtime/asm_amd64.s:351 +0x66


Hi @royzhang007 ,

I can't say for certain without more information, but given that it took 2 hours, I would wager it is caused by the accumulation of metrics.
There have been several issues (1, 2) about this, and the current workaround is to run with --no-summary --no-thresholds, at which point k6 will NOT store any metrics internally.

This means that you will need to use one of the metrics outputs. Unfortunately, we (somewhat) recently found out that the InfluxDB one is … maybe leaking memory. My attempts to find where and how have led me to believe it is just generating way too many objects, so Go's GC can't keep up, and at some point the k6 process runs out of memory.
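For reference, a minimal sketch of such an invocation, combining the workaround flags with an external output. The script name, InfluxDB URL, and database name here are placeholders, not details from this thread:

```shell
# Disable in-memory metrics accumulation (no end-of-test summary, no
# threshold evaluation) and stream metrics to an external InfluxDB
# instance instead. "script.js" and the URL/database are placeholders.
k6 run --no-summary --no-thresholds \
  --out influxdb=http://localhost:8086/myk6db \
  script.js
```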

I am currently working on:

  1. Upgrading the library we use for the InfluxDB output. It shows some improvement, but all my tests are very flaky, so this is mostly about moving to the latest version.
  2. Adding more knobs to the InfluxDB output so that it sends data more frequently, to try to combat running out of memory.

Hopefully this will be finished next week and we will release 0.25.1 with it, but I can’t promise anything.

You can try running with GOGC=50 k6 run .. — this makes Go's GC run (roughly) whenever the heap has grown 50% beyond its size at the end of the previous GC cycle, instead of the default 100%. This might help, especially if you are using InfluxDB, but again … I can't promise you anything.
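As a sketch, the GC tuning above is just an environment variable set for the k6 process; the script name is a placeholder:

```shell
# GOGC=50 tells the Go runtime's garbage collector to run once the heap
# has grown ~50% since the previous collection (default is 100%), trading
# extra CPU time for lower peak memory. "script.js" is a placeholder.
GOGC=50 k6 run script.js
```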

If you share some script details, I might be able to give you better advice, but without any script details …

Thanks @mstoykov! Will we have issues if we use JSON/Kafka/StatsD/Datadog as metrics outputs?