Hi,
We are facing this issue with InfluxDB when we run more than 5,000 requests per second:

```
level=error msg="InfluxDB: Couldn't write stats" error="{\"error\":\"engine: cache-max-memory-size exceeded: (1075899908/1073741824)\"}\n"
```
This is my configuration:

- k6 v0.30.0 (2021-01-20T13:14:50+0000/2193de0, go1.15.6, linux/amd64)
- Server: CentOS 7.0
- DB: InfluxDB 1.7.9
- RAM: 64GB
Is there a best practice to avoid this issue, or some configuration to modify in InfluxDB?
Thanks in advance
Hi, welcome to the forum
This question is better suited for an InfluxDB forum than here, but I’ll give it a shot.
You can try increasing the `cache-max-memory-size` value from the default of 1G to whatever makes sense for your use case.
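If InfluxDB runs directly on the host, the setting lives in the `[data]` section of `influxdb.conf`. A minimal sketch, assuming the usual package-install path and a 4g limit picked arbitrarily for your 64GB box:

```toml
# /etc/influxdb/influxdb.conf (typical path for package installs)
[data]
  # Maximum memory the shard cache may use before InfluxDB starts
  # rejecting writes with "cache-max-memory-size exceeded".
  cache-max-memory-size = "4g"

  # Threshold at which the engine snapshots the cache to a TSM file and
  # frees memory; shown with its default value for context.
  cache-snapshot-memory-size = "25m"
```

Restart the influxdb service afterwards so the new limit is picked up.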
If you're starting it with Docker Compose, you can specify it with the `INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE` environment variable, e.g.:
```yaml
services:
  influxdb:
    image: influxdb:1.8
    networks:
      - k6
      - grafana
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=k6
      - INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE=4g
```
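One thing to keep in mind (assuming the service is named influxdb as in the snippet above): `docker-compose restart` does not pick up changes to the Compose file, so the container has to be recreated for the new environment variable to apply:

```sh
# Recreate the InfluxDB container so the updated environment is applied
docker-compose up -d --force-recreate influxdb

# Confirm the override is visible inside the running container
docker-compose exec influxdb env | grep INFLUXDB_DATA_CACHE_MAX_MEMORY_SIZE
```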
Also see this InfluxDB forum answer for other tuning ideas, including making use of fast storage.
In general, tuning InfluxDB for better performance at large scale is a difficult problem, since k6 generates a lot of metric data.
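You can also reduce the write pressure from the k6 side. As a rough sketch (the values and the script.js name are placeholders, and the exact environment variables may differ between k6 versions), the InfluxDB output can be tuned like this:

```sh
# Push metrics in larger, less frequent batches and with more parallel writers
export K6_INFLUXDB_PUSH_INTERVAL=10s
export K6_INFLUXDB_CONCURRENT_WRITES=8

# Store high-cardinality tags as fields to keep InfluxDB series cardinality
# (and therefore cache/memory usage) lower
export K6_INFLUXDB_TAGS_AS_FIELDS="vu,iter,url"

k6 run --out influxdb=http://localhost:8086/k6 script.js
```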