Docker image is consuming a lot of memory and CPU

I am running this Docker image:

FROM loadimpact/k6:master
COPY files files
WORKDIR files
CMD ["run", "test.js"]

The Docker process grows in memory over time, from 3 GB to 40 GB.
CPU consumption also grows, from 5 CPUs to 16 CPUs.
If I run the same test locally without Docker, using the k6 CLI, it takes only 3 GB and the CPU usage is much lower.
Is there a reason for that?

Hi @ibrahiem.94

Welcome to the community forum :wave:

Can you share the (sanitized) test.js script so we may have a look?

Locally, without Docker, are you running the same version of k6? Can you also share the k6 version?

Do you see the same effect running the latest docker image grafana/k6:latest?
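For reference, a minimal sketch of that change, assuming the rest of your Dockerfile stays the same and only the base image is swapped to the released tag:

FROM grafana/k6:latest
COPY files files
WORKDIR files
CMD ["run", "test.js"]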

Thanks in advance for the info. With that, we should be able to dig further into this.

Cheers!

This is the minimal code I can share.
test.js

import { check, group } from 'k6';
import { SharedArray } from 'k6/data';
import grpc from 'k6/net/grpc';
import {
  getlistAvailableVendorsResponse,
  getlistAvailableVendorsResponseStatus,
} from './availability-api.js';
import { randomItems } from './helpers.js';

const hostname = __ENV.HOSTNAME;
const geid = __ENV.GEID;
const countryCode = geid.split('_')[1]

// customer coordinates array shared across VUs
const customerCoordinates = new SharedArray(`${countryCode} customer coordinates`, () => {
  return __ENV.CUSTOMER_COORDINATES.split(';').map(coordinate => {
    const coordinateSplit = coordinate.split(',')
    return {latitude: parseFloat(coordinateSplit[0]), longitude: parseFloat(coordinateSplit[1])};
  });
});

export function setup() {
  let vendorIds = [];
  while (!vendorIds.length) {
    const customerCoordinate = randomItem(customerCoordinates);
    const response = getlistAvailableVendorsResponse(hostname, geid, customerCoordinate);
    vendorIds = response.message.vendors.map(vendor => vendor.vendorId);
  }
  return { vendorIds: vendorIds }
}

export function listAvailableVendors() {
  group('listAvailableVendors', () => {
    const customerCoordinate = randomItem(customerCoordinates);
    const response = getlistAvailableVendorsResponseStatus(hostname, geid, customerCoordinate);
    check(response, {
      'status is OK': (r) =>  r === grpc.StatusOK,
    });
  });
}


export const options = {
  userAgent: 'logistics-vendor-availability-k6/1.0',
  discardResponseBodies: true,
  scenarios: {
    listAvailableVendors: {
      executor: 'ramping-arrival-rate',
      exec: 'listAvailableVendors',
      preAllocatedVUs: __ENV.VUS_LIST,
      startRate: 1,
      timeUnit: '1s',
      stages: getStages('listAvailableVendors'),
    }
  }
}

function getStages(scenario) {
  const soakDuration = __ENV.SOAK_DURATION ? __ENV.SOAK_DURATION : '10s';

  if (scenario === 'listAvailableVendors') {
    return [
      { target: __ENV.WARM_UP_RPS, duration: __ENV.WARM_UP_DURATION },
      { target: __ENV.PEAK_RPS_LIST, duration: __ENV.RAMP_UP_DURATION },
      { target: __ENV.PEAK_RPS_LIST, duration: soakDuration },
    ];
  } 
}

function randomIntBetween(min, max) {
  return Math.floor(Math.random() * (max - min + 1) + min);
}

function randomItem(arrayOfItems){
  return arrayOfItems[Math.floor(Math.random() * arrayOfItems.length)];
}

availability-api.js

import exec from 'k6/execution';
import grpc from 'k6/net/grpc';

const client = new grpc.Client();
client.load(['protorepo/protos/'], 'public_api.proto');

export function getlistAvailableVendorsResponse(hostname, geid, customerCoordinate) {
    if (exec.vu && exec.vu.iterationInInstance === 0) {
        client.connect(hostname);
    }

    let data = { global_entity_id: geid, customer: { location: customerCoordinate } };
    let response = client.invoke(service_url, data, observabilityParams());
    return response;
}

export function getlistAvailableVendorsResponseStatus(hostname, geid, customerCoordinate) {
    let response = getlistAvailableVendorsResponse(hostname, geid, customerCoordinate);
    return response ? response.status : {};
}



function observabilityParams() {
    const startTime = Date.now();
    return {
        metadata: {
            'Traffic-Type': 'TESTING',
            'Accept-Encoding': 'gzip, deflate',
            'Request-Start-Instant': startTime.toString(),
            'Request-Expiry-Instant': (startTime + 1000).toString(),
        },
        timeout: '1000',
    };
}

My local version of k6 is k6 v0.43.1 ((devel), go1.20.1, darwin/arm64).
Yes, I see the same behavior with both grafana/k6:latest and with an Alpine Linux image with k6 installed on it.


I shared the code above, thanks in advance.
Yes, I see the same effect running it on grafana/k6:latest, and even with Alpine Linux with k6 installed on it.
My local k6 version is k6 v0.43.1 ((devel), go1.20.1, darwin/arm64).


Hi @ibrahiem.94

Thanks for sharing all the information. Initially we thought this might be related to the master image, as it includes new features not yet released. We can rule that out, since you are running 0.43.1 locally and the grafana/k6:latest Docker image (also 0.43.1) reproduces the same effect.

Our thinking now is that the scripts might behave differently, which we also need to look into. Can you share helpers.js as well? It will help us reproduce this.

Knowing the concrete values instead of the environment variable names will also make it easier to reproduce (__ENV.GEID, __ENV.HOSTNAME, __ENV.VUS_LIST, __ENV.SOAK_DURATION, etc.).

And finally, can you share the full outputs (including the end-of-test summary) of both runs, with grafana/k6:latest and locally? We might spot something that gives us a hint about what is causing the different behavior.

Thanks!

I can't share the HOSTNAME, unfortunately, but GEID is a string, VUS_LIST is 1000, and SOAK_DURATION is 30m, for example.
helpers.js

import { randomItem } from 'https://jslib.k6.io/k6-utils/1.2.0/index.js';

export function randomItems(list, size) {
  const items = new Array(size).fill(0);
  return items.map(() => {
    return randomItem(list);
  })
}

I will run the test and share the results with you later.

But isn't it weird that the Docker process grows so much, while running through the CLI locally takes only 3 GB?
Could it be something related to the garbage collector not freeing up memory, or something else specific to the Docker process?

Thanks for the additional info @ibrahiem.94

The script uses a few more variables. HOSTNAME is not important, but __ENV.WARM_UP_RPS and __ENV.PEAK_RPS_LIST could be relevant, and I also see __ENV.CUSTOMER_COORDINATES. Could you share all the environment variables (sanitizing the ones you can't share, like HOSTNAME)?

We don't know yet why the memory keeps growing, so our initial approach will be to run a similar scenario in our lab and see if we can reproduce it.

The outputs might also let us spot something that points to the root cause, which is why we asked for them as well.
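Regarding your question about the garbage collector: we can't say yet whether that is the cause. Since k6 is written in Go, though, one experiment you could try in the meantime is tuning the Go runtime's garbage collection via environment variables in your image. This is only a sketch, and the GOGC/GOMEMLIMIT values below are arbitrary examples to experiment with, not recommendations:

FROM grafana/k6:latest
# GOGC=50 makes the Go garbage collector run more aggressively than the default (100).
ENV GOGC=50
# GOMEMLIMIT (Go 1.19+) sets a soft memory limit for the Go runtime.
ENV GOMEMLIMIT=4GiB
COPY files files
WORKDIR files
CMD ["run", "test.js"]

If memory stays flat with those set, that could suggest the Go runtime is simply using the headroom the pod allows rather than the script leaking memory.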

__ENV.WARM_UP_RPS is 10
__ENV.PEAK_RPS_LIST is 1000
__ENV.CUSTOMER_COORDINATES is a string like:

"28.444113538089,45.9976878762245;25.9015990857594,45.3449146449566;25.3910538856083,49.6704695373774;21.7934518495793,39.1336481273174;24.6864065084689,46.7344326898456;26.3914462029843,50.1413041725755;31.6617309696013,38.7280595675111;18.2993931889641,42.7204385027289;21.4279407286907,39.7725566849112;21.2858914070702,40.4115698486567;21.4182218893086,39.8532033339143;24.6310759962515,46.71486094594;24.5619351892502,46.7824301496148;24.549639798581,46.7805589735508;24.4642558082506,39.6092111617327;24.1204373196967,47.2768008336425;24.7641136631505,46.6534873098135;24.6232875,46.5172969;19.1465988694695,41.0763503238559;24.815096974301,46.6155662387609;24.1344695258722,47.3521186038852;24.8662947631506,46.8694919720292;24.6441295027657,46.7122092470527;17.1198245428349,42.6542744413018;24.6572821979227,46.8041691184044;21.6384294540874,39.141242466867;16.910085552815,42.5531997531652;24.8544221412167,46.7187162861228;24.7796487802502,46.7952967062593;24.7558596038404,46.8128634989262;24.7090223366759,46.6777897998691;24.7207395499973,46.6818282008171;24.6561624002871,46.7948772758245;24.5569758874437,46.5253494679928;25.4192677455228,49.6902012079954;17.5477936987275,44.232910014689;24.6772293660308,46.8263141065836;21.59295302242,39.1605151444674;26.4023255544104,50.0587156042457;18.2593931569798,42.7781995385885;24.7404877695534,46.6311367973685;21.6078914811923,39.1361918672919;26.4331501175739,50.0700911879539;26.3200343874092,43.9972202852368;18.2111363610979,42.5017335265875;24.7938247690321,46.8073522299528;26.402004227914,50.0881380960345;24.5764481693283,46.8921217694879;28.4169280946082,48.4818636998534;21.6385569184748,39.1411190852523;24.7173848824581,46.6458805650473;21.533081172011,39.2065713554621;24.620981320744,46.8808015063405;24.7396488636435,46.8889839202166;24.4737219799703,39.6274917572737;24.702790798234,46.721840724349;24.4642082009463,39.6121193468571;24.787393541707,46.7348658666015;24.6713328997408,46.8692529201508;21.3900342543649,39.8044357448816;21.3157457440239,39.2657237872481;26.3214110546435,50.020564198494;24.5474970770964,46.7263233661652;21.5072717808307,39.2771640792489;24.822114677192,46.7803517729044;18.2461961701448,42.4769029766321;21.4508015915013,39.2460662126541;21.2642886598852,40.4318835586309;27.4243415157318,41.5758339688182;24.4467344947296,39.6069876104593;24.4578478864002,46.2649713456631;21.3764274787119,39.7628977149725;21.2334804000911,40.3709726035595;25.4450199998337,49.5135026425123;24.6905452018101,46.70913644135;20.0134076986782,42.6152680814266;25.3891992807341,49.5755511894822;24.7461705609762,46.5032021328807;24.7218340992145,46.762300170958;21.2649251184965,40.4653571918607;27.5634359879216,41.6954370215535;16.8951424056841,42.5796282291412;21.4547664728053,39.8399250581861;28.4312811948252,45.9779665991664;27.565660374608,41.6964726895094;24.5777229602204,46.6940755024552;26.3666402365679,50.1988657191396;20.051438518857,41.4742124453187;27.0162286402188,49.6542700007558;21.524253571109,39.1760924085975;24.8227588824571,46.813737899065;21.6136094654991,39.1565203294158;21.4915620771465,39.2181065306067;24.6479913823264,46.7036576941609;26.3272541202099,50.2122757583857;18.2026387198248,42.5353731215;26.3511576131533,50.1989086344838;25.340928953843,49.5992498472333;21.325731237596,39.6987057477236;21.5794562406932,39.2024933919311"

One thing to add: this Docker container is running inside a k8s pod with a limit of 16 CPUs and a limit of 40 GB of memory.

Thanks @ibrahiem.94, will you be able to share the full outputs (including the end summary) of both runs?

One thing to add: this Docker container is running inside a k8s pod with a limit of 16 CPUs and a limit of 40 GB of memory.

Is there anything else (other containers) in the pod, or just the k6 container?

Unrelated to this, just out of curiosity: have you tried the k6-operator? Or are you running it on k8s as a regular pod?

It's running as a normal pod, and the k6 container runs alone in the pod.

Good to know the pod only contains the k6 container, thanks! Will you be able to share the full outputs (including the end summary) of both runs? Before setting up the lab for reproduction, it's good to have a look in case they contain clues about what is causing the different behavior.