I’ve been using k6 on Kubernetes. My k6 version is 0.44.0.
-
Testing Scenario:
Using the constant-vus executor in one k6 scenario (50 VUs for 10 minutes).
Sending POST requests to a single API endpoint (an echo service) with a 1024-byte body (1024 English characters) per request.
-
Executing Resources on K8s:
CPU -> 2000m, Mem -> 2G
Monitoring pod resource usage, I found the pod hits its limit.
-
Monitor Result:
With the same pod resources, JMeter seems to perform well:
JMeter 1500 rps, k6 300 rps
import http from 'k6/http';

export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'constant-vus',
      vus: 50,
      duration: '600s',
    },
  },
};

const url = __ENV.TARGET_URL; // echo service endpoint (not shown in the original post)

export default function () {
  const data = { payload: 'abcdasdqwqfqwgwgefwepofkewofkwpeofkpowe' }; // with 1024 characters; key name assumed
  // Using a JSON string as body
  const res = http.post(url, JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' },
  });
}
Hi @jim0530
Welcome to the community forum
In your script, you declare the data constant in the default function. Is the body value different for each VU? Is this just a 1024-byte constant body, or do you do any operations not shared in the sample script to generate it for each VU?
Are you running k6 with the k6-operator? Or simply deploying a k6 pod to run your k6 test?
What you see is the k6 pod reaching the CPU limit, with no problems with the memory, is that right? I want to make sure we understand correctly.
Do you see the CPU usage increasing as the test progresses, or is it fairly constant?
Thanks in advance for the additional context.
Cheers!
Hi @eyeveebee
Thanks for the kind reply.
-
Const data
The data variable itself is never modified. I actually wrote a function that outputs a fixed-length random string, which lets me simulate different payload sizes.
-
Executing k6 Method
As mentioned before, I run k6 simply by deploying a pod with the k6 image (no operator).
-
JMeter vs k6
For comparison, I ran a simple pod using the JMeter image and found that, with the same test scenario and the same pod resources, JMeter gets a higher rps than k6.
Hi @jim0530
Thanks for sharing additional information.
After discussing this internally with the k6 developers, we think one potential cause is that the script computes the data string and JSON-encodes it on every VU iteration. Is it possible to isolate the issue by first testing with a static string, and confirming the test runs fine in that case?
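A minimal sketch of that isolation step (the JSON key name is a placeholder, not from the original script): build and JSON-encode the body once in the init context, so each iteration reuses the same pre-encoded string.

```javascript
// Build and JSON-encode the body once, in the init context, instead of
// on every iteration. The key name 'payload' is an assumption.
const payload = 'a'.repeat(1024);                  // static 1024-character body
const body = JSON.stringify({ payload: payload }); // encoded exactly once

// Inside the k6 default function, the pre-encoded string is sent as-is:
//   http.post(url, body, { headers: { 'Content-Type': 'application/json' } });
```

If throughput recovers with this version, the per-iteration string generation and encoding is the bottleneck.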
Another potential cause could be the url. Is it static, or does the path change (adding an id or similar)? If the URL changes on each iteration, you could hit a high-cardinality issue with the metrics. See HTTP Requests. If you can update to k6 v0.44.1, there is a check for this and a dedicated log warning, so you would see the warning in the logs.
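Where the path does vary by design, k6 can group all those requests under a single metric series via a `name` tag in the request params; a sketch, where the tag value is just an illustrative label:

```javascript
// Request params with a `name` tag: k6 aggregates metrics for all URLs
// sharing this tag under one series, avoiding high cardinality.
// 'EchoEndpoint' is an arbitrary label chosen for this sketch.
const params = {
  headers: { 'Content-Type': 'application/json' },
  tags: { name: 'EchoEndpoint' },
};

// In the k6 default function, the varying URL then stays out of the metrics:
//   http.post(`${baseUrl}/echo/${id}`, body, params);
```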
If it’s none of these let us know, and we’ll keep digging.
Cheers!
Hi @eyeveebee
Thanks again for the kind reply.
- So should I use a SharedArray to improve performance?
- The url is a static variable.
I’ve seen the same issue on GitHub: Poor performance when using big request bodies · Issue #3237 · grafana/k6 · GitHub
Here’s some of my experiment data.
Test Scenario:
vus: 50, duration: 600s, payload: 10240 bytes
Target Endpoint:
An echo service which returns the same payload to the user.
Pod Resource:
8 cores, 16G
JMeter (800 rps), k6 (120 rps)
Hi @jim0530
@codebien reviewed the shared array, and it seems this won’t help your case: SharedArray: Bad performance with big items · Issue #3237 · grafana/k6 · GitHub
We are working on an example to try to reproduce your scenario and will get back here as soon as we can with the next steps. We don’t think this is related to running in Kubernetes, though we’ll see.
Thanks for your patience.
Hi @eyeveebee
Sincere thanks for the reply. Looking forward to the patched version.
Hi @jim0530
I did some testing with what I expect is a similar script. Running 50 VUs I get 400 rps.
import { check } from 'k6';
import http from 'k6/http';

export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'constant-vus',
      vus: 50,
      duration: '300s',
    },
  },
};

const characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';

function generateRandomString(length) {
  let randomString = '';
  for (let i = 0; i < length; i++) {
    const randomIndex = Math.floor(Math.random() * characters.length);
    const randomCharacter = characters[randomIndex];
    randomString += randomCharacter;
  }
  return randomString;
}

const randomString = generateRandomString(1024);
const url = 'https://test.k6.io/flip_coin.php';

export default function () {
  const data = { bet: randomString };
  const res = http.post(url, JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json' },
  });
  check(res, {
    'is status 200': (r) => r.status === 200,
  });
}
Then I ran with 100 VUs and got around 800 rps.
export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'constant-vus',
      vus: 100,
      duration: '300s',
    },
  },
};
Here are a few things to look into, based on these results:
- Can you run my script and see if you get similar results? I ran it on my laptop with plenty of resources. I can try restricting resources by running in a Docker container with limited memory, but I’m not sure that’s the issue.
- What is the latency of your endpoint? With 50 VUs we might be limited by what they can do in 1 second: to reach 300 rps, each VU needs to finish 6 requests per second. Can you share the output summary of your test runs and we’ll have a look?
- What happens if you reduce the body size? Do you see the rps increase?
- Finally, have you tried increasing the number of VUs in your script?
- Another option is to run a constant-arrival-rate executor and set the rate to 1500. If this is doable with the resources in the pod, you’ll probably see how many VUs are needed. You might need to set preAllocatedVUs to a higher number if it’s not reaching 1500 rps.
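A sketch of such a scenario configuration; the VU counts here are guesses to be tuned (k6 logs a warning when preAllocatedVUs is too low to sustain the target rate):

```javascript
export const options = {
  discardResponseBodies: true,
  scenarios: {
    contacts: {
      executor: 'constant-arrival-rate',
      rate: 1500,           // target iterations (requests) per timeUnit
      timeUnit: '1s',
      duration: '600s',
      preAllocatedVUs: 200, // assumed starting point; raise if the rate isn't reached
      maxVUs: 500,          // assumed ceiling for extra VUs k6 may spin up
    },
  },
};
```

Unlike constant-vus, this executor holds the request rate fixed and scales VUs as needed, so the achieved VU count tells you how many are required for 1500 rps.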
Let me know your thoughts.
Cheers!