Always getting a long p99

Hi.

I’m testing with k6 and I don’t know why, but I’m always getting a very long p99.
If I repeat the test with other tools, the latencies look correct.

This is my k6 script:

import http from "k6/http";
import { check, sleep } from "k6";

const scenarios = {
  fixed: {
    executor: "constant-arrival-rate",
    rate: 2, // requests per second
    timeUnit: "1s", // per second
    duration: "1m", // total test duration
    preAllocatedVUs: 150, // VUs to allocate (can be tuned)
    maxVUs: 300, // maximum VUs allowed
  },
  ramping: {
    executor: "ramping-vus",
    stages: [
      { duration: "10s", target: 25 },
      { duration: "60s", target: 35 },
      { duration: "10s", target: 0 },
    ],
  },
};

const { SCENARIO } = __ENV;

export const options = {
  thresholds: {
    http_req_duration: [
      {
        threshold: "p(99) < 3000",
      },
    ],
    http_req_failed: ["rate==0"], // http errors should be 0
  },
  scenarios: SCENARIO ? { [SCENARIO]: scenarios[SCENARIO] } : scenarios,
  summaryTrendStats: ["avg", "min", "max", "med", "p(90)", "p(95)", "p(99)", "count"],
};

export default function () {
  const res = http.get("http://localhost:8080", { timeout: "10s" });
  check(res, { "status was 200": (r) => r.status == 200 });
  sleep(1);
}
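
I pick the scenario through the SCENARIO environment variable, so the failing run was started with something like this (the "fixed" scenario, as shown in the report below):

k6 run -e SCENARIO=fixed index.js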

And this is the report:

         /\      Grafana   /‾‾/  
    /\  /  \     |\  __   /  /   
   /  \/    \    | |/ /  /   ‾‾\ 
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/ 

     execution: local
        script: index.js
        output: -

     scenarios: (100.00%) 1 scenario, 300 max VUs, 1m30s max duration (incl. graceful stop):
              * fixed: 2.00 iterations/s for 1m0s (maxVUs: 150-300, gracefulStop: 30s)

  █ THRESHOLDS 

    http_req_duration
    ✗ 'p(99) < 3000' p(99)=59.82s

    http_req_failed
    ✓ 'rate==0' rate=0.00%

  █ TOTAL RESULTS 

    checks_total.......................: 120     1.935675/s
    checks_succeeded...................: 100.00% 120 out of 120
    checks_failed......................: 0.00%   0 out of 120

    ✓ status was 200

    HTTP
    http_req_duration.............: avg=5.26s min=73µs max=59.82s med=1.5s p(90)=1.5s p(95)=59.82s p(99)=59.82s count=120
      { expected_response:true }..: avg=5.26s min=73µs max=59.82s med=1.5s p(90)=1.5s p(95)=59.82s p(99)=59.82s count=120
    http_req_failed...............: 0.00%  0 out of 120
    http_reqs.....................: 120    1.935675/s

    EXECUTION
    iteration_duration............: avg=2.5s  min=2.5s max=2.5s   med=2.5s p(90)=2.5s p(95)=2.5s   p(99)=2.5s   count=120
    iterations....................: 120    1.935675/s
    vus...........................: 1      min=1        max=5  
    vus_max.......................: 150    min=150      max=150

    NETWORK
    data_received.................: 16 kB  263 B/s
    data_sent.....................: 8.4 kB 136 B/s

running (1m02.0s), 000/150 VUs, 120 complete and 0 interrupted iterations
fixed ✓ [======================================] 000/150 VUs  1m0s  2.00 iters/s
ERRO[0062] thresholds on metrics 'http_req_duration' have been crossed 

It’s just 2 requests per second.
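As a sanity check on the load itself: 2 requests/s × 60 s = 120 requests, which matches the count=120 in the report, so the arrival rate seems to be applied correctly.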

I’ve been reading the docs, searching Google and the rest of the internet, and asking AI, and I can’t find a way to fix this.

Is it something I’m doing wrong, or is this some kind of bug?

I verified with other tools that the server handles 30 requests/s with a p99 of 8 seconds.

But only with k6 do I get this weird p(99)=59.82s.

Thanks.