I have some requests in a group that follow redirects. As I also have static assets in this group, I have created a custom trend,
frontPageTrend.add(res1.timings.duration);, which is supposed to track the response time of the main request only. It seems like the trend only counts the last redirect, since res1.timings only tracks the final request in the chain.
From the docs: Note that in the case of redirects, all the information in the Response object will pertain to the last request (the one that doesn’t get redirected).
How can I add up the response times of all the redirected requests, including the final one? In this case, I have only one initial request, which has four redirects. Do I have to put the request in a separate group and just use the group duration?
You can’t (at the moment) get the timings for the intermediate redirects in the script, apart from disabling redirects and following them in the code. This could be an interesting addition, so feel free to open an issue.
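To make the "disable redirects and follow them in code" approach concrete, here is a plain-JS sketch of the hop-summing logic. The `doRequest` callback is a stand-in for k6's `http.get(url, { redirects: 0 })`, and the mocked responses mirror the fields a k6 Response exposes (`status`, `headers`, `timings.duration`); the function and metric names are my own, not part of k6.

```javascript
// Follow a redirect chain manually, summing each hop's duration.
// In k6, doRequest would be: (u) => http.get(u, { redirects: 0 })
function followChain(url, doRequest, maxHops = 10) {
  let res = doRequest(url);
  let total = res.timings.duration;
  let hops = 0;
  while (res.status >= 300 && res.status < 400 && hops < maxHops) {
    res = doRequest(res.headers['Location']);
    total += res.timings.duration;
    hops++;
  }
  return { total, hops, finalStatus: res.status };
}

// Mocked chain: two 302s, then a 200.
const responses = {
  '/a': { status: 302, headers: { Location: '/b' }, timings: { duration: 10 } },
  '/b': { status: 302, headers: { Location: '/c' }, timings: { duration: 20 } },
  '/c': { status: 200, headers: {}, timings: { duration: 30 } },
};
const result = followChain('/a', (u) => responses[u]);
console.log(result.total, result.hops, result.finalStatus); // 60 2 200
```

The summed `total` is what you could then feed into your custom trend.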
They are still emitted as separate metrics for each redirect, so if you want the values separately you can just add a tag to the request you want to filter on later (technically you will already have the url tag, which should be sufficient).
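For illustration, a custom tag is just an entry in the params object passed as the second argument to the k6 http.* functions. A minimal sketch (the tag name "page" and its value are assumptions for illustration, not k6 built-ins):

```javascript
// Build a k6-style params object carrying a custom tag, so the
// per-redirect metrics can be filtered later in InfluxDB/Grafana.
function taggedParams(page) {
  return { tags: { page } };
}

const params = taggedParams('front_page');
// In k6: const res = http.get('https://test.k6.io/', params);
console.log(JSON.stringify(params)); // {"tags":{"page":"front_page"}}
```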
If you want one metric that combines all of them… yeah, either add another group (it could just be inside the current group you have) or you can use Date.now():
```javascript
import http from 'k6/http';
export default function () {
  let start = Date.now();
  let res = http.get("https://httpbin.org/redirect/5");
  console.log(Date.now() - start); // wall-clock time across all redirects
}
```
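If you want that combined time as a metric rather than a log line, the same measurement can be fed into a custom Trend. A runnable plain-JS sketch (the Trend class below is a stand-in for k6's `Trend` from 'k6/metrics', and the metric name is an assumption):

```javascript
// Stand-in for k6's Trend so this snippet runs anywhere;
// in k6: import { Trend } from 'k6/metrics';
class Trend {
  constructor(name) { this.name = name; this.values = []; }
  add(v) { this.values.push(v); }
}

const redirectChain = new Trend('redirect_chain_duration');

const start = Date.now();
// In k6, the request (with all its redirects) would happen here:
// const res = http.get('https://httpbin.org/redirect/5');
redirectChain.add(Date.now() - start);

console.log(redirectChain.values.length); // 1
```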
Thanks for the tips.
Is this also true for http_req_duration? Like the http_req_duration metric of specific groups: does this only count the last request of the redirect chain?
It seems like I found out :) It is behaving the same, of course, as it is basically the same metric.
But what I find a little strange is that there are differences between “similar” metrics.
The http_req_duration of a URL is measured as a few ms lower than the custom metric for the same URL.
Also, the group_duration of a group is measured as a few ms more than the manual timing (like in your example). This might be expected, as the group duration includes all code execution, maybe?
I am just asking because I want to choose the most accurate timing for my tests. What would you recommend using? I am fine with being a few ms off, but if I have many string operations in a group, for instance, they might contribute too much if they are included in the group_duration.
Sorry for the slow reply, @niklasbae
The difference comes from the fact that http_req_duration measures only the time it takes to make the request and get the response. It does not include everything that happens between you calling http.* and the request actually starting, or between the request finishing and k6 returning the response object to you. The same goes for group_duration: the group timer also covers all of the script code executed inside the group.
So both are accurate; they just measure slightly different things…
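The relationship can be illustrated with made-up numbers (these durations are invented for illustration, not real measurements):

```javascript
// Toy illustration of why group_duration can exceed http_req_duration:
// the group timer also covers script work done inside the group.
const httpReqDuration = 120;  // time on the wire + server, in ms
const scriptWorkInGroup = 7;  // parsing, string ops, trend.add(), ...
const groupDuration = httpReqDuration + scriptWorkInGroup;

console.log(groupDuration > httpReqDuration);  // true
console.log(groupDuration - httpReqDuration);  // 7 (the "few ms off")
```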
No worries, you guys are doing great, I appreciate it!
With all these choices… I wonder what is the most appropriate to use for accurate performance measurements? I guess the answer is that different measurements fit different use cases. But it would be great to have general guidelines to follow for normal use cases.
My goals are the following:
- Measure the time of a transaction of requests, representing all the requests that constitute a user action.
- Measure the time of single requests, including redirects. (I have let the script follow redirects this time, but I might change tactics in the future and handle the redirects manually instead of following them.)
The most convenient is to use group_duration for the transactions, but are there any drawbacks to using it? I just need to build confidence in the numbers.
What is the best way to display each single HTTP request with its timings? I would also like to group them by their original group for easy analysis. I guess this kind of visualisation already exists in LoadImpact, but I would like to have it in Grafana/InfluxDB as well.