Automating end-to-end load testing of highly dynamic applications

Hi, we have a website / application that is highly dynamic in nature from two perspectives:

  1. The names of static assets / resources change with every build.
  2. Significant client-side logic depends on the responses from APIs.

To come up with a realistic test script, it seems to me that recording is the only way to go. I’ve seen posts about inspecting the response to get a list of static assets to download, but what I’ve seen in practice is that going down this route ends with the test scripts duplicating the application logic, especially once you take into account the logic around handling API responses, etc.

So, if we assume recording is the only way, then the question is: how do we automate the end-to-end process of recording and test generation?

One idea I thought of is using a browser-based test to automate the scenarios. There are many tools out there to do that, like RPA or, more recently, agents, but given we’re using k6, can this be done using k6 browser tests?

In other words, is it possible to automate this:

  • Run k6 browser test for a number of scenarios
  • For each scenario, export the requests executed to a HAR file
  • Run the har-to-k6 converter to generate the test scripts
  • Run the scripts as part of a load test
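The four steps above could be wired together with a small driver. Here is a dry-run sketch in JavaScript that only prints the commands it would execute; the scenario names, directory layout, and the assumption that the browser run somehow emits a HAR are all hypothetical (the HAR export is exactly the open question below):

```javascript
// Dry-run sketch of the proposed pipeline. Nothing is executed; the commands
// are just printed so the shape of the automation is visible.
const scenarios = ['login', 'checkout']; // hypothetical scenario names

function pipelineCommands(scenario) {
  return [
    // 1. Run the browser-module script for this scenario. Getting it to emit
    //    a HAR file is the unresolved step discussed in this thread.
    `k6 run browser/${scenario}.js`,
    // 2. Convert the recorded HAR into a protocol-level k6 script.
    `har-to-k6 recordings/${scenario}.har -o generated/${scenario}.js`,
    // 3. Run the generated script as the actual load test.
    `k6 run generated/${scenario}.js`,
  ];
}

for (const s of scenarios) {
  for (const cmd of pipelineCommands(s)) console.log(cmd);
}
```

In a real setup the loop body would shell out to these commands (and fail fast on non-zero exit codes) instead of printing them.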

I couldn’t find a way to export the requests executed during a browser test to HAR format. Is this possible?
Is this whole approach misguided?
All input is appreciated.

Thanks!

Hi @tnabil,

I’d like to understand your workflow a bit more.

So let’s say you build a new version of your website, and the static asset/resource names change (e.g. from r8gar.css to 0a98e.css). How does the build process affect the DOM itself? Do element attribute names/values also change with every build?

e.g.

<div id="a09sdu">Hello World</div>

After a new build:

<div id="sd231">Hello World</div>

To come up with a realistic test script, it seems to me that recording is the only way to go.

Based on what you say here, it sounds like it’s only the resource names that change between builds. I’m going to paraphrase so that I can make sure I’ve understood your proposed workflow:

  1. You will first need to manually author one or more k6 test scripts using the browser module, for example a login scenario and a checkout scenario.
  2. You will then run these k6 browser-based tests once after each build, and each test run should produce a HAR file of all the requests it made.
  3. Run the har-to-k6 converter to generate the protocol-based k6 test scripts.
  4. Run the protocol-based k6 test scripts for your load testing needs.

So this flow would need to be repeated after every successful build of the website. Is that correct?

I couldn’t find a way to export the requests executed during a browser test to HAR format. Is this possible?

Take a look at this comment, which sounds like it might help in your use case.

Is this whole approach misguided?

Not sure yet. Once you can confirm whether I’ve understood you correctly, I can get back to you with some more suggestions… if there are any :sweat_smile:

Best,
Ankur

Hi @tnabil , do you really need to test each specific static asset/resource by its individual URL? Those only contribute network download time. In other words, do you actually need the k6 protocol test as the final output here, or could a thorough k6 browser test serve that purpose? The browser test will automatically fetch all the linked static assets regardless of the resource-name changes, and it will exercise the APIs as long as you navigate to the pages that trigger them. By waiting for specified UI states, it also provides the additional advantage of verifying the client rendering time (after the resource is downloaded). This covers the execution of JavaScript, including the display of the API response.

Hi @ankur,
Thank you for your response.
I haven’t really inspected the DOM elements to verify whether they change from one build to another. What do you have in mind? Are you thinking about whether it would be possible to parse the HTML to extract the asset names? I guess it’s possible, and one of the test suites the team developed did just that. However, what I found was that it either ended up behaving differently from what the browser would do, or it ended up replicating a lot of the frontend code in the tests.
The problem with modern JS applications that use server-side rendering for the first request and client-side navigation afterwards is that the application’s behaviour depends on whether a navigation happens on the client or on the server.
For example, the first request, rendered on the server, downloads markup and a bunch of assets. A subsequent navigation, executed on the client, retrieves JSON data plus a bunch of additional assets. Those additional assets cannot be extracted from the second response, since it does not return HTML markup.
The end result is that it’s almost impossible to emulate the real traffic the browser generates.
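To make the point concrete, here is a toy illustration (the payloads are made up, and the regex is a demo, not a real HTML parser): scraping asset URLs works on the server-rendered HTML of the first request, but a client-side navigation returns JSON, which carries no markup to scrape.

```javascript
// Hypothetical server-rendered first response: asset URLs are in the markup.
const firstResponse = `<html><head>
  <link rel="stylesheet" href="/assets/0a98e.css">
  <script src="/assets/r8gar.js"></script>
</head><body>...</body></html>`;

// Hypothetical client-side navigation response: JSON data, no markup.
const clientNavResponse = JSON.stringify({ pageProps: { items: [1, 2, 3] } });

function extractAssets(body) {
  // Grabs href/src attribute values; good enough for the demo only.
  return [...body.matchAll(/(?:href|src)="([^"]+)"/g)].map((m) => m[1]);
}

console.log(extractAssets(firstResponse));     // two asset URLs found
console.log(extractAssets(clientNavResponse)); // [] — nothing to extract
```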
I’m actually surprised I cannot find any articles that delve into this problem, given how prevalent frameworks that behave this way (e.g. Next.js and NuxtJS) are today.
Thanks,
Tarek

Hi @richmarshall,
Thanks for your response.
Developing the whole suite using browser tests would definitely solve the problem. However, my understanding is that browser tests cannot be scaled to thousands of requests per minute, hence my focus on protocol-level tests.
Best regards,
Tarek

@tnabil I see your point. I’m pretty sure the solution from vendors that offer full browser-testing capability is to use a Cloud account and scale the test on their infrastructure, assuming your non-production/staging environment can be reached. It might be possible in your organization to make a financial case for a paid Cloud account for browser test execution, since script development can be accelerated to a degree by the simplification and abstraction. If you need to re-engineer a lot of the frontend code just to have runnable tests, that is a labor cost. Besides scaling, Cloud execution has other realism advantages.

You might be able to take a hybrid test approach: 5-10 full browsers running in parallel with standard k6 protocol tests at a much higher request level; please see the linked example. To do this you would have at least two different functions (if each were complex, perhaps imported from separate scripts) and group them using scenarios. Of course you would still need to create the server-side/back-end test(s), but perhaps it would be sufficient to simply include the initial SPA home page (or whatever your “first request” is) and all of the API calls that are part of the SPA journey. The test report would then include both the Core Web Vitals-type metrics and the standard k6 protocol metrics.
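The hybrid layout described above could look roughly like the following k6 options fragment. The executor choices, rates, durations, and function names here are assumptions for illustration, not values from the thread:

```javascript
export const options = {
  scenarios: {
    // A handful of real browsers exercising the UI journeys.
    ui_journeys: {
      executor: 'constant-vus',
      exec: 'browserFlow', // hypothetical browser-module function
      vus: 5,
      duration: '10m',
      options: { browser: { type: 'chromium' } },
    },
    // Protocol-level load at a much higher request rate, run in parallel.
    api_load: {
      executor: 'constant-arrival-rate',
      exec: 'apiFlow', // hypothetical protocol-level function
      rate: 100,
      timeUnit: '1s',
      duration: '10m',
      preAllocatedVUs: 50,
    },
  },
};
```

Both scenarios run in the same test, so a single report contains the browser metrics alongside the standard protocol metrics.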

I have not used Cloud testing, but for on-premises load generation I have successfully tested various SPAs with up to 30 parallel headless Chrome instances on a single AWS Windows host with a good deal of memory; more than 30 overloaded the host. In my experiments I had more success with 1) a slower ramp-up than used with standard k6 protocol tests, and 2) keeping each browser open longer by looping in the main function, to avoid the cost of terminating and starting new browsers. I was able to reach at least a few thousand requests per minute, but not 100K/minute. You might want to try your main workflow and see how far you can take it, but yes, some significant resources are needed. I have not used it, but distributed testing may be the workaround for scaling on-premises browser tests.

It has been many months since I used the browser module, but I recall having to log a test extensively to figure out a nested-frames situation for an ecommerce site. Take a look at these logging-level options: run your browser workflow(s) as 1 user/1 iteration and, within the debug output (set “K6_BROWSER_DEBUG=true”), it will generate a list of the assets and other dynamic requests which you can consume for the actual load test. This site is another reference for K6_BROWSER_ARGS. You would need to turn off the debug logging for the actual load test; it is very verbose and the overhead would cause problems. Perhaps write a shell script that runs the debug iteration and, upon completion, runs the main load test.

Hi @tnabil Using this script on quickpizza.grafana.com, I ran a test and all the resources (document, CSS, JS, PNG, JSON) were logged with their full URLs.

set "K6_BROWSER_HEADLESS=false" && set "K6_BROWSER_TIMEOUT=60s" && set "K6_BROWSER_DEBUG=true" && k6 run -e BASE_URL=https://quickpizza.grafana.com test.js 2> browser_debug.txt

From the browser example, I changed the executor to ‘shared-iterations’ and added iterations: 1.

time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/assets/0.56795cc5.css" category="FrameManager:requestStarted" elapsed="13 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/assets/4.1bdb0f28.css" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/entry/start.45f3b44f.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/chunks/index.98b0eb20.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/chunks/singletons.85e123fe.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/chunks/index.2793c3c9.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/entry/app.0cf1a1b7.js" category="FrameManager:requestStarted" elapsed="1 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/chunks/public.1fad5328.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/nodes/0.801da58d.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/nodes/4.a4cc3fe6.js" category="FrameManager:requestStarted" elapsed="1 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/images/pizza.png" category="FrameManager:requestStarted" elapsed="1 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/api/config" category="FrameManager:requestStarted" elapsed="8 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/nodes/1.b8774432.js" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/api/quotes" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/api/tools" category="FrameManager:requestStarted" elapsed="2 ms" iteration_id=4a6a8922e664bc2a source=browser
time="2025-05-04T09:10:22-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/api/pizza" category="FrameManager:requestStarted" elapsed="2 ms" iteration_id=4a6a8922e664bc2a source=browser
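One way to consume that debug output is a small script that pulls the requested URLs out of the “requestStarted” lines. This is a sketch; the log format is taken from the excerpt above, and the sample lines here are copied from it:

```javascript
// Two sample lines from the K6_BROWSER_DEBUG output shown above.
const sampleLog = [
  'time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/" category="FrameManager:requestStarted" elapsed="0 ms" iteration_id=4a6a8922e664bc2a source=browser',
  'time="2025-05-04T09:10:21-04:00" level=debug msg="fmid:1 rurl:https://quickpizza.grafana.com/_app/immutable/assets/0.56795cc5.css" category="FrameManager:requestStarted" elapsed="13 ms" iteration_id=4a6a8922e664bc2a source=browser',
].join('\n');

function extractRequestUrls(log) {
  // Only the requestStarted lines carry an "rurl:" token; pull out its value.
  return log
    .split('\n')
    .filter((line) => line.includes('FrameManager:requestStarted'))
    .map((line) => /rurl:(\S+)"/.exec(line))
    .filter(Boolean)
    .map((m) => m[1]);
}

console.log(extractRequestUrls(sampleLog));
```

In practice you would read the redirected debug file (e.g. browser_debug.txt from the command above) instead of a hard-coded sample, then feed the resulting URL list into a protocol-level script.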

Hi Rich,

Sorry for the late reply. I’ll investigate the feasibility of achieving the test outcomes using browser tests alone.
