Performance testing in continuous deployment to prod

I would like to hear about the process you follow when doing performance testing with k6 in a continuous deployment to production pipeline.

Hi @sahaniperera995,

I’m sorry, but I don’t think I can share internal details here about how we run performance and load tests as part of our continuous integration and continuous deployment pipelines. I can only tell you that yes, as you can see from our open-source projects, we generally dogfood our own tools.

That said, I’d encourage you to search the web, because there are plenty of blog posts, and even a few books, that explain different strategies depending on the type of services, requirements, resources, etc. I’m pretty sure you’ll easily find some useful resources that are a good fit for your use cases.

Cheers! :bowing_man:

Hi @joanlopez. Thank you, and I do understand the concerns about sharing internal details.
I have tried searching the web, but I couldn’t find useful insights that fit our case. Currently we follow a process for performance testing in continuous development, but as we move towards continuous deployment, several concerns arise.
In our planned approach, we want to go for a leaner (lightweight) perf strategy. We have a bulky system with a lot of components (microservices). Once a piece of development work is completed, we want to run all the quality checks, including the performance test, and deploy the changes to production in a single pipeline. In our current performance tests, considering the high load and the time it takes to ramp up, a test usually runs for around 4 hours. But in the pipeline described above, running a 4-hour test for each change would be impossible, as we cannot run several performance tests in the same perf environment at the same time (since we run the perf tests against the back end (BFF)).
So when doing the testing, it should not have any dependency on the performance environment (we are thinking of a perf environment spawned on the go).
If you have done something similar or have ideas on this, it would be a great help if you could share some insights with us. Even a few pointers would be enough, as I do understand the concern about sharing internal details.

Thank you in advance :bowing_woman:

Hi @sahaniperera995,

Yeah, I guess something like what you described makes sense.

I suspect a recommendation here might be to try to reuse your k6 code to run different types of tests, from smoke tests to more stress-like tests, because, as you said, running more intensive load tests isn’t trivial (at the very least it requires a production environment capable of supporting the load, or a development/testing environment capable of handling a decent load, etc.). A minimal sketch of what I mean is below.
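
Just to illustrate the idea (this isn’t our internal setup; the endpoint, the `TEST_TYPE` variable name, and the numbers are all placeholders), a single k6 script can hold the shared test logic and switch the load profile through an environment variable:

```javascript
// script.js - one script, two load profiles selected at run time.
// TEST_TYPE is a made-up variable name; pass it with `k6 run -e TEST_TYPE=smoke script.js`.
import http from 'k6/http';
import { check, sleep } from 'k6';

const profiles = {
  // Quick sanity check, fast enough to run on every change in the pipeline.
  smoke: {
    vus: 1,
    duration: '1m',
  },
  // Longer ramping test, run on a schedule rather than per commit.
  load: {
    stages: [
      { duration: '30m', target: 200 },
      { duration: '3h', target: 200 },
      { duration: '30m', target: 0 },
    ],
  },
};

export const options = {
  ...profiles[__ENV.TEST_TYPE || 'smoke'],
  thresholds: {
    http_req_failed: ['rate<0.01'],
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  // Replace with the BFF endpoints you actually exercise.
  const res = http.get('https://your-bff.example.com/health');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

You’d run it with something like `k6 run -e TEST_TYPE=smoke script.js` in the fast path of the pipeline, and with `-e TEST_TYPE=load` in the longer, scheduled job.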

So, if your system evolves very often, with multiple commits per day pushed to the mainline and deployed, you can probably run the smoke tests after each change, which should be much faster than what you mentioned, and then run the intensive tests only once every few hours (4-6), or even less frequently, like once per day, depending on your requirements.
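
And just as an illustrative sketch of how the smoke run could gate a deploy (again, the thresholds and endpoint are placeholders, not a prescription): k6 exits with a non-zero exit code when a threshold is crossed, so most CI systems will mark the step as failed and stop the pipeline before the deployment happens.

```javascript
import http from 'k6/http';

// Smoke-profile options whose thresholds fail fast: if the service clearly
// misbehaves, the run aborts early and k6 returns a non-zero exit code.
export const options = {
  vus: 1,
  duration: '1m',
  thresholds: {
    http_req_failed: [
      // Abort the run if more than 1% of requests fail (placeholder value).
      { threshold: 'rate<0.01', abortOnFail: true, delayAbortEval: '10s' },
    ],
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  http.get('https://your-bff.example.com/health'); // hypothetical endpoint
}
```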

I hope that helps! :pray:
