The Renaissance of Performance Testing

Hamlet Consulting Luxembourg | 3:16 pm, 3rd August

Renaissance? Yes, more than ever.

To understand why, let's go back in time: initially, IT systems were heterogeneous and built on proprietary protocols. Manufacturers shipped closed systems, but they guaranteed the associated performance, and unpleasant surprises were rare.


Then came interconnection over TCP/IP, and with it the first real performance problems of interconnected systems.

After a few production incidents, organizations began to embrace the idea of evaluating system performance, but the discipline was demanding: it required both tools and expertise. Mercury LoadRunner was born.


The solution supported many proprietary protocols and adapted to a wide range of industries. It had one major drawback, however: its cost. License fees were so high that few companies took the plunge, and qualified personnel were scarce.

Alternatives existed, but they were limited and poorly compatible with the technologies of the time.

In the end, those who did not invest in such testing campaigns tried to solve performance problems by adding hardware capacity to absorb the operational load, which ultimately cost more and provided only temporary relief.


Consequently, most organizations relegated performance testing to the background; it remained a priority only for those with the necessary budget and for whom IT performance was vital to the business.


Then, a few years later, digital transformation arrived, with its focus on user experience.

New developments emphasized simple technologies and portability, and web and mobile applications quickly became the new standard. As a result, we are now witnessing a hyper-digitalization of the services companies offer. User experience sits at the heart of development strategies, but this is not without constraints, especially on infrastructure.

The load absorbed by these systems quickly skyrocketed, sometimes completely unpredictably. During the COVID-19 lockdown, the forced shift to remote work moved intranet traffic onto the internet, forcing some companies to reconsider their IT infrastructure (REF-1: BPCE and continuous performance testing).

Some companies then chose to move their infrastructure to the cloud, which has the advantage of being highly scalable. Scalability, however, does not eliminate the performance problems that stem from poor configuration, suboptimal development, and so on.


So, should one still invest in performance testing today?

Absolutely. Given our ever-evolving relationship with IT, it has become necessary to continuously measure the performance of services under both realistic and exceptional conditions. The good news is that, with the convergence towards the HTTP protocol, a multi-protocol performance testing tool is rarely needed anymore.

JMeter (an open-source tool developed by Apache) is making a strong comeback. The one constant throughout this evolution of IT is the need for qualified people. Performance testing profiles remain very rare, and for good reason: performance testing is not just about the tool; it is a project within the project that requires broad technical and functional knowledge.
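To give an idea of what this looks like in practice, below is a minimal sketch of a purely HTTP load test driven through JMeter's Java API (the ApacheJMeter_core and ApacheJMeter_http libraries on the classpath). The host example.com, the 50-user/30-second ramp-up figures and the /opt/apache-jmeter installation path are illustrative assumptions, not a recommended setup.

    import org.apache.jmeter.control.LoopController;
    import org.apache.jmeter.engine.StandardJMeterEngine;
    import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
    import org.apache.jmeter.testelement.TestPlan;
    import org.apache.jmeter.threads.ThreadGroup;
    import org.apache.jmeter.util.JMeterUtils;
    import org.apache.jorphan.collections.HashTree;

    public class MinimalHttpLoadTest {
        public static void main(String[] args) {
            // Placeholder path to a local JMeter installation; JMeter needs its
            // properties loaded even when driven from code.
            JMeterUtils.setJMeterHome("/opt/apache-jmeter");
            JMeterUtils.loadJMeterProperties("/opt/apache-jmeter/bin/jmeter.properties");
            JMeterUtils.initLocale();

            // One HTTPS GET request against a placeholder endpoint.
            HTTPSamplerProxy sampler = new HTTPSamplerProxy();
            sampler.setProtocol("https");
            sampler.setDomain("example.com");
            sampler.setPort(443);
            sampler.setPath("/");
            sampler.setMethod("GET");

            // Each virtual user repeats the request 10 times.
            LoopController loops = new LoopController();
            loops.setLoops(10);
            loops.setFirst(true);
            loops.initialize();

            // 50 virtual users ramped up over 30 seconds.
            ThreadGroup users = new ThreadGroup();
            users.setName("Web users");
            users.setNumThreads(50);
            users.setRampUp(30);
            users.setSamplerController(loops);

            // Assemble test plan -> thread group -> sampler, then run it.
            TestPlan plan = new TestPlan("Minimal HTTP load test");
            HashTree planTree = new HashTree();
            planTree.add(plan);
            HashTree groupTree = planTree.add(plan, users);
            groupTree.add(sampler);

            StandardJMeterEngine engine = new StandardJMeterEngine();
            engine.configure(planTree);
            engine.run();
        }
    }

In day-to-day practice the same plan would more often be built in the JMeter GUI, saved as a .jmx file and replayed in non-GUI mode on a dedicated load generator; the point here is simply that an HTTP-only test plan is short and needs no proprietary protocol support.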


In conclusion, in the era of "internet anywhere, anytime," companies that prioritize user experience should regularly audit their systems, whether hosted in the cloud or not. Organizations that overlooked performance testing (through lack of knowledge or for cost reasons) unfortunately exposed themselves to serious, sometimes disastrous, consequences for their revenue (REF-2: TSB Bank service interruption).


By Mohamed Reqba, Senior Manager, expert in automation and performance testing at Hamlet Consulting Luxembourg

