How to ensure a high level of software product performance?
With technology evolving at a rapid pace, it is essential to make sure a software product will perform well under the expected workload. A product that meets its required parameters delivers benefit to the company, saves costs, and secures a positive reputation.
However, when unexpected performance losses occur, they lead to serious consequences, as happened during Florida's 2004 state election, when the e-voting system crashed. All the municipal election records were lost, which made the whole voting procedure look questionable. It's hard to say which was worse in this case: the data loss or the damage to the state's reputation.
Performance testing exists to prevent issues of this kind. It provides companies with information on the product's speed, scalability, stability, and other related characteristics. Such issues should be uncovered and fixed before the product goes to market.
Performance testing is one of the most extensive parts of quality assurance, and it always takes time to find out where the problem lies.
How to test performance?
Depending on what the company expects from its product or system, QA engineers run different types of tests. Here at Intetics, we stick to two major types:
1) Performance and load testing. Checks whether the system meets its performance criteria: how fast individual system modules perform under a particular workload, and how the application behaves under a specific expected load.
2) Stress testing. Determines the ability of a computer, network, program, or device to maintain a certain level of effectiveness under unfavorable conditions.
Intetics' approach to performance testing
Though product owners want to check various product performance parameters, we have done our best to streamline the testing approach. We follow these steps:
1. Defining test transactions
Together with the client, QA engineers define which test scripts should be created and which user actions they should imitate. For instance, if we test an e-shop's performance to see how it reacts to certain user activity, QA engineers create a script that repeats the whole user path. If it's necessary to check performance under 10,000 users, the script imitates their simultaneous behavior under prescribed conditions, each with a separate account.
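The scripted-user idea above can be sketched in Python. The step functions and the user path are hypothetical stand-ins for the real HTTP requests a load-testing tool would send; a minimal sketch, assuming each virtual user replays the same path with its own account ID:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical user path for an e-shop. Each step would normally be an
# HTTP request against the system under test; plain functions stand in
# here so the sketch is runnable on its own.
def open_home_page(user_id): time.sleep(0.001)
def search_product(user_id): time.sleep(0.001)
def add_to_cart(user_id):    time.sleep(0.001)
def checkout(user_id):       time.sleep(0.001)

USER_PATH = [open_home_page, search_product, add_to_cart, checkout]

def run_user(user_id):
    """Replay the whole user path for one virtual user account."""
    for step in USER_PATH:
        step(user_id)
    return user_id

def run_load(num_users):
    """Imitate num_users acting simultaneously, each with its own account."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        finished = list(pool.map(run_user, range(num_users)))
    return len(finished)

print(run_load(100))  # number of virtual users that completed the path
```

In a real run, each step would issue a request to the system under test, and the pool size would match the target concurrency.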
2. Defining performance requirements
At this stage, QA engineers define what performance is expected of the system and which parameters need to be checked. For instance, a client may require that memory load stay below 80% with 10,000 active users, and that the system keep working under this load for an entire day.
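One way to make such requirements verifiable is to encode them as explicit thresholds that a test run is checked against. This is a minimal sketch with made-up metric names; the 80% memory load, 10,000 users, and full-day duration come from the example above:

```python
# Hypothetical performance requirements, expressed as checkable thresholds.
REQUIREMENTS = {
    "memory_load_pct_max": 80,   # memory load must stay below 80%
    "active_users_min": 10_000,  # while 10,000 users are active
    "duration_hours_min": 24,    # for an entire day under this load
}

def check_requirements(measured):
    """Return the list of requirement keys that the measured run violated."""
    violations = []
    if measured["memory_load_pct"] > REQUIREMENTS["memory_load_pct_max"]:
        violations.append("memory_load_pct_max")
    if measured["active_users"] < REQUIREMENTS["active_users_min"]:
        violations.append("active_users_min")
    if measured["duration_hours"] < REQUIREMENTS["duration_hours_min"]:
        violations.append("duration_hours_min")
    return violations

run = {"memory_load_pct": 85, "active_users": 10_000, "duration_hours": 24}
print(check_requirements(run))  # -> ['memory_load_pct_max']
```

Writing the requirements down in machine-checkable form makes each later test run a simple pass/fail comparison instead of a manual judgment.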
3. Testing start
When the requirements are set and the scripts are created, it's time to begin testing. All the computers involved in the system's or product's operation get profiled, and QA engineers get access to them. Access to all system resources allows running wall-to-wall testing: it becomes possible to see how heavily the processor is loaded, what data the database returns, what the response times are, and any other parameters recorded in the report.
The last part of the performance testing process is the report. This phase is the most complicated and time-consuming: there is a lot of data to analyze and graphs to build, and it also takes time to reveal connections, define bottlenecks, and run further tests if needed to localize problematic sections. For the client's convenience, Intetics provides a final report divided into two parts: one contains the basic results, and the other includes detailed recommendations on product improvement.
How to measure performance?
Reporting is the finale, when the product owner can see the level of product performance and gets an overview of the whole system's state. To make the system analysis in the report as clear as possible, we use a set of metrics that, based on our experience, fall into the following three groups:
1) Virtual users. Counts the number of virtual users engaged in testing and the percentage of those who completed the intended transactions.
2) System parameters. A summarized set of checked system parameters (agreed with the client) and the test results for them. Among these can be available short-term memory, supervisor engine utilization, the number of successfully completed transactions, available megabytes of physical memory, processor time percentage, and others.
3) Response time. Defines the average response time for the various types of requests.
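Two of these metric groups can be computed directly from raw test samples. A minimal sketch with invented sample data, showing the completed-transaction percentage and the average response time per request type:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical raw samples: (request_type, response_seconds, completed)
samples = [
    ("login",    0.20, True),
    ("login",    0.30, True),
    ("search",   0.50, True),
    ("search",   0.70, False),  # this user did not finish the transaction
    ("checkout", 1.10, True),
]

# Virtual users metric: share of samples that finished their transactions.
completed_pct = 100 * sum(s[2] for s in samples) / len(samples)

# Response time metric: average response time for each type of request.
by_type = defaultdict(list)
for req, seconds, _ in samples:
    by_type[req].append(seconds)
avg_response = {req: mean(times) for req, times in by_type.items()}

print(completed_pct)  # -> 80.0
print(avg_response)   # per-request-type averages
```

In practice the samples would come from the load-test tool's result log, but the aggregation step stays this simple.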
When to run performance testing?
The frequency of performance testing depends on the project, and it can be run at various stages of system or product development. For example, if QA engineers test an application that involves many third-party services, such as Google Maps, performance testing is run at the stage when those services are connected.
It is also recommended to run testing several times: for instance, after serious changes to the application, or after server integration. Short tests can be performed every iteration and compared with the results of previous iterations. This allows detecting new issues and localizing problem areas in time.
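Comparing iterations can be as simple as diffing the per-transaction averages against the previous run. A sketch with hypothetical numbers, flagging any transaction that slowed down by more than a chosen tolerance:

```python
# Hypothetical per-iteration results: average response time (seconds)
# per transaction, from the previous and the current short test run.
previous = {"login": 0.25, "search": 0.60, "checkout": 1.10}
current  = {"login": 0.26, "search": 0.95, "checkout": 1.08}

def find_regressions(prev, curr, tolerance=0.10):
    """Flag transactions whose response time grew by more than `tolerance`."""
    return [name for name in curr
            if name in prev and curr[name] > prev[name] * (1 + tolerance)]

print(find_regressions(previous, current))  # -> ['search']
```

The tolerance absorbs normal run-to-run noise, so only genuine slowdowns are reported.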
Performance testing is one of the most extensive and complicated types of software testing. It takes time and resources and requires a careful approach. At Intetics, QA engineers run performance testing to make product owners aware of the issues their products have. Testing helps to deal with those particular issues and boosts the quality of the product or system.