In the next few posts we will focus on aspects of Performance Testing. They will cover, for example, basic aspects of Performance Testing, performance measurements, Performance Testing in the SDLC, and much more.
In previous posts, we’ve talked about JMeter and explained that it is a tool for performance testing. But what is performance testing? We will explain this and other aspects in this post.
What is Performance Testing?
Performance Testing is a software testing process that we use to test the speed, response time, stability, reliability, scalability, and resource usage of a software application under a particular workload. The main purpose of performance testing is to identify and eliminate the performance bottlenecks in the software application. It includes any kind of testing focused on performance (responsiveness) of the system or component under different volumes of load. We will present them below.
Load testing focuses on the ability of a system to handle increasing levels of anticipated realistic loads resulting from transaction requests generated by controlled numbers of concurrent users or processes.
Stress testing focuses on the ability of a system or component to handle peak loads that are at or beyond the limits of its anticipated or specified workloads. In addition, we can use it to evaluate a system’s ability to handle reduced availability of resources such as memory.
Scalability testing focuses on the ability of a system to meet future efficiency requirements, which may be beyond those currently required. It focuses on determining the system’s ability to grow without violating the currently specified performance requirements or failing.
Once we know the limits of scalability, we can set and monitor threshold values in production to provide a warning of problems that may be about to arise. Moreover, we may adjust the production environment with appropriate amounts of hardware.
Spike testing focuses on the ability of a system to respond correctly to sudden bursts of peak loads and return afterward to a steady-state.
Endurance testing focuses on the stability of the system over a time frame specific to the system’s operational context. This type of testing verifies that there are no resource capacity problems. Such problems may eventually degrade performance and/or cause failures at breaking points.
Concurrency testing focuses on the impact of situations where specific actions occur simultaneously. The concurrency issues are notoriously difficult to find and reproduce. This happens particularly when the problem occurs in an environment where testing has little or no control, such as production.
Capacity testing determines how many users and/or transactions a given system will support while still meeting the stated performance objectives. We can state these objectives in terms of the data volumes resulting from the transactions.
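To make these workload ideas concrete, here is a minimal Python sketch (not a substitute for a dedicated tool like JMeter) that simulates a controlled number of concurrent users executing a transaction and reports basic response-time statistics. The `transaction` stub and all parameters are illustrative assumptions, not a real service call.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def transaction() -> float:
    """Stub for a real transaction (e.g., an HTTP request); returns response time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service work
    return time.perf_counter() - start


def run_load(concurrent_users: int, requests_per_user: int) -> list:
    """Generate load with a controlled number of concurrent users (load testing idea)."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(transaction)
                   for _ in range(concurrent_users * requests_per_user)]
        return [f.result() for f in futures]


times = run_load(concurrent_users=10, requests_per_user=5)
print(f"samples={len(times)} "
      f"mean={statistics.mean(times):.4f}s "
      f"p95={statistics.quantiles(times, n=20)[-1]:.4f}s")
```

Raising `concurrent_users` toward and beyond the expected limit moves the same sketch from a load-testing scenario toward a stress-testing one.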
In the ISTQB Foundation Level Performance Testing syllabus we can find information about the principles of performance testing. But what are they about? We’ll discuss them below.
The Aspect of Load Generation
In order to carry out the various types of performance testing, representative system loads must be modeled, generated, and submitted to the system under test. The efficient and reliable generation of a specified load is a key success factor when conducting performance tests. There are different options for load generation. Let’s take a look at them.
Load Generation via the UI
Load Generation via the User Interface is an adequate approach if only a small number of users are to be represented and if the required numbers of software clients are available from which to enter the required inputs. We may also use this approach in conjunction with functional test execution tools, but it may rapidly become impractical as the number of users to be simulated increases. Testing through the UI may be the most representative approach for end-to-end tests.
Load Generation using Crowds
This approach depends on the availability of a large number of testers who will represent real users. In crowd testing, the testers are organized such that the desired load can be generated. This may be a suitable method for testing, for example, web-based applications, and may involve the users generating a load from a wide range of different device types and configurations. Although this approach may enable very large numbers of users to be utilized, the load generated will not be as reproducible and precise as with other options, and it is more complex to organize.
Load Generation via the API
This approach is similar to using the UI for data entry but uses the application’s API instead of the UI to simulate user interaction with the system under test. The approach is therefore less sensitive to changes (e.g., delays) in the UI and allows the transactions to be processed in the same way as they would be if entered directly by a user via the UI. We can create dedicated scripts which repeatedly call specific API routines, enabling more users to be simulated compared to using UI inputs.
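As a sketch of this approach, the script below drives a hypothetical `call_api` routine directly, bypassing the UI entirely. The endpoint name, payload, and client stub are all assumptions for illustration; a real script would issue actual HTTP or RPC calls here.

```python
import threading
import time

def call_api(endpoint: str, payload: dict) -> int:
    """Hypothetical API client stub; a real script would perform an HTTP call here."""
    time.sleep(0.005)  # simulated network round trip
    return 200         # simulated status code

results = []
lock = threading.Lock()

def virtual_user(iterations: int) -> None:
    """One simulated user repeatedly invoking the same API routine."""
    for _ in range(iterations):
        status = call_api("/orders", {"item": "book", "qty": 1})
        with lock:
            results.append(status)

threads = [threading.Thread(target=virtual_user, args=(20,)) for _ in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"requests sent: {len(results)}, errors: {sum(s >= 400 for s in results)}")
```

Because each virtual user is just a thread calling a routine, scaling to more simulated users is far cheaper than driving real UI clients.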
Load Generation using Captured Communication Protocols
Load Generation using Captured Communication Protocols is a tool-based approach that involves capturing user interaction with the system under test at the communications protocol level and then replaying these scripts to simulate potentially very large numbers of users in a repeatable and reliable manner.
Principles of Performance Testing
Let’s take a look at the following principles of performance testing.
- We need to align tests to the defined expectations of different stakeholder groups, in particular users, system designers, and operations staff.
- The tests must be reproducible. Repeating them on an unchanged system must yield statistically identical results (within a specified tolerance).
- The tests must yield results that are both understandable and can be readily compared to stakeholder expectations.
- We can conduct the tests, where resources allow, on complete or partial systems, or on test environments that are representative of the production system.
- The tests must be practically affordable and executable within the timeframe set by the project.
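The reproducibility principle above can be checked mechanically: two runs against an unchanged system should agree within a specified tolerance. A minimal sketch, using simulated response times and an assumed 10% relative tolerance on the mean:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulated runs are deterministic

def run_measurement(n: int = 200) -> list:
    """Stand-in for one performance test run; returns simulated response times (seconds)."""
    return [0.100 + random.gauss(0, 0.005) for _ in range(n)]

def within_tolerance(run_a, run_b, tolerance=0.10) -> bool:
    """Reproducibility check: mean response times of two runs on an
    unchanged system should agree within a specified relative tolerance."""
    mean_a, mean_b = statistics.mean(run_a), statistics.mean(run_b)
    return abs(mean_a - mean_b) / mean_a <= tolerance

print(within_tolerance(run_measurement(), run_measurement()))  # prints True
```

In practice you would compare percentiles (e.g., p95) as well as means, since tail latency often varies more between runs.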
ISTQB Foundation Level Performance Testing syllabus recommends the books which provide a solid background to the principles and practical aspects of performance testing. The books are:
- “The Art of Application Performance Testing: From Strategy to Tools” by Ian Molyneaux
- “Performance Testing Guidance for Web Applications” by Microsoft Corporation
Testing types in Performance Testing
We now know what Performance Testing is and what its principles are. In this section we will talk about the principal testing types used in performance testing: static testing and dynamic testing.
Static testing activities are even more important for performance testing than for functional testing. This is because many critical performance defects are introduced in the architecture and design of the system. Such defects can be introduced through misunderstandings or a lack of knowledge on the part of the designers and architects, or because of requirements that did not adequately capture response time, throughput, and similar attributes.
Static testing activities for performance include activities such as:
- reviews of requirements with a focus on performance aspects and risks
- reviews of database schemas, entity-relationship diagrams, metadata, stored procedures, and queries
- reviews of the system and network architecture
- reviews of critical segments of the system code (e.g., complex algorithms)
As the system is built, we should start to perform dynamic performance testing as soon as possible. In higher test levels such as system testing and system integration testing, the use of realistic environments, data, and loads are critical for accurate results. In Agile and other iterative-incremental lifecycles, teams should incorporate static and dynamic performance testing into early iterations rather than waiting for final iterations.
Opportunities for dynamic performance testing include:
- During unit testing, including using profiling information to determine potential bottlenecks and dynamic analysis to evaluate resource utilization
- During component integration testing, across key use cases and workflows. Especially when integrating different use case features or integrating with the “backbone” structure of a workflow
- Throughout system testing of overall end-to-end behaviors under various load conditions
- During system integration testing, especially for data flows and workflows across key inter-system interfaces. In system integration testing it is not uncommon for the “user” to be another system or machine (e.g., inputs)
- During acceptance testing, to build user, customer, and operator confidence in the proper performance of the system and to fine-tune the system under real-world conditions. Generally, in this phase we are not focused on finding performance defects in the system
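As an example of the first opportunity above (profiling during unit testing), here is a minimal sketch using Python’s built-in cProfile on a deliberately inefficient, hypothetical function; the function name and workload are assumptions for the demonstration.

```python
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    """Deliberately inefficient O(n*m) membership test -- a typical unit-level bottleneck."""
    return [t for t in targets if t in items]  # linear list scan instead of a set lookup

profiler = cProfile.Profile()
profiler.enable()
slow_lookup(list(range(5000)), list(range(0, 5000, 7)))
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # the top entries by cumulative time point at the bottleneck
```

Replacing the list with a `set` would remove this bottleneck; the point is that profiling surfaces such candidates long before system-level load tests run.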
Common Performance Efficiency Failure Modes and Their Causes
While there certainly are many different performance failure modes that we can find during dynamic testing, the following are some examples of common failures (including system crashes), along with typical causes.
Slow response under all load levels
It may be caused by underlying performance issues, including, but not limited to, bad database design or implementation, network latency, and other background loads. We can identify such issues during functional and usability testing, not just performance testing, so test analysts should keep an eye open for them and report them.
Slow response under moderate-to-heavy load levels
Response degrades unacceptably with moderate-to-heavy load, even when such loads are entirely within normal, expected, allowed ranges. Underlying defects include saturation of one or more resources and varying background loads.
Degraded response over time
Response degrades gradually or severely over time. Underlying causes include memory leaks, disk fragmentation, increasing network load over time, growth of the file repository, and unexpected database growth.
Inadequate or graceless error handling under heavy or over-limit load
Response time is acceptable but error handling degrades at high and beyond-limit load levels. Underlying defects include insufficient resource pools, undersized queues and stacks, and too rapid time-out settings.
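The last failure mode can be contrasted with graceful degradation using a bounded work queue: when the queue is full, the system sheds load with an explicit error instead of crashing. A minimal sketch, with the queue size deliberately undersized for the demonstration:

```python
import queue

work_queue = queue.Queue(maxsize=100)  # undersized resource pool, on purpose for the demo
accepted, rejected = 0, 0

for request_id in range(250):  # burst beyond the configured limit
    try:
        work_queue.put_nowait(request_id)
        accepted += 1
    except queue.Full:
        rejected += 1  # graceful degradation: reject with a clear error instead of crashing

print(f"accepted={accepted} rejected={rejected}")  # prints accepted=100 rejected=150
```

Performance tests at and beyond the limit should verify exactly this behavior: requests over capacity receive a well-defined error rather than timeouts or crashes.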
I have left the most interesting part of this article for dessert: examples of these general types of failures. Let’s take a look at them.
Yesterday (Thursday, 22nd July), a lot of major websites went down because of a DNS (Domain Name System) issue. This meant that visitors’ requests couldn’t be processed and they couldn’t access the websites. When I wanted to visit the British Airways website, an error appeared. I didn’t yet know about the global outage, so I opened the DevTools Network tab and refreshed the page. A 503 error arrived. The issue was fixed at about 5:50 pm, but people couldn’t reach the websites for a while. The websites affected by the emerging issue with Akamai Technologies’ Edge DNS service included British Airways, Airbnb, Delta Air Lines, UPS, HSBC Bank, and many more. You can read more about the issues here and here.
A few years ago the UK government made the results of the 1901 census available on the internet. That was a great opportunity to gather knowledge about one’s family history. On the first day of the release, people opened the application, which was quite slow but worked as expected. However, within the next 24 hours an apologetic message appeared saying that the site was unavailable. I found this example in the book “The Art of Application Performance Testing”.
That’s why it is so important to make sure that we test and maintain application performance properly.
In this post we saw what Performance Testing is and what it includes. We gained knowledge about the aspects of Load Generation and the principles of Performance Testing. At the end, we got information about performance efficiency failures and their causes, with real-life examples.
I hope you’ve enjoyed the article and found some interesting information.
The next post will arrive soon 🙂
The information in this post has been based on the ISTQB Foundation Level Performance Testing Syllabus.
Graphics used in this post come from other sources; the full URLs are included in the hyperlinks.