As I wrote in my previous post, this one will be about System Testing and Acceptance Testing.
Let’s talk about the last two test levels.
We will start with system testing.
System testing focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks.
Objectives:
- Reducing risk
- Verifying whether the functional and non-functional behaviors of the system are as designed and specified
- Validating that the system is complete and will work as expected
- Building confidence in the quality of the system as a whole
- Finding defects
- Preventing defects from escaping to higher test levels or production
We can also verify quality as a purpose. As with component testing and integration testing, in some cases automated system regression tests provide confidence that changes have not broken existing features or end-to-end capabilities.
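To make the idea of an automated system regression test concrete, here is a minimal sketch. The `order_total` function and its expected values are illustrative stand-ins for a real system capability (which you would normally drive through the deployed system’s UI or API):

```python
# Hypothetical end-to-end capability: computing an order total.
# In a real project this code would drive the deployed system; here a
# plain function stands in for the system under test.

def order_total(prices, discount=0.0):
    """System capability: sum item prices and apply a percentage discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)

# Regression suite: expected results captured from a known-good release.
# Re-running it after each change confirms existing behavior is unbroken.
REGRESSION_CASES = [
    (([10.0, 5.0], 0.0), 15.0),
    (([10.0, 5.0], 0.1), 13.5),
    (([], 0.0), 0.0),
]

def run_regression():
    """Return the list of cases whose current result differs from the baseline."""
    return [(args, want, order_total(*args))
            for args, want in REGRESSION_CASES
            if order_total(*args) != want]

assert run_regression() == []  # empty list: no regressions detected
```

An empty failure list is exactly the confidence signal mentioned above: the changes have not broken existing features.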
System testing often produces information that stakeholders can use to make release decisions. System testing may also satisfy legal or regulatory requirements or standards.
The test environment should ideally correspond to the final target or production environment.
Test basis (examples of work products that can be used as a test basis for system testing):
- System and software requirement specifications (functional and non-functional)
- Risk analysis reports
- Use cases
- Epics and user stories
- Models of system behavior
- State diagrams
- System and user manuals
Test objects (typical test objects for system testing):
- Applications
- Hardware/software systems
- Operating systems
- System under test (SUT)
- System configuration and configuration data
Typical defects and failures (examples of typical defects and failures for system testing):
- Incorrect calculations
- Incorrect or unexpected system functional or non-functional behavior
- Failure to properly and completely carry out end-to-end functional tasks
- Failure of the system to work properly in the system environment(s)
- Incorrect control and/or data flows within the system
- Failure of the system to work as described in system and user manuals
Specific approaches and responsibilities:
System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional and non-functional. System testing should use the most appropriate techniques for the aspect(s) of the system to be tested.
Let’s take a look at the following example.
Our programmer C has to execute the system testing of a web application.
He got the customer’s requirements, which included the business rules.
To verify the functional behavior, he created a decision table to check whether the login options in the application work correctly.
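A decision table like the one in the example can be expressed directly in code. The sketch below assumes three illustrative login conditions (valid username, valid password, active account) and the rule that login succeeds only when all three hold; the names are hypothetical, not taken from a real application:

```python
from itertools import product

def login_allowed(valid_user: bool, valid_password: bool, active: bool) -> bool:
    """System behavior under test: login succeeds only if all checks pass."""
    return valid_user and valid_password and active

# Full decision table: every combination of the three conditions,
# paired with the expected outcome (all conditions must be true).
decision_table = [
    (combo, all(combo)) for combo in product([True, False], repeat=3)
]

# Execute one test per decision-table rule.
for conditions, expected in decision_table:
    actual = login_allowed(*conditions)
    assert actual == expected, f"Rule {conditions} failed"

print("All", len(decision_table), "decision-table rules passed")
```

With three boolean conditions the table has 2³ = 8 rules, so every combination of inputs gets covered by exactly one test case.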
It was a short example, but I wanted to show that testing techniques can be useful in system testing, and how important system testing is in the SDLC.
System testing is typically carried out by independent testers who rely heavily on specifications.
Defects in specifications (e.g., missing user stories, incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements about, expected system behavior.
Such situations can cause false positives and false negatives, which waste time and reduce defect detection effectiveness, respectively.
Early involvement of testers in user story refinement or static testing activities, such as reviews, helps to reduce the incidence of such situations.
Let’s analyse the story.
Our programmer C just finished coding the web application and handed it over to the tester (let’s say his name is D) for testing. The tester found three defects:
- The user cannot send the contact form
- The app visitor cannot subscribe to the newsletter
- The user cannot login to the website
The tester provides the defect reports to the programmer and gets the answer that the login feature is working correctly. After executing the tests again, the tester sees that the functionality is indeed working properly.
This example shows us that false positive test results happen quite often. It is important to prevent them, because they can waste a lot of time and money.
So far, we have gained a good understanding of almost all test levels.
We have one more to cover: acceptance testing.
Acceptance testing focuses on the behavior and capabilities of a whole system or product.
Objectives:
- Establishing confidence in the quality of the system as a whole
- Validating that the system is complete and will work as expected
- Verifying that functional and non-functional behaviors of the system are as specified
As a result of acceptance testing, we gather information to assess the system’s readiness for deployment and use by the customer (user). We may also discover defects during acceptance testing, but finding defects is not the main objective at this level. Here we ought to validate the system, which means we should check whether the system fulfills the customer’s business needs. In some cases, finding many defects during acceptance testing can be considered a serious project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.
Common forms of acceptance testing include the following:
- User acceptance testing
- Operational acceptance testing
- Contractual and regulatory acceptance testing
- Alpha and beta testing.
We will describe each one below.
User acceptance testing
UAT typically focuses on validating the fitness for use of the application by intended users in a real or simulated operational environment. The main objective is building confidence that the users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum difficulty, cost, and risk.
For example, the application is ready for deployment. The company meets the customer, and the customer (user) tests the application in a simulated operational environment to check whether it fulfills their requirements.
Operational acceptance testing
Operational acceptance testing of the system is carried out by operations or systems administration staff, usually in a (simulated) production environment. The tests focus on operational aspects and may include:
- Testing of backup and restore
- Installing, uninstalling, and upgrading
- Disaster recovery
- User management
- Maintenance tasks
- Data load and migration tasks
- Checks for security vulnerabilities
- Performance testing
The main objective of operational acceptance testing is building confidence that the operators or system administrators can keep the system working properly for the users in the operational environment, even under exceptional or difficult conditions.
For instance, developers coded the software and handed it over to the tester for testing. The application should work on desktops, tablets, and mobile devices. Our tester needs to test that in a (simulated) production environment.
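One of the operational checks listed above, backup and restore, can be sketched as a small automated test. This is a minimal illustration using local file copies; a real operational acceptance test would exercise the production backup tooling, and all file names here are made up:

```python
# Minimal sketch of an operational acceptance check: verify that a
# backup of a data file can be restored byte-for-byte.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Fingerprint a file's contents for comparison."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_restore_ok(workdir: Path) -> bool:
    original = workdir / "app_data.db"
    backup = workdir / "app_data.db.bak"
    restored = workdir / "restored.db"

    original.write_bytes(b"customer records v1")  # seed test data
    shutil.copy2(original, backup)                # "backup" step
    original.unlink()                             # simulate data loss
    shutil.copy2(backup, restored)                # "restore" step
    return sha256(backup) == sha256(restored)     # restored data is intact

with tempfile.TemporaryDirectory() as tmp:
    result = backup_and_restore_ok(Path(tmp))

print("backup/restore check passed:", result)
```

The same pattern (set up, break something on purpose, recover, verify) also fits the disaster recovery and data migration checks in the list.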
Contractual and Regulatory acceptance testing
Contractual acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Contractual acceptance testing is often performed by users or by independent testers.
Regulatory acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations. Regulatory acceptance testing is often performed by users or by independent testers, sometimes with the results being witnessed or audited by regulatory agencies.
The main objective of contractual and regulatory acceptance testing is building confidence that contractual or regulatory compliance has been achieved.
For example, our tester D gets the finished software to test. The contract specifies the legal regulations that must be tested and fulfilled. While testing, D is observed by a member of the legal department.
Alpha and beta testing
Developers of commercial off-the-shelf (COTS) software want to get feedback from potential or existing users, customers, and/or operators before they put the software product on the market.
Alpha testing is performed at the developing organization’s site, not by the development team, but by potential or existing customers, and/or operators, or an independent test team.
For instance, developers created a game and want to release it. Before they do, they want to get feedback from a small group of users. They give this group access to the game and can fix the reported bugs before putting the game on the market.
Beta testing is performed by potential or existing customers, and/or operators at their own locations. It may come after alpha testing, or may occur without any preceding alpha testing.
Let’s take a look at this example.
Developers release the game on the market, but they are prepared to receive feedback from users about issues with the game. After receiving the reports, they can fix the defects and improve the product quality.
Test basis (examples of work products that can be used as a test basis for acceptance testing):
- Business processes
- User or business requirements
- Regulations, legal contracts, and standards
- Use cases and/or user stories
- System requirements
- System or user documentation
- Installation procedures
- Risk analysis reports
In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the following work products can be used:
- Backup and restore procedures
- Disaster recovery procedures
- Non-functional requirements
- Operations documentation
- Deployment and installation instructions
- Performance targets
- Database packages
- Security standards or regulations
Test objects (typical test objects for acceptance testing):
- System under test
- System configuration and configuration data
- Business processes for a fully integrated system
- Recovery systems and hot sites (for business continuity and disaster recovery testing)
- Operational and maintenance processes
- Forms
- Reports
- Existing and converted production data
Typical defects and failures (examples of typical defects and failures for acceptance testing):
- System workflows do not meet business or user requirements
- Business rules are not implemented correctly
- The system does not satisfy contractual or regulatory requirements
- Non-functional failures such as security vulnerabilities, inadequate performance efficiency under high loads, or improper operation on a supported platform
Specific approaches and responsibilities:
Acceptance testing is often the responsibility of the customers, business users, product owners, or operators of a system, and other stakeholders may be involved as well. It is often thought of as the last test level in a sequential development lifecycle, but it may also occur at other times, for example:
- Testing of a COTS software product may occur when it is installed or integrated
- Acceptance testing of a new functional enhancement may occur before system testing
In iterative development, project teams can employ various forms of acceptance testing during and at the end of each iteration, such as those focused on verifying a new feature against its acceptance criteria and those focused on validating that a new feature satisfies the users’ needs. In addition, alpha tests and beta tests may occur, either at the end of each iteration, after the completion of each iteration, or after a series of iterations.
User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contractual acceptance tests also may occur, either at the close of each iteration, after the completion of each iteration, or after a series of iterations.
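In iterative development, “verifying a new feature against its acceptance criteria” often means writing the criteria as executable checks. The sketch below is illustrative: the newsletter-subscription feature and its three criteria are made up for the example, loosely echoing the story earlier in this post:

```python
# Sketch: a feature's acceptance criteria expressed as executable checks,
# as might be done at the end of an iteration. Feature and rules are
# hypothetical.

subscribers: set[str] = set()

def subscribe(email: str) -> str:
    """New feature under acceptance: subscribe a visitor to the newsletter."""
    if "@" not in email:
        return "rejected: invalid email"
    if email in subscribers:
        return "rejected: already subscribed"
    subscribers.add(email)
    return "confirmed"

# Acceptance criterion 1: a valid, new email address is confirmed.
assert subscribe("ann@example.com") == "confirmed"
# Acceptance criterion 2: duplicate subscriptions are rejected.
assert subscribe("ann@example.com") == "rejected: already subscribed"
# Acceptance criterion 3: malformed addresses are rejected.
assert subscribe("not-an-email") == "rejected: invalid email"
```

When all three assertions pass, the feature meets its agreed acceptance criteria for this iteration; validating that it truly satisfies the users’ needs still requires the user-facing forms of acceptance testing described above.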
We got to the end of the article.
We have had the chance to learn a lot about the test levels.
They help make the whole Software Development Lifecycle easier, especially the testing process.
It was an extensive post (two parts) but I hope you enjoyed the reading and found some useful information.
The next post will arrive next week, on Friday 🙂
Part of the content of this post is based on the ISTQB FL Syllabus (v. 2018).