Guidelines: Test Case
Explanation
Nothing has a greater effect on the end user's satisfaction with the software than a clear view of what the user expects, so that those expectations can be verified and validated (i.e., tested). Test cases reflect the requirements that are to be verified in the application-under-test. A requirement that is not verifiable will not have a test case associated with it. Unfortunately, some requirements are verifiable only by visual inspection. For example, when you verify the shut-down sequence of a computer system, someone has to "see" that it is shut down; the system cannot "tell" you.

You may not be able to verify all system requirements, which makes it critical to the success of your project to select the most appropriate or critical ones. Which system requirements you choose to verify is a balance between the cost of verifying a requirement and the necessity of having it verified.

Test cases can be classified into several categories. The most common and important class of test cases is based on the business needs the software is intended to serve; these should have been expressed in terms of use cases. Other test cases focus on whether you have built the system correctly based on a given design; typically, sequence diagrams are used as a basis for this verification. Still other test cases are identified from design constraints, operational considerations, standards compliance, and so forth. Identifying the test cases is important for several reasons.
Test Cases Derived from Use Cases
For each use case that traverses your target (system, package, or component), you should consider:
Example: Consider an ATM at a bank. The figure below shows actors and use cases defined for the ATM.
Actors and use cases in an ATM.

In the first iteration, according to the iteration plan, you need to verify that the Cash Withdrawal use case has been implemented correctly. The whole use case has not yet been implemented; only four of its scenarios and one of its business rules have been:
Test cases specify input and expected output. Your collection of test cases for system test should entail executing each scenario at least once:
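As an illustration of how such test cases can be written down as executable checks, here is a minimal sketch in Python. The scenario names, balances, amounts, and expected outputs are hypothetical stand-ins rather than the actual scenarios of the ATM example, and the withdraw function is a stub introduced only for this sketch.

    # Minimal sketch: four hypothetical Cash Withdrawal scenario test cases.
    # Each test case pairs concrete input data with the expected output.

    def withdraw(balance, amount, atm_cash):
        """Hypothetical stand-in for the Cash Withdrawal behavior under test."""
        if amount > balance:
            return "insufficient funds"
        if amount > atm_cash:
            return "ATM out of cash"
        return "dispensed"

    # (scenario, account balance, cash in ATM, requested amount, expected output)
    TEST_CASES = [
        ("successful withdrawal",    500, 2000, 100, "dispensed"),
        ("amount exceeds balance",    50, 2000, 100, "insufficient funds"),
        ("ATM has too little cash",  500,   60, 100, "ATM out of cash"),
        ("withdraw entire balance",  100, 2000, 100, "dispensed"),
    ]

    if __name__ == "__main__":
        for scenario, balance, atm_cash, amount, expected in TEST_CASES:
            actual = withdraw(balance, amount, atm_cash)
            assert actual == expected, f"{scenario}: expected {expected}, got {actual}"
            print(f"PASS: {scenario}")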
The above test cases verify the four scenarios. Now you need test cases to verify the system enforces the business rule correctly:
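The business rule itself is not spelled out here, so as a sketch assume a hypothetical rule that a single withdrawal may not exceed $300. The limit, the boundary values, and the check_withdrawal_limit helper are illustrative assumptions, not part of the original example.

    # Sketch: boundary-value test cases for a hypothetical "maximum $300 per
    # withdrawal" business rule (the real rule may differ).

    MAX_WITHDRAWAL = 300  # hypothetical limit used only for this illustration

    def check_withdrawal_limit(amount):
        """Return True if the requested amount satisfies the (assumed) rule."""
        return 0 < amount <= MAX_WITHDRAWAL

    # (description, input amount, expected result)
    BOUNDARY_CASES = [
        ("exactly at the limit",      300, True),
        ("one unit over the limit",   301, False),
        ("far over the limit",       1000, False),
        ("zero amount",                 0, False),
    ]

    if __name__ == "__main__":
        for description, amount, expected in BOUNDARY_CASES:
            assert check_withdrawal_limit(amount) == expected, description
            print(f"PASS: {description}")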
Note: the four scenario test cases above already test what happens when inputs are within the limits of the business rule.

User Interface Behavior

The graphical user interface offers many graphical objects that are used for selecting and displaying data, selecting options, and navigating through the software. Most of these graphical objects have a standard expected behavior and a set of attributes that can be verified by using test cases. For example, a test case may require that a given button be labeled in a particular way. Test cases should include checking:
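As one illustration of this kind of check, the sketch below verifies the label and enabled state of a button. The Button class, the expected label, and the test itself are hypothetical placeholders for whatever GUI toolkit or test harness the project actually uses.

    from dataclasses import dataclass

    @dataclass
    class Button:
        """Hypothetical stand-in for a GUI button object exposed by a test harness."""
        label: str
        enabled: bool

    def test_withdraw_button(button):
        # The expected label and state come from the user-interface specification.
        assert button.label == "Withdraw", f"unexpected label: {button.label!r}"
        assert button.enabled, "Withdraw button should be enabled on the main screen"

    if __name__ == "__main__":
        test_withdraw_button(Button(label="Withdraw", enabled=True))
        print("PASS: Withdraw button label and state")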
Performance Criteria
Performance criteria are the requirements that specify the response times needed by users of the software. You will find them specified as special requirements of a use case. They are typically expressed as time per transaction, such as less than five seconds to add a customer account. Performance criteria must also specify conditions that affect response times, including:
If the performance criteria are incomplete (you do not always have control over the input requirements), you should at least make sure you have test cases that help you answer the questions listed above.
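As a sketch of how a response-time criterion such as the five-second limit mentioned above can be turned into a test case, the example below times a stubbed add_customer_account operation and compares the elapsed time against the criterion. The stub and the single-user, nominal-load condition are assumptions made for the illustration.

    import time

    def add_customer_account(name):
        """Stub standing in for the real operation whose response time is specified."""
        time.sleep(0.1)  # simulate some processing
        return {"name": name, "status": "created"}

    def test_add_customer_account_response_time():
        # Performance criterion: less than five seconds per "add a customer
        # account" transaction (assumed: single user, nominal load).
        start = time.perf_counter()
        add_customer_account("J. Smith")
        elapsed = time.perf_counter() - start
        assert elapsed < 5.0, f"took {elapsed:.2f}s, criterion is < 5s"

    if __name__ == "__main__":
        test_add_customer_account_response_time()
        print("PASS: add customer account within 5 seconds")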
Operation under Stress

Stress requirements describe the need for the software to operate under abnormal conditions, such as low memory or disk space, or unusually high transaction rates on the network. These requirements also specify what is expected when limits are reached. Some pertinent questions are:
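Whatever the specific questions, a rough sketch of exercising one such abnormal condition, an unusually high transaction rate, might look like the following. The process_transaction stub, the concurrency limit, the thread counts, and the notion of a controlled rejection are all assumptions made for the illustration.

    import concurrent.futures
    import threading
    import time

    # Hypothetical server-side limit used only for this illustration.
    MAX_CONCURRENT = 20
    _active = 0
    _lock = threading.Lock()

    def process_transaction(tx_id):
        """Stub transaction: succeeds normally, rejects cleanly when overloaded."""
        global _active
        with _lock:
            if _active >= MAX_CONCURRENT:
                return "rejected"  # controlled refusal, not a crash
            _active += 1
        try:
            time.sleep(0.01)  # simulate a little work
            return "ok"
        finally:
            with _lock:
                _active -= 1

    if __name__ == "__main__":
        # Drive 200 transactions from 50 worker threads to simulate a burst.
        with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
            results = list(pool.map(process_transaction, range(200)))
        # Under stress, every transaction must still end in a defined state.
        assert all(r in ("ok", "rejected") for r in results)
        print(f"{results.count('ok')} ok, {results.count('rejected')} rejected")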
Access Control
Different users of a system may be granted access to different functions, as defined by the business rules of your organization. The software must control access, enforcing these business rules, based on some information about the user. For example, if the user is a supervisor, access to personnel records is allowed; if the user is not a supervisor, access is denied. Access control requirements are critical to the integrity of the software and should be verified.
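A minimal sketch of verifying the example rule above, assuming a hypothetical can_access function that decides access from the user's role:

    def can_access(role, resource):
        """Hypothetical access check: only supervisors may read personnel records."""
        if resource == "personnel_records":
            return role == "supervisor"
        return True  # other resources are unrestricted in this sketch

    # (description, role, resource, expected decision)
    ACCESS_CASES = [
        ("supervisor may read personnel records", "supervisor", "personnel_records", True),
        ("clerk is denied personnel records",      "clerk",      "personnel_records", False),
        ("clerk may use other functions",          "clerk",      "account_lookup",    True),
    ]

    if __name__ == "__main__":
        for description, role, resource, expected in ACCESS_CASES:
            assert can_access(role, resource) == expected, description
            print(f"PASS: {description}")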
Configurations

In typical distributed systems there can be many allowed combinations of hardware and software that will be supported. Testing needs to be performed on individual components, to verify, for example, that all supported printers produce the correct output. Furthermore, testing also needs to cover combinations of components to uncover defects that come from interactions of the different components, for example, testing whether a given printer driver conflicts with a given network driver. When identifying test cases, you should consider:
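However those considerations are enumerated, one common way to build the resulting test matrix is as a cross product of the supported options. The operating systems and drivers below are placeholders for the project's real configuration requirements.

    import itertools

    # Hypothetical supported options; the real lists come from the project's
    # configuration requirements.
    OPERATING_SYSTEMS = ["Windows Server", "Linux"]
    PRINTER_DRIVERS   = ["LaserJet", "DeskJet"]
    NETWORK_DRIVERS   = ["Ethernet", "Token Ring"]

    if __name__ == "__main__":
        # Each combination becomes a configuration under which the relevant
        # test cases (for example, the printing test cases) are executed.
        for os_name, printer, network in itertools.product(
                OPERATING_SYSTEMS, PRINTER_DRIVERS, NETWORK_DRIVERS):
            print(f"configuration: {os_name} / {printer} / {network}")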
Installation Options and Verification
Installation testing needs to verify that the software can be installed under all possible installation scenarios, under both normal and abnormal conditions. Abnormal conditions include insufficient disk space and lack of privilege to create directories. Installation testing should also verify that, once installed, the software operates correctly. The test cases should cover installation scenarios for the software, including:
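As a sketch of one abnormal condition mentioned above, insufficient disk space, the check below verifies that an installer stub refuses to proceed when the reported free space is below its requirement. The install function and the sizes are illustrative assumptions.

    REQUIRED_MB = 500  # hypothetical space needed by the installer

    def install(free_space_mb):
        """Stub installer: refuses to run when there is too little disk space."""
        if free_space_mb < REQUIRED_MB:
            return "aborted: insufficient disk space"
        return "installed"

    if __name__ == "__main__":
        # Normal condition: plenty of space, installation completes.
        assert install(free_space_mb=10_000) == "installed"
        # Abnormal condition: too little space, installer must fail cleanly.
        assert install(free_space_mb=100) == "aborted: insufficient disk space"
        print("PASS: installation handles normal and low-disk-space conditions")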
Installation programs for client-server software have a specialized set of test cases. Unlike in host-based systems, the installation program is typically divided between the server and the client. The client program installation may have to be run hundreds of times and may be run by the end user.
Test Cases Derived from Other Sources

Ideally, you should find all the input you need for test cases in the use-case model, the design model, and the Requirements Specification. It is, however, not uncommon that at this point you need to complement what is found there. Examples would be:
In most cases, you can find test cases by creating variants or aggregates of the test cases you derived from the use cases. A variant is a test case that contains the same actions but has different input data and expected output data.
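A small sketch of the variant idea: the same test actions are reused with different input data and expected output data. The action function and the data rows are hypothetical.

    def run_withdrawal_actions(balance, amount):
        """The shared actions of the test case; only the data varies between variants."""
        return "dispensed" if amount <= balance else "insufficient funds"

    # Variants: identical actions, different input data and expected output data.
    VARIANTS = [
        (200,  50, "dispensed"),
        (200, 200, "dispensed"),
        (200, 250, "insufficient funds"),
    ]

    if __name__ == "__main__":
        for balance, amount, expected in VARIANTS:
            assert run_withdrawal_actions(balance, amount) == expected
        print(f"PASS: {len(VARIANTS)} variants of the same test case")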
Prioritize Test Cases Based on Risks
It is impossible to test everything, so you must balance the cost (in terms of resources and time) of including a particular test case in your test against the risk imposed on the project if you do not include it. Because you cannot validate every possible test case, it is important to focus on the right ones: the most important test cases are those that reflect the highest risks of failure. Risks can be viewed from several perspectives:
This last perspective is particularly valuable in light of the growing number of reusable software components that go into an application. A growing percentage of the overall application may be acquired from third parties, including application development tool vendors, vendors of custom controls, and middleware developers. For example, Windows itself provides much application functionality, including all of the standard Windows controls, common dialogs, and other library functions. These third-party components must be identified by the development groups and risk assigned accordingly; there would be little point in identifying a long list of test cases for a Windows common dialog function.

Assessing the risk associated with each test case allows the test cases to be prioritized. The priority determines the order in which test procedures are developed to verify the requirements. Just as certain software requirements might be dropped due to lack of resources, so certain test cases might be left unaddressed.
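A minimal sketch of the prioritization step, assuming a simple risk score computed as likelihood of failure times impact; the test case names and scores are made up for the illustration.

    # Hypothetical test cases scored by likelihood of failure (0-1) and impact (1-5).
    CANDIDATES = [
        {"name": "cash withdrawal over daily limit", "likelihood": 0.6, "impact": 5},
        {"name": "Windows common dialog rendering",  "likelihood": 0.1, "impact": 2},
        {"name": "deposit posting to ledger",        "likelihood": 0.4, "impact": 5},
    ]

    def risk(test_case):
        """Simple risk score used to order test procedure development."""
        return test_case["likelihood"] * test_case["impact"]

    if __name__ == "__main__":
        # Highest-risk test cases get their test procedures developed first;
        # anything left over is a candidate to be dropped if resources run out.
        for tc in sorted(CANDIDATES, key=risk, reverse=True):
            print(f"{risk(tc):4.1f}  {tc['name']}")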
Build Test Cases for Regression Test

Regression testing compares two revisions of the same software and identifies differences as potential defects, on the assumption that a new revision should behave like the earlier one. Regression testing is done after changes are made to the software to ensure that defects have not been introduced as a result of the changes. Regression testing does not introduce new test cases; its purpose is to re-run existing test cases after a software change is made. That means a regression test case will be used at least once in every iteration.

For each new test case you create and specify, you need to decide whether it is going to be used for regression testing. All test cases are potential regression test cases, but not all of them are suitable for that purpose. To be suitable, they should be built in such a way that they do not break on minor changes to the target of test, such as a slight change in the layout of the graphical user interface. These test cases, as well as their design and implementation, also have to be easy to change and maintain, and they should be put under configuration management.

All types of requirements should be covered by the regression test cases. For example, a performance criterion may be re-tested to ensure that the software has not slowed down after a change. Ideally, all test cases in one iteration would become regression test cases in the following iterations. However, there is a cost involved in maintaining and executing the test cases that needs to be balanced against this goal. The use of test-automation tools greatly improves the return on investment for regression testing and reduces the cost of a regression test to the cost of executing it.
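One lightweight way to record that decision is to tag each test case as a regression candidate when it is specified and to select the tagged ones in every iteration. The structure below is only a sketch of that bookkeeping, not a prescribed format.

    # Each specified test case records whether it was built to survive minor
    # changes of the target (and is therefore suitable for regression testing).
    TEST_CASES = [
        {"name": "cash withdrawal scenarios", "regression": True},
        {"name": "exact pixel layout of the main screen", "regression": False},
        {"name": "add customer account under 5 seconds", "regression": True},
    ]

    def regression_suite(test_cases):
        """Select the test cases to re-run in every iteration after a change."""
        return [tc["name"] for tc in test_cases if tc["regression"]]

    if __name__ == "__main__":
        for name in regression_suite(TEST_CASES):
            print(f"regression: {name}")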
Select Test Cases for Acceptance Test

Acceptance testing includes a set of test cases that has been mutually agreed upon between supplier and acquirer. The selection is based on the quality level (or levels) at which acceptance (and therefore payment) of the system will be made. Acceptance test cases are a subset of all the test cases: a subset that is formally reviewed by the parties and run under special conditions (expert witnesses, and so forth).