Test Case
A test case is a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Explanation

Nothing has a greater effect on the end user's satisfaction with the software than a clear view of what the user expects, so that those expectations can be verified and validated (that is, tested).

Test cases reflect the requirements that are to be verified in the application-under-test. A requirement that is not verifiable cannot have a test case associated with it. Unfortunately, some requirements are verifiable only by visual inspection. For example, when you verify the shut-down sequence of a computer system, someone has to "see" that it is shut down; the system cannot "tell" you. You may not be able to verify all system requirements, which makes it critical to the success of your project to select the most appropriate or critical ones. Which system requirements you choose to verify is a balance between the cost of verification and the necessity of having the requirement verified.

Test cases can be classified into several categories. The most common and important class of test cases is based on the business needs the software is intended to serve; these should have been expressed in terms of use cases. Other test cases focus on whether you have built the system correctly based on a given design; typically, sequence diagrams are used as the basis for this verification. Still other test cases are identified from design constraints, operational considerations, standards compliance, and so forth.
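
Concretely, a test case can be captured as a small record of inputs, execution conditions, and expected results. The following Python sketch is illustrative only; the field names are our own, not a prescribed template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case: inputs, execution conditions, and expected results."""
    identifier: str    # e.g., "TC-CW-01"
    requirement: str   # the requirement or use case being verified
    inputs: dict       # test inputs, keyed by parameter name
    conditions: dict = field(default_factory=dict)  # execution conditions
    expected: str = "" # expected result, stated so it can be checked

# An illustrative instance for a withdrawal scenario:
tc = TestCase(
    identifier="TC-CW-01",
    requirement="Cash Withdrawal use case, normal flow",
    inputs={"amount": 20, "balance": 100},
    conditions={"atm_has_cash": True},
    expected="Cash dispensed; new balance is 80",
)
```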

Identifying the test cases is important for several reasons:

  • Test cases form the foundation on which to design and develop test procedures.
  • A principal measure of the completeness of testing is test coverage, based on the number of test cases identified and implemented (representing the different conditions being tested). When asked how much testing has been completed, a test manager can then answer, for example, that 95 percent of the test cases have been verified.
  • The scale of a testing effort is proportional to the number of test cases. With a comprehensive breakdown of test cases, the timing of succeeding stages of the test cycle can be more accurately estimated.
  • The kinds of test design and development and the resources needed are largely governed by the test cases.

Test Cases Derived from Use Cases

For each use case that traverses your target (system, package, or component), you should consider:

  • The flow of events of the use case, as defined during Requirements Capture. Every possible path through the flow of events is a candidate for a test case; however, you should limit the selection to a set of test cases that do not overlap too much in the functionality they test. Because the information you find in the flow of events focuses on the behavior of the system as seen from the user's view, this type of test case is used mainly for black-box testing. The following factors should be considered when you select your test cases:
      • That the normal flow and all alternative flows of the use case (the ones that traverse your target) are covered when the system runs under "normal" conditions.
      • That both valid and invalid input data are handled.
      • That any business rules implied by the use case are handled correctly.
      • That the sequencing of events in the scenario is managed correctly.
      • That the user interface has the right appearance (when applicable).
  • The use-case realizations as defined in sequence diagrams in the design model. You can use the set of test cases you identified from the flow of events as input here. The information in the sequence diagrams allows you to formulate variants of those test cases that focus on the collaboration between components (white-box testing). Another starting point is the use-case instances you have expressed in sequence diagrams.
  • Special requirements defined for the use case. These typically specify minimum/maximum performance, sometimes combined with minimum/maximum loads or data volumes, during the execution of the use case. You can define test cases for these by using the test cases derived from the flow of events and the sequence diagrams as a starting point.
  • Any system requirement with a traceability relation to the use case. Ideally, these requirements should already be covered by the test cases you defined in the previous steps. However, if your development organization considers traceability from the original system requirements to the details of the implementation important, it may still be wise to verify that there is a test case for each system requirement.

Example:

Consider an ATM at a bank. The figure below shows the actors and use cases defined for the ATM.

[Figure: Actors and use cases for the ATM.]

In the first iteration, according to the iteration plan, you need to verify that the Cash Withdrawal use case has been implemented correctly. The whole use case has not yet been implemented; only four of its scenarios and one of its business rules have been:

  • Withdrawal of a pre-set amount ($20, $50, $100).
  • Withdrawal of a custom amount (any amount that is a multiple of the lowest available bill in the machine).
  • Withdrawal of pre-set amount, but not enough money in the account.
  • Withdrawal of a custom amount, but the customer requests an amount that is not a multiple of the lowest available bill in the machine (there are only $10 bills and the customer asks for $55).
  • Customers cannot withdraw more than a pre-set amount per day from an account ($350). This limit is set individually for each account.

Test cases specify input and expected output. Your collection of test cases for system test should entail executing each scenario at least once:

  1. Withdrawal of pre-set amount. Choose $20; $100 is available in the account. Expected output is a successful withdrawal, meaning the cash is dispensed and the Bank System confirms it has deducted $20 from the account and that the new balance is $80.
  2. Withdrawal of custom amount. Choose $70; $100 is available in the account. Expected output is a successful withdrawal, meaning the cash is dispensed and the Bank System confirms it has deducted $70 from the account and that the new balance is $30.
  3. Withdrawal of pre-set amount. Choose $50; $20 is available in the account. Expected output is an unsuccessful withdrawal, meaning the Bank System rejects the deduction, and the ATM system displays a message to the Customer that the withdrawal has been rejected.
  4. Withdrawal of a custom amount. Choose $55; $10 is the smallest available bill in the ATM machine, $100 available in the account. Expected output is an unsuccessful withdrawal, meaning the ATM rejects it and displays a message to the customer.

The above test cases verify the four scenarios. Now you need a test case to verify that the system enforces the business rule correctly:

  1. Withdrawal of a custom amount. Choose $400; $500 is available in the account, and the pre-set limit is $350. Expected output is an unsuccessful withdrawal: the ATM system displays a message to the Customer that the withdrawal has been rejected.

Note: The previous four test cases already cover inputs within the limits of the business rule.
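
These five test cases map naturally onto a table-driven automated test. The sketch below uses pytest parametrization against a hypothetical withdraw() function and WithdrawalRejected exception; both are assumptions made for illustration, not part of the ATM example itself:

```python
import pytest

class WithdrawalRejected(Exception):
    """Hypothetical: raised when the ATM or Bank System rejects a withdrawal."""

def withdraw(amount, balance, smallest_bill=10, daily_limit=350):
    """Hypothetical stand-in for the system under test."""
    if amount % smallest_bill or amount > balance or amount > daily_limit:
        raise WithdrawalRejected
    return balance - amount

# Test cases 1 and 2: successful withdrawals (amount, balance, new balance).
@pytest.mark.parametrize("amount, balance, new_balance", [
    (20, 100, 80),   # pre-set amount
    (70, 100, 30),   # custom amount
])
def test_successful_withdrawal(amount, balance, new_balance):
    assert withdraw(amount, balance) == new_balance

# Test cases 3 and 4, plus the business-rule case: rejected withdrawals.
@pytest.mark.parametrize("amount, balance", [
    (50, 20),    # insufficient funds in the account
    (55, 100),   # not a multiple of the smallest bill ($10)
    (400, 500),  # exceeds the $350 per-day limit
])
def test_rejected_withdrawal(amount, balance):
    with pytest.raises(WithdrawalRejected):
        withdraw(amount, balance)
```

Each row of the tables corresponds to one scenario, which keeps the mapping from test case to scenario explicit.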

User Interface Behavior

The graphical user interface offers many graphical objects that are used for selecting and displaying data, selecting options, and navigating through the software. Most of these graphical objects have a standard expected behavior and a set of attributes that can be verified by using test cases. For example, a test case may require that a given button be labeled in a particular way.

Test cases should include checking:

  • Consistency in the look and operation of the user interface, for example, the use of mnemonic or accelerator keys and tab ordering.
  • Compliance with user-interface standards, such as the size of push buttons, labels, and so on.
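
Such checks can be made table-driven. In the sketch below, the dialog description is a hypothetical snapshot of what a GUI test tool might report, and the minimum button size and the mnemonic convention are assumed standards:

```python
# Hypothetical snapshot of a dialog, as a GUI test tool might report it.
dialog = {
    "buttons": [
        {"label": "&Save", "width": 75, "height": 23},
        {"label": "&Cancel", "width": 75, "height": 23},
    ],
}

MIN_WIDTH, MIN_HEIGHT = 75, 23  # assumed UI-standard minimum size in pixels

def check_buttons(dialog):
    """Verify button size standards and unique mnemonics for one dialog."""
    mnemonics = set()
    for button in dialog["buttons"]:
        # Compliance: push buttons must be at least the standard size.
        assert button["width"] >= MIN_WIDTH and button["height"] >= MIN_HEIGHT
        # Consistency: every button carries a mnemonic (the "&" prefix).
        assert "&" in button["label"], "every button needs a mnemonic"
        mnemonics.add(button["label"].split("&")[1][0].lower())
    # Consistency: mnemonics must not collide within the dialog.
    assert len(mnemonics) == len(dialog["buttons"])

check_buttons(dialog)
```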

Performance Criteria

Performance criteria are the requirements that specify the response times needed by users of the software. You will find them specified as special requirements of a use case. They are typically expressed as time per transaction, such as less than five seconds to add a customer account.

Performance criteria must also specify conditions that affect response times, including:

  • Size of the database - How many customer account records exist? How big is the database in general?
  • Database load - What is the transaction rate? What is the profile of the transactions?
  • Client/server system - How do different configurations of the client affect performance? How fast is the client? How much memory does it have? What other software is loaded?
  • Network load - What is the average number of packets per second?
  • Number of simultaneous users - How will it affect response times for critical user actions?

If the performance criteria are incomplete (you do not always have control of the input requirements), you should at least make sure you have test cases that help you answer the questions listed above.
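
As a minimal sketch of such a test case, the following sets up one execution condition (a database pre-loaded with 100,000 customer records) and asserts the response-time requirement; add_customer_account is a placeholder for the real transaction:

```python
import time

def add_customer_account(db, record):
    """Placeholder for the transaction under test."""
    db.append(record)

def test_add_account_response_time():
    # Execution condition: database pre-loaded with 100,000 customer records.
    db = [{"id": i} for i in range(100_000)]
    start = time.perf_counter()
    add_customer_account(db, {"id": 100_000})
    elapsed = time.perf_counter() - start
    # Requirement: less than five seconds to add a customer account.
    assert elapsed < 5.0
```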

Operation under Stress

Stress requirements describe the need for the software to operate under abnormal conditions, such as low memory or disk space, or unusually high transaction rates on the network. These requirements also specify what is expected when such limits are reached. Some pertinent questions and risk areas are:

  • What happens when disk space has been depleted?
  • Under load, does the software just slow down, or is there a point at which it ceases to function? It is advisable to test this at different levels of load, such as nominal load, maximum load according to requirements, and extreme load.
  • Which stress conditions pose significant risks? These include:
      • Available memory
      • Available disk space (at both the client and the server)
      • High transaction rates
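
A simple way to probe behavior at these levels is to ramp up concurrent work and record throughput. In the sketch below, transaction() is a stand-in for one unit of work against the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one unit of work against the system under test."""
    time.sleep(0.01)
    return True

def measure(load):
    """Run `load` concurrent transactions; return count completed and elapsed time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=load) as pool:
        results = list(pool.map(lambda _: transaction(), range(load)))
    return sum(results), time.perf_counter() - start

# Nominal load, maximum load according to requirements, and extreme load.
for load in (10, 50, 500):
    done, elapsed = measure(load)
    print(f"load={load}: {done} transactions completed in {elapsed:.2f}s")
```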

Access Control

Different users of a system may be granted access to different functions, as defined by the business rules of your organization. The software must control access based on these rules, using some information about the user. For example, if the user is a supervisor, access to personnel records is allowed; if the user is not a supervisor, access is denied.

Access control requirements are critical to the integrity of the software and should be verified.
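
A sketch of how both the grant and the denial paths of such a rule can be verified; the permission table and role names are illustrative assumptions:

```python
# Assumed business rule: only supervisors may access personnel records.
PERMISSIONS = {"supervisor": {"personnel_records"}, "clerk": set()}

def can_access(role, resource):
    """Access decision based on information about the user (here, the role)."""
    return resource in PERMISSIONS.get(role, set())

# Verify the grant path and the denial path of the rule.
assert can_access("supervisor", "personnel_records")
assert not can_access("clerk", "personnel_records")
```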

Configurations

In typical distributed systems there can be many combinations of hardware and software that must be supported. Testing needs to be performed on individual components, to verify, for example, that all supported printers produce the correct output. Testing also needs to cover combinations of components to uncover defects that arise from the interactions of different components, for example, whether a given printer driver conflicts with a given network driver.

When identifying test cases, you should consider:

  • The required hardware and software configurations for the application. In general, there are too many possible configurations to test them all; therefore, identify and test the configuration(s) that will constitute the majority of the deployed systems.
  • The target configurations that are most likely to have problems. These may involve:
      • Hardware with the lowest performance.
      • Co-resident software that has a history of compatibility problems.
      • Clients accessing the server over the slowest possible LAN/WAN connection.
      • Client capacity: speed, installed memory, video resolution, and disk space.
      • Printer support.
      • Network connections - local and wide area networks.
      • Server configurations - server drivers, server hardware.
      • Other software installed on the client.
      • Software versions for all components.
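
Enumerating the configuration space makes both the size of the problem and the chosen priority subset explicit. The dimensions below are assumed for illustration:

```python
import itertools

# Assumed configuration dimensions for the application under test.
clients  = ["low-end PC", "typical PC"]
networks = ["LAN", "slow WAN"]
printers = ["PCL driver", "PostScript driver"]

all_configs = list(itertools.product(clients, networks, printers))
print(f"{len(all_configs)} possible combinations in total")

# Test first the configuration expected to constitute most deployments,
# then the combination most likely to expose problems.
priority = [
    ("typical PC", "LAN", "PCL driver"),       # majority of deployed systems
    ("low-end PC", "slow WAN", "PCL driver"),  # lowest performance
]
for config in priority:
    assert config in all_configs
```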

Installation Options and Verification

Installation testing needs to verify that the software can be installed under all possible installation scenarios, under normal and abnormal conditions. Abnormal conditions include insufficient disk space and lack of privilege to create directories. Installation testing should also verify that, once installed, the software operates correctly.

The test cases should cover installation scenarios for the software including:

  • Distribution media, for example, diskettes, CD-ROM, or file server.
  • New installation.
  • Complete installation.
  • Custom installations.
  • Upgrade installations.

Installation programs for client-server software have a specialized set of test cases. Unlike host-based systems, the installation program is typically divided between the server and the client. The client program installation may have to be run hundreds of times and may be run by the end user.

Test Cases Derived from Other Sources

Ideally, you should find all necessary input for test cases in the use-case model, the design model, and the Requirements Specification. It is not uncommon, however, to need to complement what is found there.

Examples would be:

  • You should verify that the software works when in use for a "long time" (an operation test), where "long time" is relative to how long one user would normally be active. An example of a type of defect that can be found in this manner is a memory leak.
  • You should identify test cases that investigate the actual performance and volume capabilities of the system. Sometimes this is referred to as measuring the "sweet spot" of the system, or as performing "negative tests". If the requirements say the system should handle 3-10 users, test what happens with 100 users. Does it break? Does it still perform well? Other factors to test in this manner are network capacity, server capacity, server memory, database size, and transaction frequency.
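
A sketch of such a "negative test", probing well past the stated 3-10 user requirement; simulate_users() and its degradation model are stand-ins for measurements against the real system:

```python
def simulate_users(n_users):
    """Stand-in: returns average response time (s) for n concurrent users."""
    return 0.5 + max(0, n_users - 10) * 0.05  # assumed degradation model

# The requirement says 3-10 users; probe well beyond the stated limit.
for n in (3, 10, 50, 100):
    response = simulate_users(n)
    status = "OK" if response < 5.0 else "degraded"
    print(f"{n:>3} users: {response:.2f}s per transaction ({status})")
```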

In most cases, you can find further test cases by creating variants or aggregates of the test cases you derived from the use cases. A variant is a test case that contains the same actions but has different input data and expected output data.

Prioritize Test Cases Based on Risks

It is impossible to test everything, so you must balance the cost (in terms of resources and time) of including a particular test case against the risk imposed on the project if you leave it out. Because you cannot validate all possible test cases, it is important to focus on the right ones: the most important test cases are those that reflect the highest risks of failure.

Risks can be viewed from several perspectives:

  • One perspective is looking at the consequences of a failed requirement. If this requirement is not met, what are the possible outcomes? A requirement failure that leads to a safety risk is of higher priority than one that results in extra keystrokes or prevents an infrequently used report from printing.
  • Another perspective comes from looking at possible outcomes and determining which failed requirement would cause which response. If a safety risk or data corruption is possible, what software failures could result in safety being compromised or data being corrupted?
  • Finally, risks can be viewed from the perspective of how likely failures are to occur. The likelihood of failure can be estimated from frequency-of-use statistics, the underlying complexity of the software, and the experience of the project team.

This last perspective is particularly valuable in light of the growing number of reusable software components that go into an application. A growing percentage of the overall application may be acquired from third parties, including application development tool vendors, vendors of custom controls, and middleware developers. For example, Windows itself provides much application functionality, including all of the standard Windows controls, common dialogs, and other library functions. These third-party components must be identified by the development groups and risk assigned accordingly. There would be little point in identifying a long list of test cases for a Windows common dialog function.

Assessing the risk associated with each test case allows the test cases to be prioritized. The priority determines the order in which test procedures are developed to verify the requirements. Just as certain software requirements may be dropped for lack of resources, so certain test cases may be left unaddressed.
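
One simple way to operationalize this prioritization is to score each candidate test case by consequence severity times likelihood of failure and sort; the scales and scores below are illustrative:

```python
# Illustrative risk scoring: priority = consequence severity x likelihood (1-5).
candidates = [
    ("safety interlock fails",  {"severity": 5, "likelihood": 2}),
    ("data corrupted on save",  {"severity": 4, "likelihood": 3}),
    ("report fails to print",   {"severity": 1, "likelihood": 4}),
    ("extra keystrokes needed", {"severity": 1, "likelihood": 2}),
]

ranked = sorted(candidates,
                key=lambda c: c[1]["severity"] * c[1]["likelihood"],
                reverse=True)
for name, risk in ranked:
    print(f"{risk['severity'] * risk['likelihood']:>2}  {name}")
```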

Build Test Cases for Regression Test

Regression testing compares two revisions of the same software and identifies differences as potential defects. It thus assumes that a new software revision should behave like an earlier revision. Regression testing is done after changes are made to the software to ensure that defects have not been introduced as a result of the changes.

Regression testing does not introduce new test cases; its purpose is to re-run existing test cases after a software change is made. That means a regression test case will be used at least once in every iteration.

For each new test case you create and specify, you need to decide whether it will be used for regression testing. All test cases are potential regression test cases, but not all are suitable. To be suitable, they should be built so that they do not break on minor changes to the target of the test, such as a slight change in the layout of the graphical user interface. These test cases, together with their design and implementation, also have to be easy to change and maintain, and they should be put under configuration management.

All types of functional requirements should be covered by the regression test cases. For example, a performance criterion may be re-tested to ensure that the software has not slowed down after a change. Ideally, all test cases in one iteration become regression test cases in the next iterations; however, there is a cost to maintaining and executing the test cases that needs to be balanced against this. The use of test-automation tools greatly improves the return on investment for regression testing and reduces its cost to that of executing the tests.
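
One inexpensive way to keep the regression subset selectable is to tag the suitable test cases. The pytest marker below is a common convention; the marker name and the withdraw() stand-in are our own choices:

```python
import pytest

def withdraw(amount, balance):
    """Minimal stand-in for the operation under regression test."""
    return balance - amount

# Custom marker; register "regression" in pytest.ini to avoid warnings.
@pytest.mark.regression
def test_withdraw_preset_amount():
    assert withdraw(20, 100) == 80
```

The regression subset can then be run on its own in every iteration with "pytest -m regression".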

Select Test Cases for Acceptance Test

Acceptance testing comprises a set of test cases that has been mutually agreed upon by the supplier and the acquirer. The selection is based on the quality level (or levels) at which the system will be accepted (and therefore paid for). Acceptance test cases are a subset of all the test cases: a subset that is formally reviewed by both parties and run under special conditions (expert witnesses, and so forth).
