Activity: Design Test
Purpose
- To identify a set of verifiable test cases for each build.
- To identify test procedures that show how the test cases will be realized.
Steps
Input Artifacts:
Resulting Artifacts:
Worker: Test Designer
Purpose
- To identify and describe the different variables that affect system use and performance
- To identify the subset of use cases to be used for performance testing
Workload Analysis is performed to generate a workload model that can be implemented for
performance testing. The primary inputs to a workload model include:
- Software Development Plan
- Use Case Model
- Design Model
- Supplemental Specifications
Workload analysis includes performing the following:
- Clarify the objectives of performance testing and all the use cases (business functions).
- Identify the targeted system usage interval and the number of users and user classes to
be simulated / emulated.
- Identify the use cases to be implemented in the model.
- Determine the performance test criteria (completion and acceptance).
- Review the use cases to be implemented and identify the execution frequency.
- Select the most frequently invoked use cases and those that generate the greatest load
on the system.
- Generate test cases for each of the use cases identified in the previous step.
- Identify the work profile for each user class.
- Identify test data for each test case.
- Identify the critical measurement points for each test case.
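One lightweight way to capture the outcome of these steps is as a small workload model in code. The Python sketch below is illustrative only: the user classes, use cases, frequencies, measurement points, and acceptance criterion are hypothetical, not drawn from any particular project.

from dataclasses import dataclass

@dataclass
class UseCaseLoad:
    use_case: str              # use case selected for performance testing
    executions_per_hour: int   # execution frequency during the target usage interval
    measurement_points: list   # critical measurement points for this test case

@dataclass
class UserClass:
    name: str
    simulated_users: int       # number of users to simulate / emulate
    work_profile: list         # use cases this class invokes, with their frequencies

# Hypothetical example: a one-hour peak interval with two user classes.
clerk = UserClass(
    name="Order Clerk",
    simulated_users=150,
    work_profile=[
        UseCaseLoad("Place Order", executions_per_hour=40,
                    measurement_points=["submit order response time"]),
        UseCaseLoad("Check Order Status", executions_per_hour=20,
                    measurement_points=["status query response time"]),
    ],
)

manager = UserClass(
    name="Sales Manager",
    simulated_users=10,
    work_profile=[
        UseCaseLoad("Run Sales Report", executions_per_hour=4,
                    measurement_points=["report generation time"]),
    ],
)

workload_model = {
    "usage_interval": "peak hour, 09:00-10:00",
    "user_classes": [clerk, manager],
    "acceptance_criteria": "95% of order submissions complete within 2 seconds",
}

A model in this form keeps the user classes, their work profiles, and the test criteria together, so the same description can later drive the implementation of the performance tests.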
Purpose
- To identify and describe the test conditions to be used for testing
- To identify the specific data necessary for testing
- To identify the expected results of test
For each requirement for test:
The purpose of this step is to identify and describe the actions and / or steps of the
actor when interacting with the system. These test procedure descriptions are then used to
identify and describe the test cases necessary to test the application.
Note: These early test procedure descriptions should be high-level, that is, the
actions should be described as generically as possible, without specific references to actual
components or objects.
For each use case or requirement,
- review the use case flow of events, or
- walk through and describe the actions / steps the actor takes when interacting with
the system
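For illustration, a described (high-level) test procedure at this point can be little more than an ordered list of generic actor actions. The use case and steps in the Python sketch below are hypothetical.

# A high-level (described) test procedure: generic actor actions only,
# with no references to specific screens, components, or objects yet.
described_procedure = {
    "use_case": "Place Order",   # hypothetical use case
    "actor_steps": [
        "Actor identifies themselves to the system",
        "Actor selects the items to order",
        "Actor supplies payment information",
        "Actor confirms the order",
        "System acknowledges the order",
    ],
}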
The purpose of this step is to establish what test cases are appropriate for the
testing of each requirement for test.
Note: If testing of a previous version has already been implemented, there will
be existing test cases. These test cases should be reviewed for reuse and designed for
regression testing. Regression test cases should be included in the current iteration and
combined with the new test cases that address new behavior.
The primary inputs for identifying test cases are:
- The use cases that, at some point, traverse your target for test (system, subsystem or
component).
- The design model
- Any technical or supplemental requirements
Describe the test cases by stating:
- The test condition (or object or application state) being tested.
- The use case, use-case scenario, or technical or supplemental requirement the test case
is derived from.
- The expected result in terms of the output state, condition, or data value(s).
Note: it is not necessary for all use cases, use-case scenarios, and technical or
supplemental requirements to be tested.
The result of this step is a matrix that identifies the test conditions, the objects,
data, or influences on the system that create the condition being tested, and the expected
result.
Using the matrix created above, review the test cases and identify the actual values
that support the test cases. Data for three purposes will be identified during this step:
- data values used as input
- data values for the expected results
- data needed to support the test case, but that is used neither as input nor as output
for a specific test case
See Artifact: Test Case.
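As a sketch of what such a matrix might look like in practice, the Python fragment below records each test case's condition, origin, input data, and expected result. The "Place Order" use case and all values are hypothetical.

# Each row: the condition under test, where the test case comes from,
# the input data values, and the expected result (output state or data).
test_case_matrix = [
    {
        "test_condition": "Order placed with a valid credit card",
        "derived_from": "Place Order - basic flow",   # hypothetical use case
        "input_data": {"card_number": "4111111111111111", "amount": 59.90},
        "expected_result": "Order accepted; confirmation number returned",
    },
    {
        "test_condition": "Order placed with an expired credit card",
        "derived_from": "Place Order - alternate flow: payment rejected",
        "input_data": {"card_number": "4111111111111111", "expiry": "01/20"},
        "expected_result": "Order rejected; 'card expired' message shown",
    },
    {
        "test_condition": "Order placed with an empty shopping cart",
        "derived_from": "Supplemental requirement: input validation",
        "input_data": {"items": []},
        "expected_result": "Order not created; 'cart is empty' message shown",
    },
]

# Supporting data (neither input nor expected output), for example the customer
# account that must already exist in the test database, is tracked alongside.
supporting_data = {"existing_customer": "test-customer-001"}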
Purpose
- To analyze use case workflows and test cases to identify test procedures
- To identify the relationship(s) between test cases and test procedures, thereby
creating the test model
Tool Mentors:
Perform the following:
Review the application workflow(s) and the previously described test procedures to
determine whether any changes have been made to the use case workflow that affect the
identification and structuring of test procedures.
The review is done in a similar fashion as the analysis done previously:
- review the use case flow of events, and
- review the described test procedures, and
- walk through the steps the actor takes when interacting with the system
The purpose of the test model is to
communicate what will be tested, how it will be tested, and how the tests will be
implemented. For each described test procedure, the following is done to
create the test model:
- identify the relationship or sequence of
the test procedure to other test procedures
- identify the start condition or state and
the end condition or state for the test procedure
- indicate the test cases to be executed by
the test procedure
The following should be considered while
developing the test model:
- Many test cases are variants of one another, which might mean that they can be satisfied
by the same test procedure.
- Many test cases may require overlapping behavior to be executed. To be able to reuse the
implementation of such behavior, you can choose to structure your test procedures so that
one test procedure can be used for several test cases.
- Many test procedures may include actions or steps that are common to many test cases or
other test procedures. In these instances, it should be determined whether a separate
structured test procedure (for those common steps) should be created, while the
test-case-specific steps remain in a separate structured test procedure.
See Artifact: Test Model.
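One minimal way to record these relationships is a simple mapping from each test procedure to its start and end conditions, the test cases it executes, and the procedures that must precede it. The procedure and test case identifiers in the Python sketch below are hypothetical.

# Hypothetical test model fragment: each entry relates one test procedure
# to its start / end conditions, the test cases it executes, and any
# procedures that must be executed before it.
test_model = {
    "TP-01 Create customer account": {
        "start_condition": "Empty test database",
        "end_condition": "Customer account exists",
        "executes_test_cases": ["TC-01", "TC-02"],
        "preceded_by": [],
    },
    "TP-02 Place order": {
        "start_condition": "Customer account exists",
        "end_condition": "Order stored with status 'accepted'",
        "executes_test_cases": ["TC-03", "TC-04", "TC-05"],  # variants share one procedure
        "preceded_by": ["TP-01 Create customer account"],
    },
}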
The previously described test procedures are insufficient for the implementation and
execution of test. Proper structuring of the test procedures includes revising and
modifying the described test procedures to include, at a minimum, the following
information:
- Set-up: how to create the
condition(s) for the test case(s) that is (are) being tested and what data is needed
(either as input or within the test database).
- Starting condition, state, or action for
the structured test procedure
- Instructions for execution: the detailed exact steps / actions taken by the tester to
implement / execute the tests (to the degree
of stating the object or component)
- Data values entered (or referenced test
case)
- Expected result (condition or data, or
referenced test case) for each action / step
- Evaluation of results: the method and steps used to analyze the actual results
obtained, comparing them with the expected results
- Ending condition, state, or action for the structured test procedure
Note: A described test procedure, when structured, may become several structured
test procedures that must be executed in sequence. This is done to maximize reuse and
minimize test procedure maintenance.
Test procedures can be manually executed or implemented (for automated execution).
When a test procedure is automated, the resulting computer-readable file is known
as a test script.
See Artifact: Test Procedure.
See Artifact: Test Script.
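To make the distinction concrete, the sketch below shows how a structured test procedure might be automated as a test script, here using Python's unittest module purely as an example. The application interface is a stand-in written for this sketch, not a real system; the set-up, execution steps, data values, expected results, evaluation, and ending condition map to the structured test procedure elements listed above.

import unittest

# Minimal stand-in for the system under test (hypothetical; a real test
# script would drive the actual application or its API).
class OrderSystemStub:
    def place_order(self, customer, items, card, expired=False):
        if not items:
            return {"status": "rejected", "reason": "cart is empty"}
        if expired:
            return {"status": "rejected", "reason": "card expired"}
        return {"status": "accepted", "confirmation": "ORD-0001"}


class PlaceOrderProcedure(unittest.TestCase):
    """Structured test procedure 'Place order', automated as a test script."""

    def setUp(self):
        # Set-up: create the starting condition for the test cases.
        self.system = OrderSystemStub()
        self.customer = "test-customer-001"

    def test_valid_credit_card(self):
        # Instructions for execution and the data values entered (TC-03).
        result = self.system.place_order(self.customer, ["book"], "4111111111111111")
        # Expected result and evaluation: compare actual with expected outcome.
        self.assertEqual(result["status"], "accepted")

    def test_expired_credit_card(self):
        # TC-04: same procedure, different data values and expected result.
        result = self.system.place_order(self.customer, ["book"],
                                         "4111111111111111", expired=True)
        self.assertEqual(result["status"], "rejected")

    def tearDown(self):
        # Ending condition: nothing to clean up for the in-memory stub.
        self.system = None


if __name__ == "__main__":
    unittest.main()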
Purpose
- To identify and describe the measures of test that will be used to assess the
completeness of testing
Tool Mentors:
Perform the following:
Test coverage measures are used to identify how complete the
testing is or will be.
There are two methods of determining test coverage:
- Requirements-based coverage.
- Code-based coverage.
Both identify the percentage of the total testable items
that will be (or have been) tested, but they are collected or calculated differently.
- Requirements-based coverage is based upon using use cases,
requirements, use case flows, or test conditions as the measure of total test items and
can be used during test design.
- Code-based coverage uses the code generated as the total test item
and measures a characteristic of the code that has been executed during testing (such as
lines of code executed or the number of branches traversed). This type of coverage
measurement can only be implemented after the code has been generated.
Identify the method to be used and state how the measurement will
be collected, how the data should be interpreted, and how the metric will be used in the
process.
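As a simple illustration of requirements-based coverage, the measure is just the number of tested items divided by the total number of testable items; the counts in the Python example below are made up.

def requirements_coverage(total_test_items, items_with_tests):
    """Requirements-based coverage: percentage of testable items
    (use cases, flows, or test conditions) covered by at least one test."""
    if total_test_items == 0:
        return 0.0
    return 100.0 * items_with_tests / total_test_items

# Hypothetical figures: 40 use-case flows identified, 28 have designed test cases.
print(f"Planned coverage: {requirements_coverage(40, 28):.0f}%")  # -> 70%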
The test plan identifies the schedule on which test coverage reports are generated
and distributed. These reports should be distributed to, at least, the following workers:
- all test workers
- developer representative
- stakeholder representative