Guidelines: Test Script

Test scripts are the computer-readable instructions that automate the execution of a test procedure (or a portion of a test procedure). Test scripts may be created (recorded) using a test automation tool, programmed using a programming language, or produced through a combination of recording and programming.
Topics
To increase the maintainability and reusability of your test scripts, structure them before you implement them. You will probably find actions that appear in several test procedures. A goal should be to identify these actions so that you can reuse their implementation.
For example, you may have test procedures that are combinations of different actions
you can perform to a record. These test procedures may be combinations of the addition,
modification, and the deletion of a record:
- Add, Modify, Delete (the obvious one)
- Add, Delete, Modify
- Add, Delete, Add, Delete,...
- Add, Add, Add,...
If you identify these actions as separate test procedures, implement them separately in different test scripts, and reuse them in other test procedures, you will achieve a higher level of reuse.
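The structure above can be sketched as follows. This is a minimal illustration (the function and record names are invented for the example, not taken from any real tool): each action is implemented once, and test procedures are built by sequencing the shared actions.

```python
# Illustrative sketch: reusable actions shared by several test procedures.

def add_record(db, key, value):
    """Reusable 'Add' action: one implementation shared by every procedure."""
    db[key] = value

def modify_record(db, key, value):
    """Reusable 'Modify' action."""
    db[key] = value

def delete_record(db, key):
    """Reusable 'Delete' action."""
    del db[key]

def procedure_add_modify_delete(db):
    """Test procedure built by sequencing the shared actions."""
    add_record(db, "id-1", "first")
    modify_record(db, "id-1", "second")
    delete_record(db, "id-1")

def procedure_add_delete_repeated(db, times):
    """Another procedure reusing the same actions: Add, Delete, Add, Delete, ..."""
    for i in range(times):
        add_record(db, f"id-{i}", "value")
        delete_record(db, f"id-{i}")

db = {}
procedure_add_modify_delete(db)
procedure_add_delete_repeated(db, 3)
assert db == {}   # every procedure leaves the store empty
```

A change to how records are added now touches one action implementation rather than every procedure that adds records.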
Another goal would be to structure your test procedures in such a way that a change in
the target software causes a localized and controllable change in your test procedures.
This will make your test procedures and test scripts more resilient to changes in the
target software. For example, say the login portion of the software has changed. For all test cases that traverse the login portion, only the test procedure and test script pertaining to login will have to change.
To achieve higher maintainability of your test scripts, you should record them in a way
that is least vulnerable to changes in the target software. For example, for a test
procedure that fills in dialog box fields, there are choices for how to proceed from one
field to the next:
- Use the TAB key
- Use the mouse
- Use the keyboard accelerator keys
Of these choices, some are more vulnerable to design changes than others. If a new
field is inserted on the screen the TAB key approach will not be reliable. If accelerator
keys are reassigned, they will not provide a good recording. If the method that the mouse
uses to identify a field is subject to change, that may not be a reliable method either.
However, some test automation tools have test script recorders that can be instructed to
identify the field by a more reliable method, such as its Object Name assigned by the
development tool (PowerBuilder, SQLWindows, or Visual Basic). In this way, a recorded test script is not affected by minor changes to the user interface (e.g., layout changes, field label changes, or formatting changes).
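The difference in robustness can be illustrated with a small sketch. The `Field`/form types here are hypothetical stand-ins, not a real GUI API: a lookup by tab position silently lands on the wrong field after a new field is inserted, while a lookup by the stable object name still finds the intended field.

```python
# Hypothetical sketch: identifying a field by a stable object name rather
# than by its position in the tab order.

from dataclasses import dataclass

@dataclass
class Field:
    object_name: str   # assigned by the development tool; stable
    label: str         # visible caption; may be reworded
    tab_index: int     # tab-order position; shifts when fields are inserted

def find_by_tab_index(form, index):
    return next(f for f in form if f.tab_index == index)

def find_by_object_name(form, name):
    return next(f for f in form if f.object_name == name)

form_v1 = [Field("txtName", "Name", 0), Field("txtPhone", "Phone", 1)]
# A later build inserts a new field ahead of "Phone" and relabels it:
form_v2 = [Field("txtName", "Name", 0),
           Field("txtEmail", "E-mail", 1),
           Field("txtPhone", "Phone #", 2)]

# Tab-order lookup now selects the wrong field...
assert find_by_tab_index(form_v1, 1).object_name == "txtPhone"
assert find_by_tab_index(form_v2, 1).object_name == "txtEmail"
# ...while the object-name lookup survives both the insertion and the relabel.
assert find_by_object_name(form_v2, "txtPhone").label == "Phone #"
```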
Many test procedures involve entering several sets of field data in a given data entry
screen to check field validation functions, error handling, and so on. The procedural
steps are the same; only the data is different. Rather than recording a test script for
every set of input data, a single recording should be made and then modified to handle
multiple data sets. For example, all the data sets that produce the same error because of
invalid data can share the same recorded test script. The test script is modified to
address the data as variable information, to read the data sets from a file or other
external source, and to loop through all of the relevant data sets.
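A data-driven script of this shape might look like the following sketch. The dialog-filling step and the error text are invented for illustration; the point is that one script body reads variable data from an external, comma-separated source and loops over every data set.

```python
# Sketch of a data-driven test script: the recorded procedure is parameterized
# on the field values, and a loop feeds it one data set at a time.

import csv
import io

def fill_dialog(name, age):
    """Stand-in for the recorded dialog steps; returns the software's error."""
    if not age.isdigit():
        return "age must be numeric"
    return ""

# Data sets that should all produce the same validation error share one script.
data_sets = io.StringIO(
    "Smith,abc\n"
    "Jones,12x\n"
    "Brown,-\n"
)

for name, age in csv.reader(data_sets):
    error = fill_dialog(name, age)
    assert error == "age must be numeric", (name, age, error)
```

In a real tool the file would sit next to the script, so new data sets are added by editing the data file, not the script.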
If test scripts or test code have been developed to loop through sets of input and output data, the data sets must be established. The usual format for these data sets
is records of comma-separated fields in a text file. This format is easy to read from test
scripts and test code, and is easy to create and maintain.
Most database and spreadsheet packages can produce comma-separated textual output.
Using these packages to organize or capture data sets has two important benefits. First,
they provide a more structured environment for entering and editing the data than simply
using a text editor or word processor. Second, most have the ability to query existing
databases and capture the returned data, allowing an easy way to extract data sets from
existing sources.
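Python's standard `csv` module handles this format directly, and a round trip through it shows why the format is easy to create and maintain (the file name and row contents here are illustrative):

```python
# Sketch: maintaining test data sets as comma-separated records in a text
# file, then reading them back field by field as a test script would.

import csv
import os
import tempfile

rows = [["Smith", "abc", "age must be numeric"],
        ["Jones", "", "age is required"]]

path = os.path.join(tempfile.mkdtemp(), "datasets.csv")
with open(path, "w", newline="") as f:
    csv.writer(f).writerows(rows)       # e.g. exported from a spreadsheet

with open(path, newline="") as f:
    read_back = list(csv.reader(f))

assert read_back == rows                # fields survive the round trip
```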
The recorded test script is sequential in its execution. There are no branch points.
Robust error handling in the test scripts requires additional logic to respond to error
conditions. Decision logic that can be employed when errors occur includes:
- Branching to a different test script.
- Calling a script that attempts to clean up the error condition.
- Exiting the script and starting the next one.
- Exiting the script and the software, re-starting, and resuming testing at the next test
script after the one that failed.
Each error-handling technique requires program logic added to the test script. As much
as possible, this logic should be confined to the high-level test scripts that control the
sequencing of lower-level test scripts. This allows the lower-level test scripts to be
created completely from recording.
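A high-level controlling script with this division of responsibility might look like the following sketch (script and cleanup names are invented): the lower-level scripts contain no error logic at all, while the controller logs the failure, attempts a cleanup, and continues with the next script.

```python
# Sketch: error-handling logic confined to the high-level script that
# sequences lower-level recorded scripts.

def script_login():
    pass                       # a lower-level recorded script, logic-free

def script_edit_record():
    raise RuntimeError("field not found")   # simulated playback failure

def script_report():
    pass

def cleanup():
    pass                       # attempts to return the software to a known state

failures = []
for script in (script_login, script_edit_record, script_report):
    try:
        script()
    except Exception as exc:
        failures.append((script.__name__, str(exc)))
        cleanup()              # then exit this script and start the next one

assert failures == [("script_edit_record", "field not found")]
```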
When doing stress testing, it is often desirable to synchronize test scripts so that
they start at predefined times. Test scripts can be modified to start at a particular time
by comparing the desired start time with the system time. In networked systems each test
station will share, via the network, the same clock. In the following example (from a
script written in Visual Basic) statements have been inserted at the start of a script to
suspend the execution of the script until the required time is reached.
InputFile$ = "\TIME.DAT"
Open InputFile$ For Input As #1
Input #1, StartTime$
Close #1
Do While TimeValue(StartTime$) > Time
    DoEvents
Loop
' [Start script]
In this example, the required start time is stored in a file. This allows the start
time to be changed without changing the test script. The time is read and stored in a
variable called StartTime$. The Do While loop continues until the starting time is
reached. The DoEvents statement is very significant. It allows background tasks to execute
while the test script is suspended and waiting to start. Without the DoEvents statement,
the system would be unresponsive until the start time had been reached.
When newly recorded test scripts are executed on the same software on which they were recorded, there should be no errors: the environment and the software are identical to when the scripts were recorded. Even so, there may be instances where a test script does not run successfully. Testing the test scripts uncovers these cases and allows the scripts to be corrected before they are used in a real test. Three typical kinds of problems are discussed here:
- Ambiguity in the methods used for selecting items in a user interface can make test scripts operate differently upon playback. For example, two items recognized by their text (or caption) may have identical text, making the selection ambiguous when the script is executed.
- Test-run- or session-specific data is recorded (e.g., a pointer, date/timestamp, or some other system-generated data value) but is different upon playback.
- Timing differences between recording and playback can lead to problems. Recording a test script is inherently slower than executing it. Sometimes this time difference results in the test script running ahead of the software. In these cases, wait states can be inserted to throttle the test script to the speed of the software.
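One common form of wait state is a polling loop: rather than assuming the software keeps pace with the script, the script polls for an expected condition (with a timeout) before taking the next step. The sketch below is a generic helper, not a feature of any particular tool; the condition callable stands in for whatever readiness check the tool provides.

```python
# Sketch of a wait state: poll for an expected condition, with a timeout,
# before the script proceeds to its next step.

import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)   # yield so the target software can run
    return False

# The condition becomes true shortly after the script starts waiting:
ready_at = time.monotonic() + 0.2
assert wait_for(lambda: time.monotonic() >= ready_at)
# A condition that never holds fails cleanly at the timeout:
assert not wait_for(lambda: False, timeout=0.2)
```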