Checkpoints:
Software Architecture (General)
General - Models
- Is the model at an appropriate level of detail given the model objectives?
- If it is an early prototype model or requirements model, is there an overemphasis on
detailed behavior and coding?
- If the model is a detailed design, is there an overall sparseness of behavior and coding
in various actors? This is often good (simple designs built up from low-level components)
but sometimes indicates an overuse of too fine-grained component actors.
- Does the model demonstrate that the developer has familiarity and competence with the
full breadth of modeling concepts (e.g., concurrent objects, replication, sequence
diagrams, state charts, class diagrams, layering, structural and behavioral hierarchy,
dynamic structure, importation, etc.)?
- Are the concepts modeled in the simplest way possible? There's a tendency when
learning any modeling technique to do things the complicated way; extra care needs to be
taken to guard against this.
- Is the model easily evolvable? Can a likely change to the model be effected easily by
the reviewers (this proves understandability)? Note, however, that in an iterative design
process, one must balance the development of a model to meet the requirements laid out for
the current increment versus spending too much time examining future requirements.
- To assist in evaluating this criterion, has the designer documented the key assumptions
behind the model as part of the design document, to allow reviewers to either accept or
counter those assumptions? If the assumptions are accepted for a given iteration, then the
model should be evolvable within those assumptions, but not necessarily outside of those
assumptions. Documenting assumptions protects designers from the charge of not having
examined "all possible requirements". In an iterative process, it is impossible to
analyze all possible requirements or to define a model that will handle every future
requirement.
General - Graphical Layout of Diagrams
- Is the purpose of the diagram clearly stated and clearly understood?
- Is the graphical layout of structure and behavior clean and conducive to design intent
communication?
- Is there too much on any one diagram? Complex diagrams are more difficult to understand.
Also, a complex diagram suggests that encapsulation is not being used to hide details.
This can lead to fragile designs, where a change in one component (class, interface, etc.)
affects many others. It may also indicate a lack of abstraction, if many 'similar' items
are represented as discrete units in a diagram. Can some of the discrete elements in the
diagram be replaced by a more generic abstraction?
- Is the placement of the model elements such that relationships among them are evident
(i.e., by grouping similar or closely coupled ones together)?
- Are the relationships among model elements easy to follow?
- Are all model elements labeled and are their labels placed so it is obvious what is
being labeled?
General - Documentation and Justification
- Does each model element have a distinct purpose?
- Is it possible to justify the existence of each model element?
General - Architecture
- Does the architecture provide clear roles and interfaces to enable partitioning for
parallel team development?
- Would a designer of a subsystem know enough from the architecture to develop the
subsystem in that context?
- Does the packaging scheme reduce complexity and improve understanding?
- Have packages been defined in such a way that they are highly cohesive within the
package and so that packages themselves are loosely coupled?
- Have similar solutions within the common application domain been considered?
- Is the proposed solution intuitively obvious? (the "aha" reaction)
- Do other people on the team share the same view of the architecture as the one presented
by the architect(s)?
- Are there more than 7 layers? More than 15 "important classes"? More than 1
error-handling mechanism? More than 1 programming language? More than 1 database? (the
"too many" heuristics)
- Is the Software Architecture Document current?
- Have the Design Guidelines been followed?
- Have you mitigated the technical risks? Have you taken care of them or put a contingency
plan in place? Have you discovered any new risks?
- Have you satisfied the key performance requirements (established budgets)?
- Have you identified the appropriate test cases, test harnesses, and test configurations?
- Is the architecture "over-designed"?
- Are the mechanisms in place too sophisticated to be used?
- Are there too many mechanisms? The system can then become too complex or too expensive.
- Are there too few mechanisms? Client classes can then suffer performance problems;
for example, storing 3 bytes in a DBMS or, conversely, storing 700 MB in a flat
binary file.
- Can you run all the use cases and scenarios for the current iteration on the
architecture, as demonstrated by views showing:
- Interactions between objects?
- Interactions between tasks and processes?
- Interactions between physical nodes?
Error recovery
- How is the system corrected when an error or exception occurs?
- How is the system protected against input or wrong data from the user or from external
systems?
- Is there a project-wide policy for handling exceptional situations? (Or is it left to
individual designers?)
- What happens when data in the database is corrupted?
- What happens if the database is not available? Can data still be entered into the system
and stored later?
- Is data being exchanged with other systems in complete, self-contained units? (risk of
loss of overall integrity)
- In a redundant system, can 2 or more systems think that they are primary?
What happens then, and how is this situation resolved? Conversely, can no system be
primary?
- In a distributed system, what are the failure modes? Are they documented?
- Are there external processes/programs to clean up the mess when things are left in an
inconsistent state?
- What happens when I/O queues or buffers are full?
- Can the system be reverted to a known state?
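A project-wide policy for exceptional situations, as asked about above, can be sketched as
a single handler layer that every entry point routes through. This is a minimal
illustrative sketch, not from the checklist; the `RecoverableError` and `guarded` names are
assumptions.

```python
"""Hypothetical sketch of a project-wide exception-handling policy:
every entry point applies one shared handler instead of each designer
improvising. All names here are illustrative."""
import logging

logger = logging.getLogger("app")

class RecoverableError(Exception):
    """Errors the system can correct, e.g., wrong input from a user."""

def guarded(entry_point):
    """Decorator applying the shared policy to a system entry point."""
    def wrapper(*args, **kwargs):
        try:
            return entry_point(*args, **kwargs)
        except RecoverableError as exc:
            logger.warning("recovered: %s", exc)  # log and continue
            return None
        except Exception:
            logger.exception("unrecoverable; reverting to known state")
            raise  # escalate so a supervisor can restart cleanly
    return wrapper

@guarded
def parse_quantity(text: str) -> int:
    try:
        value = int(text)
    except ValueError:
        raise RecoverableError(f"not a number: {text!r}")
    if value < 0:
        raise RecoverableError("quantity must be non-negative")
    return value

print(parse_quantity("12"))   # 12
print(parse_quantity("abc"))  # None (policy recovered)
```

A reviewer can then check the policy in one place rather than auditing every designer's
ad-hoc try/except blocks.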
Transition, installation
- How is the system installed if it replaces an existing system (such as a previous
release)? What impact does it have on the operations?
- How is data converted from any existing system (such as a previous release)? How long
does this take?
- Can you activate the system's functionality one use case at a time? How do you
assemble use cases dynamically or statically? How are conflicts between use cases
resolved?
- Same questions when a new release is installed. Must the system be taken down?
- Can the functionality be activated one feature at a time? How do you assemble features
dynamically or statically? How are conflicts between features resolved?
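Activating functionality one feature at a time is often done with feature flags plus an
explicit conflict table. A minimal sketch, assuming hypothetical feature names:

```python
"""Hypothetical feature-flag sketch: functionality is activated one
feature at a time, and mutually exclusive features are detected
before deployment. Feature names are illustrative."""

FLAGS = {"new_billing": True, "legacy_billing": False}

# Pairs of features that must never be active together.
CONFLICTS = [("new_billing", "legacy_billing")]

def enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def check_conflicts() -> list[tuple[str, str]]:
    """Return pairs of mutually exclusive features that are both on."""
    return [(a, b) for a, b in CONFLICTS if enabled(a) and enabled(b)]

assert check_conflicts() == []     # current configuration is consistent
FLAGS["legacy_billing"] = True     # simulate a bad rollout step
print(check_conflicts())           # [('new_billing', 'legacy_billing')]
```

Static assembly corresponds to fixing `FLAGS` at build time; dynamic assembly corresponds
to reading it from configuration at run time.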
Administration
- Can disk space be recovered while the system is active?
- Who maintains the system configuration and with which tool?
- Does the file system or the database require periodic reorganization or compaction?
- How is this need determined?
- How is the reorganization or compaction done?
- What impact does it have on the system's availability?
- Is access to the operating system restricted? Does the access control prevent the system
from performing correctly?
- What is the licensing strategy?
- Can diagnostic routines be run on an on-line system?
- Does the system monitor itself (capacity threshold, critical performance threshold,
resource exhaustion)?
- What actions are being taken?
- Where do alarms go?
- Is there a single alarm mechanism?
- Can it be tuned to prevent false or redundant alarms?
- How is the network (LAN, WAN) administered? Monitored? Can faults be isolated?
- Is there some tracing facility that can be turned on or off to help troubleshooting?
- What overhead is added?
- What special tools or training are required?
- Can a malicious user:
- enter the system?
- destroy critical data?
- consume all resources?
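A tracing facility that can be turned on or off, as asked about above, can be built on the
standard `logging` module; the overhead question is addressed by guarding message
construction when tracing is off. The `transfer` function is a made-up example:

```python
"""Sketch of a run-time tracing switch for troubleshooting, built on
the standard logging module. The traced function is illustrative."""
import logging
import sys

trace = logging.getLogger("trace")
trace.addHandler(logging.StreamHandler(sys.stderr))
trace.setLevel(logging.CRITICAL)          # tracing off by default

def set_tracing(on: bool) -> None:
    trace.setLevel(logging.DEBUG if on else logging.CRITICAL)

def transfer(amount: int) -> int:
    # isEnabledFor avoids formatting the message when tracing is off,
    # keeping the overhead of disabled tracing near zero
    if trace.isEnabledFor(logging.DEBUG):
        trace.debug("transfer(%d)", amount)
    return amount

set_tracing(True)
transfer(100)   # emits a trace line on stderr
set_tracing(False)
transfer(200)   # silent
```

The same switch can be exposed through an administration interface so operators can enable
tracing on a live system without a restart.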
Performance
- Are the nominal and maximal performance thresholds specified?
- Are there test sets that represent them?
- How will testing assess whether the requirements are met?
- Is there a performance model (back of the envelope, queuing model, discrete event
simulation) to determine that performance will be met? Is the model fed with realistic or
measured data?
- Do the tests and the model cover only the steady-state mode, or are startup and
catastrophic conditions also taken into account?
- Are there known performance bottlenecks?
- Can the system be easily spread across a multi-CPU/shared-memory processor? Can certain
processes be distributed? Can certain processing be deferred (e.g., to night operations)?
- How much headroom is allowed in the CPU utilization? How is it assessed?
- Are the load or performance requirements reasonable? Can a user really enter that many
schmoldus per minute? Does the user really need to see the result in less than
50 milliseconds?
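The simplest back-of-the-envelope performance model mentioned above can be a single M/M/1
queue: utilization tells you whether the server saturates, and the standard response-time
formula gives a first estimate. The arrival rate and service time below are illustrative
assumptions, not measured data:

```python
"""Back-of-the-envelope queuing check (M/M/1) against a performance
budget. The arrival rate and service time are assumed figures."""

arrival_rate = 40.0      # requests per second (assumed)
service_time = 0.020     # seconds of service per request (assumed)

utilization = arrival_rate * service_time     # rho = lambda * s
assert utilization < 1.0, "server saturated: the budget cannot be met"

# Mean response time in an M/M/1 queue: W = s / (1 - rho)
response_time = service_time / (1.0 - utilization)
print(f"utilization = {utilization:.0%}")                    # 80%
print(f"mean response time = {response_time * 1000:.0f} ms") # 100 ms
```

Even this crude model shows how steeply response time grows as utilization approaches
100%, which is why the headroom question above matters.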
Memory
- Are there memory budgets?
- What is done to detect or prevent memory leaks?
- How is the virtual memory system used? Monitored? Tuned?
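One way to detect memory leaks, as asked about above, is to measure traced allocation
growth across a workload with the standard `tracemalloc` module. The unbounded cache below
is a deliberately contrived leak:

```python
"""Sketch of detecting memory growth with the standard tracemalloc
module. The ever-growing cache is a deliberately contrived leak."""
import tracemalloc

cache = []  # grows without bound: the "leak"

def leaky(n: int) -> None:
    cache.append(bytearray(n))

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(100):
    leaky(10_000)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = after - before
print(f"allocated growth: {growth} bytes")
assert growth > 900_000  # ~1 MB retained across the run: a leak candidate
```

Running such a check repeatedly over identical workloads, and alarming when retained
memory keeps climbing, turns the checklist question into an automated test.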
Cost and schedule
- Using the Implementation Model, identify where software is being developed
- for each subsystem or layer:
- determine the total SLOC count and how much is already available
- examine the hypotheses behind the estimate.
- Obtain data on past productivity performance. Use a model such as COCOMO to re-compute
the development cost and schedule.
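Re-computing cost and schedule with COCOMO can be done on a few lines. The sketch below
uses Boehm's published Basic COCOMO organic-mode coefficients; the SLOC figures are
illustrative assumptions:

```python
"""Basic COCOMO (organic mode) re-computation of effort and schedule
from a SLOC count. Coefficients are Boehm's published organic-mode
values; the SLOC figures are illustrative."""

new_sloc = 40_000
reused_sloc = 8_000                           # already available
ksloc = (new_sloc - reused_sloc) / 1000.0     # 32 KSLOC to develop

effort_pm = 2.4 * ksloc ** 1.05               # person-months
schedule_m = 2.5 * effort_pm ** 0.38          # calendar months

print(f"effort   ~ {effort_pm:.0f} person-months")
print(f"schedule ~ {schedule_m:.1f} months")
print(f"avg staff ~ {effort_pm / schedule_m:.1f} people")
```

Comparing such a re-computation against the team's own estimate is a quick sanity check;
large discrepancies point at hypotheses that need examining.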
Portability
- Use the Implementation Model; identify bindings to platform- or OS-specific elements, or
parts that exploit vendor-specific features
- examine coding standards:
- Do they contain specific clauses to guarantee portability?
- How is the standard enforced?
- In some cases, portability is best assessed by doing a test port.
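Enforcement of a portability clause can be partly automated by scanning sources for
platform-specific dependencies outside an approved platform layer. A hypothetical sketch;
the module names flagged here are illustrative:

```python
"""Hypothetical enforcement check for a portability clause of a coding
standard: flag imports of platform-specific modules. The list of
flagged modules is illustrative, not a complete standard."""
import re

PLATFORM_ONLY = re.compile(
    r"^\s*import\s+(winreg|msvcrt|fcntl|termios)\b", re.MULTILINE)

def portability_violations(source: str) -> list[str]:
    """Return platform-specific modules imported by this source text."""
    return PLATFORM_ONLY.findall(source)

sample = "import os\nimport winreg\nimport json\n"
print(portability_violations(sample))  # ['winreg']
```

Such a check only catches the obvious cases; as noted above, a test port to the target
platform remains the most reliable assessment.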
Reliability
- Are required measures of quality (MTBF, number of outstanding defects, etc.) met?
Security
- Have security requirements been met?
Performance
- Use the process view. Examine the core, critical scenarios and how they
unfold onto the process view. Allocate time in each process, and overhead for
interprocess communication, based on the physical view. A crude envelope
estimate can be done using simple arithmetic. For a better assessment, consider using
a tool that allows the definition of a queuing model, for example.
- Avoid bottleneck objects (too many messages going through them).
- Does a scenario involve a very high message count (potential indication of bad or
inefficient design)?
- Is executable start-up (initialization) too long?
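The "simple arithmetic" envelope estimate above can be sketched by summing per-process
service times along a scenario's path and adding a fixed interprocess-communication cost
per message hop. All figures below are assumed, not measured:

```python
"""Crude envelope estimate for one core scenario: sum per-process
service times plus a fixed interprocess-communication overhead per
message hop. All figures are assumed, not measured."""

# (process, service time in ms) along the scenario's path
hops = [("ui", 5.0), ("app_server", 20.0), ("database", 15.0)]
ipc_overhead_ms = 2.0            # per message crossing process boundaries

messages = len(hops) - 1         # message hops between the processes
latency_ms = sum(t for _, t in hops) + messages * ipc_overhead_ms
print(f"estimated scenario latency: {latency_ms:.0f} ms")

budget_ms = 50.0
print("within budget" if latency_ms <= budget_ms else "over budget")
```

If the IPC term dominates, that is the high-message-count symptom flagged above; the fix
is usually to coarsen the interactions, not to speed up the processes.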
Organizational Issues
- Are the teams well-structured? Are responsibilities well-partitioned between
teams?
- Are there political, organizational or administrative issues that restrict the
effectiveness of the teams?
- Are there personality conflicts?