Purpose
  • To uncover some unknown or perceived risks in the schedule or budget.
  • To detect architectural design flaws. Architectural flaws are known to be the hardest to fix and the most damaging in the long run.
  • To detect a potential mismatch between the requirements and the architecture: over-design, unrealistic requirements, or missing requirements. In particular the assessment may examine some aspects often neglected in the areas of operation, administration and maintenance. How is the system installed? Updated? How do we transition the current databases?
  • To evaluate one or more specific architectural qualities: performance, reliability, modifiability, security, safety.
  • To identify reuse opportunities.
Steps
Worker: Architecture Reviewer
Participants: Architect, Designer
Work Guidelines:
  • Conduct reviews in a meeting format, although the participants of the meetings might prepare some reviews on their own.
  • Continuously monitor quality during design to prevent large numbers of defects from remaining hidden until the reviews. To reinforce this, the checkpoints listed below are referenced in each design activity; use them for informal review meetings or in daily work.
  • Checklists:

Seen from 20,000 feet, there is not much that distinguishes a software architecture assessment from any other assessment or review.

However, one important characteristic of software architecture is the lack of specific measurements for many architectural quality attributes: only a few architectural qualities can be objectively measured. Performance is an example where measurement is possible. Other qualities are more qualitative or subjective, conceptual integrity for example. Moreover, it is often hard to decide what a metric means in the absence of other data or a reference for comparison. If a reference system is available and understood by the target audience, it is often convenient to express some of the results of the review relative to this reference system. This may happen in a context where the system under design can be compared to an earlier design.

Where in the life-cycle this assessment takes place also affects its purpose and usefulness.

  1. At the end of the inception phase in an initial development cycle, there is usually little concrete architecture in place. But a review may uncover unrealistic objectives, missing pieces, missed opportunities to reuse existing products, etc.
  2. The most natural place for a software architecture assessment is at the end of the elaboration phase. This phase is primarily focused on exploring the requirements in detail and baselining an architecture. An architecture review is mandated by our process at this milestone. This is the point at which the broadest range of architectural qualities is examined.
  3. More focused assessments may take place during the construction phase to examine specific quality attributes, such as performance or safety, and at the end of the construction phase for any lingering issues that may make the product unfit to be put in the hands of its end-users.
  4. Damage-control assessments may take place late in the construction or even transition phases, when things have gone really wrong: construction does not complete, or an unacceptable level of problems arise in the installed base during the transition.
  5. Finally, an assessment may take place at the end of the transition phase, in particular to inventory reusable assets for a possible new product or evolution cycle.

Plan the review

Purpose
  • To determine the focus of the review
  • To determine the scope of the review

Prior to the review, define its scope by determining the questions that will be asked: what will be assessed, and why? See the checkpoints referenced above for the types of questions that could be asked. The exact questions will depend on the phase of the project: earlier reviews will be concerned with broader architectural issues, later reviews will be more specific.

Once the scope of the review has been determined, define the review participants, the agenda, and the information that will be required to perform the review. In selecting the participants, establish a balance between software architecture expertise and domain expertise. Clearly and unambiguously designate an assessment leader (the Architecture Reviewer) who will coordinate the review. If necessary, draw upon other teams or other parts of the organization to supply domain or technical expertise.

Reviewers should be experienced in either the software architecture or the domain; inexperienced reviewers may learn something about the architecture by participating, but they will contribute little to the review.

The number of reviewers should be approximately seven or fewer. If chosen appropriately, they will be more than capable of identifying problems in the architecture. More reviewers actually reduce the quality of the review by making the meetings longer, making participation more difficult, and injecting side issues and discussion into the review. Fewer than four reviewers increases the risk of architectural myopia, as the diversity of concerns is reduced.

Prepare for the review

Purpose
  • To gather and distribute background material for the assessment prior to the Review sessions, so that Reviewers have sufficient time to understand the architecture and form comments.
Guidelines:

The primary source of information for the review is the Software Architecture Document, supplemented with additional details of architecturally interesting parts of the design model, design notes, or additional explanatory documentation. In addition, review checklists should be circulated to stimulate questions and raise issues.

Reviewers should study the documentation, forming questions and identifying issues to discuss, prior to the review. Given the normal workload of reviewers, a few working days is usually the minimum time needed to prepare for the review.

Conduct the review

Purpose
  • To assess the overall health of the Software Architecture
  • To identify major "holes" in the architecture.

In general, the review process follows a repetitive cycle:

  • An issue is raised by a reviewer
  • The issue is discussed, and potentially confirmed
  • A defect is identified (something is identified as needing to be addressed)
  • Continue until no more issues are identified
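
As an illustration only (this is not part of the process), the cycle can be pictured as a minimal record of review items, sketched here in Python with invented names and fields:

from dataclasses import dataclass, field

@dataclass
class ReviewItem:
    description: str                                # the issue as raised by a reviewer
    raised_by: str                                  # who raised it
    confirmed: bool = False                         # set once discussion confirms the issue
    notes: list[str] = field(default_factory=list)  # discussion notes

def confirmed_defects(items: list[ReviewItem]) -> list[ReviewItem]:
    """A confirmed issue becomes a defect: something that must be addressed."""
    return [item for item in items if item.confirmed]

raised = [
    ReviewItem("No failover path for the persistence layer", "reviewer A", confirmed=True),
    ReviewItem("Process view missing for the batch jobs", "reviewer B"),
]
for defect in confirmed_defects(raised):
    print("Defect to address:", defect.description)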

Several approaches can be used to conduct the review:

  • representation driven
  • information driven
  • scenario driven
Representation-driven review

Obtain (or build) a representation of the architecture, then ask questions and reason based on this representation.

There is a wide range of situations here: from organizations that are very architecture-literate and will provide an intelligible description to start with, to organizations where you first need to identify who the architect is (perhaps hidden under some other title) and extract the information from that person, to places where software architecture is a totally unknown concept. In that last case the process is called "mining the architecture," and in practice it looks literally like that: digging the architecture out of the software or its documentation with a pickax, looking at source code, interfaces, configuration data, etc.

One model that can be used to organize the representation is the set of architectural views presented in the Software Architecture Document: the logical view organizes the main classes (the object model), the process view describes the main threads of control and how they communicate, the development view shows the various subsystems and their dependencies, and the physical view describes the mapping of elements of the other views onto one or several physical configurations. Organize issues along these views.
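
For illustration only, here is one hypothetical way to keep issues filed along those views during the review; the view names follow the paragraph above, and the issue entries are invented:

from collections import defaultdict

VIEWS = ("logical", "process", "development", "physical")
issues_by_view = defaultdict(list)   # view name -> list of issue descriptions

def record_issue(view, description):
    """File an issue under one of the architectural views."""
    if view not in VIEWS:
        raise ValueError(f"unknown view: {view}")
    issues_by_view[view].append(description)

record_issue("process", "Two threads share the request cache with no locking policy")
record_issue("physical", "No node is identified for the backup database")

for view in VIEWS:
    for description in issues_by_view[view]:
        print(f"[{view} view] {description}")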

Information-driven review

Establish the list of information (data, measurements) that is needed for the reasoning, obtain that information, and compare it to either the requirements or some accepted reference standard. This applies well to investigating certain quality attributes, such as performance or robustness.
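
A minimal sketch of such a comparison, with invented attribute names and figures; in practice the thresholds would come from the requirements or a reference standard:

# Required values (from requirements or a reference standard) and measured values.
required = {"mean_response_time_ms": (200, "max"),   # must not exceed
            "requests_per_hour":     (150, "min")}   # must meet or exceed
measured = {"mean_response_time_ms": 240,
            "requests_per_hour":     180}

for attribute, (threshold, kind) in required.items():
    value = measured[attribute]
    ok = value <= threshold if kind == "max" else value >= threshold
    status = "OK" if ok else "ISSUE"
    print(f"{attribute}: measured {value}, required {kind} {threshold} -> {status}")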

Scenario-driven review

This is the systematic "what if" approach. Transform the general questions being asked into a set of scenarios the system should go through, and ask questions based on those scenarios. Examples of such scenarios are:

  • The system runs on platforms X and Y. (The real quality attribute probed is portability.)
  • The system does this (additional) function F. (The real quality attribute is extensibility.)
  • The system processes 200 requests per hour. (The real quality attribute is scalability.)
  • The system is being installed on this kind of site by the end user. (The real quality attribute is completeness or usability.)
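
As a sketch only, the scenarios above can be tabulated with the quality attribute each one probes and expanded into concrete questions; the question templates below are invented for illustration:

scenarios = [
    ("The system runs on platforms X and Y", "portability"),
    ("The system does additional function F", "extensibility"),
    ("The system processes 200 requests per hour", "scalability"),
    ("The system is installed on site by the end user", "completeness/usability"),
]

def questions_for(scenario):
    """Turn a scenario into the concrete questions the reviewers walk through."""
    return [f"Which architectural elements are involved when: {scenario}?",
            f"What would have to change, and where, for this to hold: {scenario}?"]

for scenario, attribute in scenarios:
    print(f"Scenario (probes {attribute}): {scenario}")
    for question in questions_for(scenario):
        print("  -", question)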

The advantage of such an approach is that it puts the task in a very concrete perspective, understandable by all parties. It also makes it possible to probe for omissions or flaws in the requirements, especially when the requirements are informal, unwritten, or very general and terse. The disadvantage is that it does not treat the architecture itself as the object being reviewed, but takes the system as a black box into which we are only sending some probes.

In practice, things are not so clearly separated, and we end up doing a bit of all three approaches.

Identifying issues

Uncovering potential issues is mostly done by human judgment based upon knowledge and experience. Certain failure patterns are repeated from project to project and from organization to organization, and certain heuristics can be used to uncover problem areas. Checklists can be useful (some very generic ones are proposed later), as well as results from previous reviews, if any.

Capture potential issues as they appear, describing them in a neutral tone: no finger pointing, no "catastrophism". You may use small cardboard cards, as AT&T reviewers do or as we do with CRC cards, to help with prioritizing, organizing, and eliminating them.

Later, sort the candidate issues by decreasing scope or impact, and if there are many, tackle first the ones that are directly related to the question at hand, leaving the "other suggestions" for later if time permits. Then assert the reality of each problem: very often one perceives a problem that turns out not to be one, simply because we have not spoken to the right person or looked at the right piece of information. Sort again. Ensure there are multiple data points to verify that a problem is real. (Inexperienced assessors tend to be too single-threaded.)
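
A hypothetical sketch of that triage, with invented field names and scores: sort by decreasing impact, deal first with issues tied to the question at hand, and treat an issue as confirmed only when more than one data point supports it.

from dataclasses import dataclass

@dataclass
class CandidateIssue:
    description: str
    impact: int                  # rough scope/impact score; higher means broader
    related_to_question: bool    # directly tied to the review's stated question?
    data_points: int = 1         # independent confirmations gathered so far

candidates = [
    CandidateIssue("Single database is a throughput bottleneck", 8, True, data_points=2),
    CandidateIssue("Subsystem interface naming is inconsistent", 2, False),
    CandidateIssue("No strategy for migrating the current databases", 6, True),
]

candidates.sort(key=lambda c: c.impact, reverse=True)           # decreasing scope/impact
primary = [c for c in candidates if c.related_to_question]      # tackle these first
other_suggestions = [c for c in candidates if not c.related_to_question]
confirmed = [c for c in primary if c.data_points >= 2]          # multiple data points

for issue in confirmed:
    print("Confirmed problem:", issue.description)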

When the problem has been confirmed, rapidly examine what could eliminate it, without necessarily trying to redesign the system on the fly. Write down potential simplifications, reuse opportunities, and alternatives: buy vs. build.

Allocate defect resolution responsibilities

Purpose
  • To take action on the defects identified.

After the review, allocate responsibility for each defect identified. "Responsibility" in this case may not mean fixing the defect, but coordinating additional investigation of alternatives, or coordinating the resolution of the defect if it is far-reaching or broad in scope.
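
A small illustrative sketch of such an allocation; the defects, owners, and responsibility labels are invented:

from dataclasses import dataclass

@dataclass
class DefectAssignment:
    defect: str
    owner: str
    responsibility: str   # e.g. "fix", "investigate alternatives", "coordinate resolution"

assignments = [
    DefectAssignment("No failover path for the persistence layer",
                     "architect", "investigate alternatives"),
    DefectAssignment("No strategy for migrating the current databases",
                     "designer", "coordinate resolution"),
]

for a in assignments:
    print(f"{a.defect} -> {a.owner} ({a.responsibility})")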
