Activity: Describe Concurrency
Identify concurrency requirements
Concurrency requirements are driven by several, often competing, factors.
As with many architectural problems, these requirements may be mutually exclusive; at least initially, it is not uncommon for them to conflict. Ranking the requirements in order of importance helps resolve such conflicts.

Identify processes
For each separate flow of control needed by the system, create a process or a thread (lightweight process). Use a thread where nested flow of control is needed, i.e., where a process requires independent flows of control at the sub-task level. Separate threads of control may also be needed to handle independent sources of asynchronous events within a single process.
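The pattern of one thread of control per independent event source can be sketched as follows. This is an illustrative Python sketch, not part of the activity description; the source names and events are hypothetical:

```python
import threading
import queue

# One shared queue into which each source thread posts its events;
# a coordinator on the main flow of control drains it.
events: "queue.Queue[str]" = queue.Queue()

def monitor(source: str, raw_events: list) -> None:
    """Independent flow of control for one asynchronous event source."""
    for e in raw_events:
        events.put(f"{source}:{e}")

# Three independent sources, as in a system that must react to a user,
# to local devices, and to a network at the same time.
sources = {
    "user": ["card-inserted"],
    "devices": ["dispenser-jam"],
    "network": ["shutdown"],
}

threads = [threading.Thread(target=monitor, args=(name, evs))
           for name, evs in sources.items()]
for t in threads:
    t.start()
for t in threads:   # explicit end of life: wait for each thread to finish
    t.join()

collected = sorted(events.queue)
```

Note that the threads are both created and explicitly waited for; the creation and destruction of each flow of control is part of its lifecycle and must be designed, not left implicit.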
Example: In the Automated Teller Machine, asynchronous events coming from three different sources must be handled: the user of the system, the ATM devices (in the case of a jam in the cash dispenser, for example), or the ATM Network (in the case of a shutdown directive from the network). To handle these asynchronous events, we can define three separate threads of execution within the ATM itself, as shown below using active classes in UML.

Processes and Threads within the ATM

Identify process lifecycles
Each process or thread of control must be created and destroyed. In a single-process architecture, process creation occurs when the application is started and process destruction occurs when the application ends. In multi-process architectures, new processes (or threads) are typically spawned or forked from the initial process created by the operating system when the application is started; these processes must be explicitly destroyed as well. The sequence of events leading up to process creation and destruction, as well as the mechanism for creation and destruction, must be determined and documented.

Example: In the Automated Teller Machine, one main process is started which is responsible for coordinating the behavior of the entire system. It in turn spawns a number of subordinate threads of control to monitor various parts of the system: the devices in the system, and events emanating from the customer and from the ATM Network. The creation of these processes and threads can be shown with active classes in UML, and the creation of instances of these active classes can be shown in a sequence diagram, as shown below.

Creation of processes and threads during system start-up

Identify inter-process communication mechanisms
Inter-process communication (IPC) mechanisms enable messages to be sent between objects executing in separate processes. Typical IPC mechanisms include:

- Shared memory
- Semaphores
- Message queues and mailboxes
- Remote procedure calls (RPC)
- Event broadcast (a "message bus")
The choice of IPC mechanism will change the way the system is modeled; in a "message bus" architecture, for example, there is no need for explicit associations between objects to send messages.

Allocate inter-process coordination resources
Inter-process communication mechanisms are typically scarce. Semaphores, shared memory, and mailboxes are usually fixed in size or number and cannot be increased without significant cost. RPC, messages, and event broadcasts soak up increasingly scarce network bandwidth. When the system exceeds a resource threshold, it typically experiences non-linear performance degradation: once a scarce resource is used up, subsequent requests for it are likely to have an unpleasant effect. If scarce resources are over-subscribed, there are several strategies to consider.
Regardless of the strategy chosen, the system should degrade gracefully (rather than crashing) and should provide adequate feedback to a system administrator so that the problem can be resolved (if possible) in the field once the system is deployed. If the system requires special configuration of the run-time environment to increase the availability of a critical resource (often controlled by re-configuring the operating-system kernel), the installation procedure needs either to do this automatically or to instruct a system administrator to do it before the system can become operational. For example, the system may need to be re-booted before the change will take effect.

Map processes onto the implementation environment
Conceptual processes must be mapped onto specific constructs in the operating environment. In many environments there is a choice of process types, usually at least processes and threads. The choice is based on the degree of coupling (processes are stand-alone, whereas threads run in the context of an enclosing process) and on the performance requirements of the system (inter-process communication between threads is generally faster and more efficient than that between processes). In many systems there may be a maximum number of threads per process or processes per node. These limits may not be absolute, but may be practical limits imposed by the availability of scarce resources. The threads and processes already running on a target node need to be considered along with those proposed in the process architecture. The results of the earlier step, Allocate Inter-Process Coordination Resources, need to be considered when the mapping is done, to make sure that a new performance problem is not being created.

Distribute model elements among processes
Instances of a given class or subsystem must execute within at least one process; they may in fact execute in several different processes. The process provides an "execution environment" for the class or subsystem. Using two different strategies simultaneously, we determine the "right" amount of concurrency and define the "right" set of processes:

- Inside-out
- Outside-in

This is not a linear, deterministic process leading to an optimal process view; it requires a few iterations to reach an acceptable compromise.

Example: The following diagram illustrates how classes within the ATM are distributed among the processes and threads in the system.

Mapping of classes onto processes for the ATM
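A distribution of this kind can be recorded (or prototyped) as a simple mapping from classes to the processes or threads that host their instances. The class and process names below are hypothetical, chosen only to echo the ATM example:

```python
# Hypothetical mapping of design classes onto the processes/threads that
# provide their execution environment. A class may appear under more than
# one process when its instances execute in several of them.
distribution = {
    "ATM Main Process": ["TransactionCoordinator"],
    "Device Monitor Thread": ["CashDispenser", "CardReader"],
    "Customer Event Thread": ["CustomerSession"],
    "Network Event Thread": ["NetworkHandler"],
}

def hosts_of(cls: str) -> list:
    """All processes/threads in which instances of `cls` execute."""
    return [proc for proc, classes in distribution.items() if cls in classes]
```

A table of this shape makes the iteration described above cheap: moving a class between processes is a one-line change that can be reviewed against the concurrency requirements before the design is committed.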