Algorithm Structure


After analyzing the concurrency in a problem, perhaps by using the patterns in the Finding Concurrency design space, the next task is to refine the design, moving it closer to a program that can execute tasks concurrently by mapping the concurrency onto multiple units of execution (UEs) running on a parallel computer.

Of the countless ways to define an algorithm structure, most follow one of six basic design patterns. These patterns make up the Algorithm Structure design space. The figure shows the patterns in the design space and their relationship to the other design spaces.

[Figure: the patterns of the Algorithm Structure design space and their relationship to the other design spaces.]

The key issue at this stage is to decide which pattern or patterns are most appropriate for the problem.  In making this decision, various forces such as simplicity, portability, scalability, and efficiency may pull the design in different directions.  The features of the target platform must also be taken into account.

The concurrency usually implies a major organizing principle that helps in choosing a pattern. This principle generally falls into one of three categories:
Organization by tasks
Task Parallelism
How can an algorithm be organized around a collection of tasks that can execute concurrently?
Divide and Conquer
Suppose the problem is formulated using the sequential divide and conquer strategy. How can the potential concurrency be exploited?
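As an illustrative sketch (not from the text), the Task Parallelism question can be answered by handing a collection of independent tasks to a pool of workers. The function names `evaluate` and `run_tasks` are invented for this example; Python's standard `concurrent.futures` pool stands in for the UEs:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(x):
    """One independent task; here a stand-in numeric computation."""
    return x * x

def run_tasks(inputs):
    # The tasks share no data and have no ordering constraints, so the
    # pool is free to schedule them across its workers (the UEs).
    with ThreadPoolExecutor() as pool:
        return list(pool.map(evaluate, inputs))

print(run_tasks(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the tasks are completely independent, load balancing is the main design concern; the pool handles it by assigning tasks to workers as they become free.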
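For Divide and Conquer, the potential concurrency lies in the independent subproblems. A minimal sketch, assuming a recursive sum as the problem (the name `parallel_sum` and the depth cutoff are choices made for this example): one half of each split is submitted to the pool while the caller works on the other half, and a depth limit stops forking once enough parallelism has been exposed.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, pool, depth=2):
    """Divide: split the data in half. Conquer: process the halves
    concurrently. Combine: add the partial sums. `depth` caps how far
    the recursion keeps forking before a serial base case takes over."""
    if depth == 0 or len(data) < 2:
        return sum(data)
    mid = len(data) // 2
    left = pool.submit(parallel_sum, data[:mid], pool, depth - 1)
    right = parallel_sum(data[mid:], pool, depth - 1)  # caller keeps one half
    return left.result() + right

with ThreadPoolExecutor() as pool:
    print(parallel_sum(list(range(100)), pool))  # 4950
```

Submitting only one branch per call keeps the number of outstanding tasks bounded and avoids exhausting the pool with recursive waits.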

Organization by data decomposition
Geometric Decomposition 
How can an algorithm be organized around a data structure that has been decomposed into concurrently updateable ``chunks''? 
Recursive Data
Suppose the problem involves an operation on a recursive data structure (such as a list, tree, or graph) that appears to require sequential processing. How can operations on these data structures be performed in parallel?
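A sketch of the Geometric Decomposition idea, with invented names (`update_chunk`, `geometric_update`) and a deliberately trivial per-element update: the data structure is split into contiguous chunks, each chunk is updated concurrently, and the results are reassembled. A real stencil computation would additionally exchange ghost-cell values at chunk boundaries.

```python
from concurrent.futures import ThreadPoolExecutor

def update_chunk(chunk):
    """Update one chunk of the decomposed data; here each element is
    simply incremented."""
    return [x + 1 for x in chunk]

def geometric_update(data, nchunks=4):
    # Decompose the array into contiguous chunks, update the chunks
    # concurrently, then reassemble them in order.
    size = (len(data) + nchunks - 1) // nchunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as pool:
        updated = list(pool.map(update_chunk, chunks))
    return [x for chunk in updated for x in chunk]

print(geometric_update(list(range(10))))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```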
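The classic answer for Recursive Data is pointer jumping. The sketch below is a sequential simulation of the parallel algorithm (each round conceptually runs on all nodes at once) that computes every list node's distance to the end of the list in O(log n) rounds rather than one O(n) traversal; the function name `list_ranking` and the encoding of the list as an array of successor indices are choices made for this example.

```python
def list_ranking(next_node):
    """Pointer jumping over a linked list. `next_node[i]` is the index
    of node i's successor, or None at the tail. Each round, every node
    replaces its successor pointer with its successor's successor,
    doubling the distance covered; ranks are accumulated along the way.
    Returns each node's distance to the end of the list."""
    n = len(next_node)
    rank = [0 if next_node[i] is None else 1 for i in range(n)]
    succ = list(next_node)
    changed = True
    while changed:
        changed = False
        new_rank, new_succ = list(rank), list(succ)
        # Conceptually, all nodes perform this step simultaneously.
        for i in range(n):
            if succ[i] is not None:
                new_rank[i] = rank[i] + rank[succ[i]]
                new_succ[i] = succ[succ[i]]
                changed = True
        rank, succ = new_rank, new_succ
    return rank

print(list_ranking([1, 2, 3, None]))  # [3, 2, 1, 0]
```

Restructuring an apparently sequential traversal this way is exactly the trade the Recursive Data pattern makes: more total work in exchange for far fewer sequential steps.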

Organization by flow of data
Pipeline 
Suppose that the overall computation involves performing a calculation on many sets of data, where the calculation can be viewed in terms of data flowing through a sequence of stages. How can the potential concurrency be exploited?
Event-based Coordination
Suppose the application can be decomposed into groups of semi-independent tasks interacting in an irregular fashion. The interaction is determined by the flow of data between them, which implies ordering constraints between the tasks. How can these tasks and their interaction be implemented so that they can execute concurrently?
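A minimal Pipeline sketch (the names `stage` and `run_pipeline` are invented for this example): each stage runs in its own thread and is connected to its neighbors by queues, so different data items occupy different stages at the same time.

```python
import queue
import threading

SENTINEL = object()  # marks the end of the data stream

def stage(fn, inbox, outbox):
    """One pipeline stage: take an item, transform it, pass it
    downstream, until the sentinel arrives."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(fn(item))

def run_pipeline(items, fns):
    # Chain the stages with queues; each stage gets its own thread, so
    # the calculation on item i+1 overlaps with later stages' work on
    # item i.
    queues = [queue.Queue() for _ in range(len(fns) + 1)]
    threads = [threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(fns)]
    for t in threads:
        t.start()
    for item in items:
        queues[0].put(item)
    queues[0].put(SENTINEL)
    results = []
    while (item := queues[-1].get()) is not SENTINEL:
        results.append(item)
    for t in threads:
        t.join()
    return results

print(run_pipeline(range(5), [lambda x: x + 1, lambda x: x * 2]))
# [2, 4, 6, 8, 10]
```

The FIFO queues preserve the order of the data sets, and throughput is limited by the slowest stage.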
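Event-based Coordination can be sketched with tasks that react to events addressed to them and may emit further events in response. The version below is a sequential simulation (a single event queue stands in for per-task mailboxes, and the `run_events` name and producer/consumer handlers are invented); in a parallel implementation each task would run on its own UE, consuming its mailbox concurrently, with ordering guaranteed only by the data dependencies between events.

```python
from collections import deque

def run_events(handlers, initial):
    """Dispatch each event (target_task, payload) to the handler for
    its target task; a handler may emit further events for other
    tasks. Returns the log of events processed."""
    pending = deque(initial)
    log = []
    while pending:
        target, payload = pending.popleft()
        log.append((target, payload))
        pending.extend(handlers[target](payload))
    return log

# A producer task that hands each nonzero item to a consumer task.
handlers = {
    "producer": lambda n: [("consumer", n * 10)] if n else [],
    "consumer": lambda n: [],
}
print(run_events(handlers, [("producer", 1), ("producer", 2)]))
# [('producer', 1), ('producer', 2), ('consumer', 10), ('consumer', 20)]
```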

The most effective parallel algorithm design may make use of multiple algorithm structures (combined hierarchically, compositionally, or in sequence). For example, it often happens that the very top level of the design is a sequential composition of one or more Algorithm Structure patterns. Other designs may be organized hierarchically, with one pattern used to organize the interaction of the major task groups and other patterns used to organize tasks within the groups -- for example, an instance of Pipeline in which individual stages are instances of Task Parallelism.
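The hierarchical combination described above can be sketched compactly. In this example (all names invented), two stages are composed in sequence over a stream of batches, and the second stage is internally organized as an instance of Task Parallelism, farming the independent elements of each batch out to a pool; a full pipeline would additionally overlap `stage_one` on one batch with `stage_two` on the previous one.

```python
from concurrent.futures import ThreadPoolExecutor

def stage_one(batch):
    """First stage, applied to each batch of the stream in turn."""
    return [x + 1 for x in batch]

def stage_two(batch, pool):
    """Second stage: internally task-parallel -- the elements of the
    batch are independent tasks scheduled onto the pool."""
    return list(pool.map(lambda x: x * x, batch))

def run(batches):
    # Sequential composition of the two stages; the concurrency here
    # comes from the task-parallel inner stage.
    with ThreadPoolExecutor() as pool:
        return [stage_two(stage_one(b), pool) for b in batches]

print(run([[1, 2], [3, 4]]))  # [[4, 9], [16, 25]]
```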

02 June, 2004