Pattern Name: GroupTasks

FindingConcurrency Design Space


Intent:

This pattern addresses the question "How can you group the tasks that make up a problem decomposition in a way that simplifies the job of managing dependencies?"

Motivation:

This pattern constitutes the first step in analyzing dependencies among the tasks of a problem decomposition. In developing the problem's task decomposition, we urged the designer to think of the problem in terms of tasks that can execute concurrently. While we did not emphasize it during the task decomposition, it is clear that these tasks do not constitute a flat set. For example, tasks derived from the same high-level operation in the algorithm are naturally grouped together. Other tasks may not be related in terms of the original problem but have similar constraints on their concurrent execution and can thus be grouped together.

In short, there is considerable structure to the set of tasks. These structures -- these groupings of tasks -- simplify a problem's dependency analysis. If a group shares a temporal constraint, you can satisfy that constraint once for the whole group. If a group of tasks must work together on a shared data structure, the required synchronization can be worked out once for the whole group. If the tasks in a group are independent of one another in every way, grouping them together may simplify the design and increase the available concurrency (thereby letting the program scale to more processing elements).

In each case, the idea is to define groups of tasks that share constraints and simplify the problem of managing constraints by dealing with groups rather than individual tasks.
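As an illustration of the first of these cases, here is a minimal sketch (in Python, using concurrent.futures; the task functions and group sizes are hypothetical stand-ins) of satisfying a temporal constraint once for a whole group rather than between individual pairs of tasks: every task in group A must complete before any task in group B starts.

    from concurrent.futures import ThreadPoolExecutor, wait

    def group_a_task(i):
        return ("A", i)          # stand-in for real work in group A

    def group_b_task(i):
        return ("B", i)          # stand-in for real work in group B

    with ThreadPoolExecutor() as pool:
        # Launch every task in group A; tasks within the group are independent.
        group_a = [pool.submit(group_a_task, i) for i in range(8)]

        # The temporal constraint is handled here, once for the whole group,
        # rather than being tracked between individual pairs of tasks.
        wait(group_a)

        # Only now may the tasks in group B begin.
        group_b = [pool.submit(group_b_task, i) for i in range(8)]
        wait(group_b)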

Applicability:

Use this pattern when you have decomposed a problem into a collection of tasks that can execute concurrently and you are ready to begin analyzing the dependencies among those tasks; grouping the tasks is the first step of that analysis.

Implementation:

Constraints among tasks fall into a few major categories, as follows.

1. Temporal dependencies: constraints on the order in which a collection of tasks executes.

2. Requirements that a collection of tasks work together on a shared data structure, so their updates must be synchronized.

3. Independence: the tasks in a collection have no ordering or data constraints among them at all.

The goal of this design pattern is to group tasks based on these constraints, because a constraint that applies to a whole group of tasks can then be understood and satisfied once, for the group, rather than separately for each task.

For a given problem and decomposition, there may be many ways to group tasks. The goal is to pick a grouping of tasks that simplifies the dependency analysis. To clarify this point, think of the dependency analysis as finding and satisfying constraints on the concurrent execution of a program. When tasks share a set of constraints, it simplifies the dependency analysis to group them together.
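As a small illustration of this idea, the following sketch (the task names and constraint tags are hypothetical, not prescribed by the pattern) tags each task with the constraint it shares with other tasks and collects the tasks into groups keyed by that constraint, so the dependency analysis then deals with a handful of groups rather than with every task individually.

    from collections import defaultdict

    # Each task is tagged with the constraint it shares with other tasks.
    # The tags ("sum_into_shared_array", "order_after_setup", "independent")
    # are illustrative only.
    tasks = [
        ("task_a", "sum_into_shared_array"),
        ("task_b", "sum_into_shared_array"),
        ("task_c", "order_after_setup"),
        ("task_d", "order_after_setup"),
        ("task_e", "independent"),
    ]

    groups = defaultdict(list)
    for name, constraint in tasks:
        groups[constraint].append(name)

    # Each shared constraint is now dealt with once, for its whole group.
    for constraint, members in groups.items():
        print(constraint, "->", members)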

There is no single way to find task groups. We suggest the following approach, keeping in mind that while you cannot think about task groups without considering the constraints themselves, at this point in the design it is best to do so as abstractly as possible -- identify the constraints and group tasks to help resolve them, but try not to get bogged down in the details.

1. First, look at how the original problem was decomposed. Tasks that correspond to the same high-level operation in the problem naturally define an initial set of groups.

2. Next, look at the constraints on each of these groups. If two or more groups share the same constraints, consider merging them into a single larger group so the shared constraint can be handled once.

3. Finally, note the constraints that remain between the resulting groups (for example, ordering or temporal constraints); these will drive the rest of the dependency analysis.

Examples:

Molecular dynamics.

In the Examples section of the DependencyAnalysis pattern we described the problem of designing a parallel molecular dynamics program. In that discussion, we defined the following tasks:

- Tasks that find the vibrational forces on an atom
- Tasks that find the rotational forces on an atom
- Tasks that find the nonbonded forces on an atom
- Tasks that update the position and velocity of an atom
- A task to update the neighbor list for all the atoms

Let's consider how these can be grouped together. As a first pass, each item in the above list corresponds to a high-level operation in the original problem and defines a task group. If you were to dig deeper into the functions involved, you'd see that in each case the updates implied by the function are independent, except for the need to manage the sum of each task's contribution into the force array.

We next want to see if we can merge any of these groups. Going down the list, the tasks in the first two groups are independent but share the same constraints: in both cases, coordinates for a small neighborhood of atoms are read and local contributions are made to the force array, so we can merge these two groups into a single group for bonded interactions. The other groups have distinct temporal or ordering constraints and therefore can't be merged.
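To make the merged group concrete, here is a minimal sketch (in Python with NumPy; the atom count, neighborhood, and force expression are hypothetical placeholders, not the real molecular dynamics computation). Each task in the bonded-interactions group reads the coordinates of a small neighborhood of atoms and returns its local contributions; the one shared constraint -- the sum into the force array -- is then managed once for the whole group.

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    N_ATOMS = 1000
    coords = np.random.rand(N_ATOMS, 3)    # stand-in for real atomic coordinates
    forces = np.zeros((N_ATOMS, 3))

    def bonded_contribution(atom):
        # Placeholder for the vibrational and rotational force terms computed
        # from this atom's bonded neighbors; each task touches only local data.
        neighbors = [atom, (atom + 1) % N_ATOMS]
        return [(j, -coords[j]) for j in neighbors]

    with ThreadPoolExecutor() as pool:
        contributions = pool.map(bonded_contribution, range(N_ATOMS))

    # The shared constraint -- summing into the force array -- is handled here,
    # once for the entire group, rather than inside every task.
    for contribution in contributions:
        for j, f in contribution:
            forces[j] += f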