Parallel programming

concurrent programming

The branch of programming concerned with the study and development of methods and means for: a) adequately describing, by means of programs, the natural parallelism occurring in the systems and processes that computers model and control; and b) parallelizing the processing of information in multi-processor and multi-programming computers, with the aim of speeding up computations and making effective use of the computer's resources.

In contrast to sequential programming, whose basic concept is that of an algorithm realized as a step-wise sequence of actions in time, in parallel programming a program generates a family of concurrently executed information-processing processes, which are either completely independent or are related by static or dynamic space-time or cause-effect relations.

This computational parallelism takes various concrete forms, depending on the stage of programming, the complexity of the parallel fragments and the character of the relations between them.

In texts describing problems and programs one can distinguish levels of complexity of fragments for which the parallelization problem, i.e. the compilation of parallel programs, is solved differently: expressions with scalar data; expressions over arrays (vectors, matrices, trees, etc.), which can be described in algorithmic languages by loop operators; subproblems and subprograms; and independent problems and programs in multi-processor systems.

A prerequisite for an expression to be parallelizable is that the operations and functions occurring in it satisfy certain relations which induce an equivalence relation, with respect to results, on the set of expressions (e.g. for the arithmetic operations these are commutativity, associativity and distributivity). The parallelization problem is to construct, for a given expression $E$, an equivalent expression $E_1$ that can be performed in a small number of parallel steps, where a parallel step is a set of actions executed simultaneously on distinct processors. E.g., the expression $$ a + b + \frac{c \times d}{e \times f} + g $$

can be transformed into the equivalent expression $$ \left( \left( a + b \right) + g \right) + \left( \frac{c \times d}{e \times f} \right), $$

whose performance can be realized in 3 parallel steps. The ratio of the number of steps of the sequential performance to the number of parallel steps is called the acceleration of parallelization, or speed-up. Any arithmetic expression containing $n$ variables can be computed in parallel in $O(\log n)$ steps using $n$ processors. The relative simplicity of parallelization algorithms for expressions allows one to realize them automatically on a computer using special programs or hardware means.
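As an illustration, the following sketch makes the three-step schedule of the regrouped expression explicit. It is written in Python with the standard concurrent.futures module; the language, the thread pool and the grouping of operations into submitted tasks are choices of this illustration, not constructs of the article, and in CPython threads yield no real arithmetic speed-up. The point is only to show which operations belong to one parallel step.

```python
from concurrent.futures import ThreadPoolExecutor

# Evaluate a + b + (c*d)/(e*f) + g in three parallel steps, following
# the regrouping ((a + b) + g) + (c*d)/(e*f) described above.
def evaluate(a, b, c, d, e, f, g):
    with ThreadPoolExecutor() as pool:
        # Step 1: three mutually independent operations.
        s1 = pool.submit(lambda: a + b)
        s2 = pool.submit(lambda: c * d)
        s3 = pool.submit(lambda: e * f)
        ab, cd, ef = s1.result(), s2.result(), s3.result()

        # Step 2: two mutually independent operations.
        t1 = pool.submit(lambda: ab + g)
        t2 = pool.submit(lambda: cd / ef)
        abg, quot = t1.result(), t2.result()

        # Step 3: one final addition.
        return abg + quot

print(evaluate(1, 2, 3, 4, 5, 6, 7))   # (1 + 2 + 7) + (3*4)/(5*6) = 10.4
```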

The largest acceleration can be obtained by parallelizing the processing of arrays. In Algol- or Fortran-type algorithmic languages, expressions over arrays can be programmed by loop operators of the form FOR $I = A,B,C$ DO $S$, where $I$ is an integer loop parameter, $A$ is its initial value, $B$ is its final value, $C$ is the step of the parameter, and $S$ is the body of the loop, giving the action executed in one iteration. In order to parallelize a system of nested loops one considers the $n$-dimensional integer iteration space with coordinate axes $I_1,I_2,\ldots,I_n$. Execution of iteration $K_1$ of the loop with parameter $I_1$, $\ldots$, iteration $K_n$ of the loop with parameter $I_n$ corresponds to the point $(K_1,\ldots,K_n)$ of this space. One looks in this space for a family of surfaces satisfying the condition that all iterations $(K_1,\ldots,K_n)$ lying on any one of these surfaces can be executed in parallel. In order to represent the parallel processing of arrays by a program one needs special linguistic means. To this end, modifications of existing programming languages (basically, of Fortran) have been developed which introduce parallel loop operators, e.g. of the form $$ \text{FOR ALL } (I,J,K) \, / \, [1:N,1:L,1:M], $$

under whose execution the body of the loop is copied, by definite rules, into concurrently executed iterations. Such languages can also be endowed with richer means for describing arrays and with means for controlling their placement in memory, so as to ensure fast parallel access to the arrays. A further rise in the level of programming languages comes with group operations over arrays, such as component-wise multiplication and addition of vectors and matrices, scalar and vector products, inversion of matrices, etc. The use of such languages allows one to replace the automatic parallelization of sequential loops, which is practically realizable but relatively complicated, by the direct specification of parallel group operations.
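The two styles just described can be contrasted in a short sketch: a group operation over a whole array versus a parallel loop over the iteration space $[1:N,1:L,1:M]$. Python with numpy and concurrent.futures is used purely for illustration; the arrays $A$, $B$ and the loop body are hypothetical examples, and the indices below run from 0 rather than from 1.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import product

N, L, M = 4, 3, 2

# Group operations over arrays: the whole computation is a single array
# expression, with no explicit loop at all.
A = np.arange(N * L * M, dtype=float).reshape(N, L, M)
B = 2.0 * np.ones_like(A)
C = A * B + 1.0                      # component-wise multiply and add

# The same computation as a parallel loop over the iteration space: the
# iterations are independent, so the body may be copied into
# concurrently executed tasks.
def body(idx):
    i, j, k = idx
    return idx, A[i, j, k] * B[i, j, k] + 1.0

D = np.empty_like(A)
with ThreadPoolExecutor() as pool:
    for idx, value in pool.map(body, product(range(N), range(L), range(M))):
        D[idx] = value

print(np.allclose(C, D))             # True: both forms give the same array
```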

Parallel expressions may be executed asynchronously or synchronously. In the first case the relation between the execution times of parallel operations is not fixed, whereas in the second case these times must conform to the rigid schedule given by the description. If the operations have fixed duration and the number of processors participating at any moment of the execution is known, then it is expedient to use a synchronous method of computation; in the opposite case an asynchronous method is more flexible. The control of the asynchronous execution of an expression is based on the flow (data-flow) principle: an operation can be performed at any moment of time after its operands have been prepared.
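The flow principle can be sketched as follows, again in Python with concurrent.futures (an illustrative choice, not the article's notation): every node of the expression above is submitted at once, and each one fires as soon as the futures holding its operands are resolved, so no fixed step-by-step schedule is imposed.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-flow evaluation of ((a + b) + g) + (c*d)/(e*f): every node is
# submitted immediately; its body blocks until the futures holding its
# operands are resolved, so each operation fires as soon as its operands
# have been prepared.
with ThreadPoolExecutor(max_workers=8) as pool:

    def node(op, *operands):
        return pool.submit(lambda: op(*(fut.result() for fut in operands)))

    # Leaves: constants wrapped as immediately available operands.
    a, b, c, d, e, f, g = (pool.submit(lambda v=v: v) for v in (1, 2, 3, 4, 5, 6, 7))

    ab   = node(lambda x, y: x + y, a, b)
    cd   = node(lambda x, y: x * y, c, d)
    ef   = node(lambda x, y: x * y, e, f)
    abg  = node(lambda x, y: x + y, ab, g)
    quot = node(lambda x, y: x / y, cd, ef)
    root = node(lambda x, y: x + y, abg, quot)

    print(root.result())   # 10.4, whatever the relative timing of the nodes
```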

Parallelization at the level of subproblems and subprograms is substantially more difficult. Here the parallel processes can have a complicated internal structure; the duration of their execution is not fixed; and the processes interact with each other, exchange data and access common resources (common data and programs in memory, external devices, etc.). At this level, automatic parallelization requires complex algorithms for the analysis of problems and has to take into account dynamic situations in the system. For this reason, the creation of parallel programming languages that allow the programmer to describe the complex interactions of parallel processes directly acquires special significance.

In the majority of parallel programming languages and systems a partially asynchronous organization of the computations is adopted. Parallel programming languages provide means for creating (generating) parallel processes and means for synchronizing them, which are supported in the hardware by interrupt mechanisms: the forced halting of processes with preservation of their current states, and the subsequent activation or resumption of other processes.

The simplest and most widespread program synchronization mechanisms are semaphores and events. A semaphore is a special control variable taking integer values; it is usually associated with a resource subject to conflicts. Only two operations, $P$ and $V$, are applicable to a semaphore. If a process, in the course of its execution, reaches the operation $P(s)$, where $s$ is a semaphore, then it may continue, with the value of $s$ decreased by 1, if and only if the value of $s$ is positive; otherwise its execution halts and it takes its place in the queue $q(s)$ of processes waiting for the corresponding resource. The operation $V(s)$ increases the value of $s$ by 1 and resumes the first process in the queue $q(s)$. The semaphore mechanism is extensively used in languages for the control of processes in operating systems and in a number of general-purpose programming languages (e.g. Algol-68). The mechanism of events involves control variables whose current values record the occurrence of certain program or system events (termination of processes, halting, etc.), together with special operators for awaiting events.
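In Python's threading module, used here only as an illustration, the operations $P$ and $V$ correspond to acquire and release on a Semaphore object. The sketch below models a pool of two identical resources shared by five processes; note that the order in which waiting threads are resumed is whatever the runtime provides, not necessarily the strict queue order described above.

```python
import threading, time

# A semaphore s initialized to 2 models a pool of two identical resources.
# P(s) = s.acquire(): continue, decreasing s by 1, only if s is positive,
#                     otherwise join the queue of waiting processes.
# V(s) = s.release(): increase s by 1 and wake a waiting process.
s = threading.Semaphore(2)

def process(name):
    s.acquire()                          # P(s)
    try:
        print(name, "holds a resource")
        time.sleep(0.1)                  # work with the resource
    finally:
        s.release()                      # V(s)

threads = [threading.Thread(target=process, args=(f"p{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```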

Semaphores and events are universal synchronization means; however, they are very primitive, and incorrect use of them may lead to deadlock situations such as the mutual blocking of processes (e.g. two processes each requiring two resources for their continuation, while each holds one resource that the other needs). The attempt to increase the reliability of programming has led to the appearance of more complex synchronization mechanisms: "post boxes", special structures for the exchange of messages with fixed rules for the parallel processes operating on them; "monitors", collections of procedures and data which processes may enter only one at a time and which contain the rules, given by the programmer, for organizing interactions; channels for message-passing in distributed systems; etc.
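A "post box" can be illustrated by a bounded queue through which two processes exchange messages instead of sharing variables directly; in the sketch below Python's queue.Queue stands in for the post box, and the blocking behaviour of put and get plays the role of the fixed rules of operation mentioned above.

```python
import threading, queue

# A "post box": a bounded buffer through which parallel processes
# exchange messages rather than sharing variables directly.
postbox = queue.Queue(maxsize=4)

def producer():
    for i in range(10):
        postbox.put(f"message {i}")   # blocks while the box is full
    postbox.put(None)                 # sentinel: no more messages

def consumer():
    while True:
        msg = postbox.get()           # blocks while the box is empty
        if msg is None:
            break
        print("received", msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```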

Purely asynchronous parallel programming is used for organizing computations in distributed computational systems in which resource conflicts have been completely excluded. The attempt to simplify the organization of interactions between processes with common resources draws attention to asynchronous methods of computation in which unrestricted access of the parallel processes to common resources is allowed. For example, asynchronous algorithms have been developed in which parallel processes change data in a common memory, and where the unrestricted access to the memory does not prevent a unique result from being obtained.
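One classical algorithm of this kind is asynchronous (chaotic) relaxation. In the sketch below (Python; the $3 \times 3$ strictly diagonally dominant system and the thread layout are an arbitrary illustration), three threads repeatedly recompute the components of a shared vector while reading one another's components without any locking; the diagonal dominance guarantees that, whichever old or new values a thread happens to read, the iteration converges to the unique solution.

```python
import threading
import numpy as np

# Asynchronous relaxation for A x = b with A strictly diagonally dominant.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 5.0])
x = [0.0, 0.0, 0.0]          # common memory, read and written without locks

def relax(i):
    # Recompute component i from whatever values of the other components
    # happen to be in the common memory at the moment of reading.
    others = sum(A[i, j] * x[j] for j in range(3) if j != i)
    x[i] = (b[i] - others) / A[i, i]

# The outer rounds only ensure that every component keeps being updated;
# within a round the three threads access x concurrently and without locks.
for _ in range(100):
    threads = [threading.Thread(target=relax, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

print(x)                         # close to [1.0, 1.0, 1.0] ...
print(np.linalg.solve(A, b))     # ... which is the exact solution
```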

In the theory of parallel programming, formal models of parallel programs, processes and systems have been developed, and with their use various aspects of parallel programming have been studied: the automatic parallelization of sequential algorithms and programs; the elaboration of parallel methods of computation for various classes of problems and various classes of parallel computational systems; the optimal planning of parallel computations and the distribution of resources among parallel processes; and the formal description of the semantics of programs and of parallel programming languages. Among such models are: schemes of parallel programs, reflecting to various degrees of detail the structural properties of programs; graph models and asynchronous nets (cf. Petri net), which are mathematical models of discrete processes and systems; process algebras and calculi; etc.
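To make the last of these models concrete, the sketch below plays the "token game" of a small Petri net (the producer/consumer net and its place names are an arbitrary illustration): a marking assigns tokens to places, a transition is enabled when every one of its input places holds a token, and firing a transition removes a token from each input place and adds one to each output place.

```python
from random import choice

# Transitions of a small producer/consumer net: (name, input places, output places).
transitions = [
    ("produce", ["ready_to_produce"], ["item_in_buffer", "ready_to_produce"]),
    ("consume", ["item_in_buffer", "ready_to_consume"], ["ready_to_consume"]),
]
marking = {"ready_to_produce": 1, "ready_to_consume": 1, "item_in_buffer": 0}

def enabled(t):
    _, inputs, _ = t
    return all(marking[p] > 0 for p in inputs)

def fire(t):
    name, inputs, outputs = t
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1
    print("fired", name, "->", marking)

# Play the token game for a few steps, choosing freely among the
# transitions that are enabled in the current marking.
for _ in range(6):
    candidates = [t for t in transitions if enabled(t)]
    if not candidates:
        break
    fire(choice(candidates))
```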

This article was adapted from an original article by V.E. Kotov (originator), which appeared in the Encyclopedia of Mathematics (ISBN 1402006098).