# Linear programming

2010 Mathematics Subject Classification: Primary: 90C05 [MSN][ZBL]

The mathematical discipline devoted to the theory and methods of solving problems about extrema of linear functions on sets of an $n$-dimensional vector space specified by systems of linear inequalities and equations; linear programming is one of the branches of mathematical programming. A typical linear programming problem is the following: Find the maximum of a linear function $$\label{eq:1} \sum_{j=1}^{n}c_j x_j$$ under the conditions $$\label{eq:2} \sum_{j=1}^{n}a_{ij} x_j\leq b_i,\quad i=1,\ldots,m,$$ $$\label{eq:3} x_j\geq 0,\quad j=1,\ldots,n,$$ where $c_j$, $a_{ij}$ and $b_i$ are given numbers.
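The problem \eqref{eq:1}–\eqref{eq:3} can be solved numerically for a small instance. A minimal sketch, assuming SciPy is available and using invented data with $n=2$ variables and $m=3$ constraints (`scipy.optimize.linprog` minimizes, so the objective is negated to maximize):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])                 # objective coefficients c_j (hypothetical data)
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])               # constraint coefficients a_ij
b = np.array([4.0, 12.0, 18.0])          # right-hand sides b_i

# bounds default to x_j >= 0, which matches condition (3)
res = linprog(-c, A_ub=A, b_ub=b)
print(res.x, -res.fun)                   # optimal plan x* = (2, 6), maximal value 36
```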

Linear programming problems also occur as subsidiary problems in many methods for solving non-linear mathematical programming problems. Thus, in the method of possible (feasible) directions (see Mathematical programming), finding a direction of descent at each iteration requires solving a corresponding linear programming problem.
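Such a direction-finding subproblem can be sketched as follows, assuming SciPy; the gradient and the active constraint rows are hypothetical. One seeks a direction $d$ minimizing $\nabla f \cdot d$ subject to the active constraints and a box normalization $|d_j|\leq 1$:

```python
import numpy as np
from scipy.optimize import linprog

grad = np.array([2.0, -1.0])         # gradient of the non-linear objective at the current point (hypothetical)
A_active = np.array([[1.0, 1.0]])    # rows a_i of the constraints active at the current point (hypothetical)

# Find d minimizing grad . d subject to a_i . d <= 0 for active i and |d_j| <= 1.
res = linprog(grad, A_ub=A_active, b_ub=np.zeros(len(A_active)),
              bounds=[(-1.0, 1.0)] * len(grad))
d = res.x
print(d, grad @ d)                   # a feasible descent direction whenever grad . d < 0
```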

The meaning of the term "linear programming" is that in linear programming one solves problems of forming an optimal program (plan) of action. In this connection linear programming can be regarded as one of the mathematical models in operations research.

In linear programming problems of a more general form than \eqref{eq:1}–\eqref{eq:3}, some (or all) of the conditions \eqref{eq:2} may be equations, and on some (or all) the variables $x_j$ one does not impose the condition of being non-negative. Any linear programming problem can be reduced to an equivalent problem of the form \eqref{eq:1}–\eqref{eq:3}.

## Duality

The basis for studying properties of linear programming problems is the theory of duality. The dual problem to the problem \eqref{eq:1}–\eqref{eq:3} is that of minimizing the function $$\label{eq:4} \sum_{i=1}^{m}b_i y_i$$ under the conditions $$\label{eq:5} \sum_{i=1}^{m}a_{ij} y_i\geq c_j,\quad j=1,\ldots,n,$$ $$\label{eq:6} y_i\geq 0,\quad i=1,\ldots,m.$$ The problems \eqref{eq:1}–\eqref{eq:3} and \eqref{eq:4}–\eqref{eq:6} either both have solutions or both are unsolvable. The values of the objective functions \eqref{eq:1} and \eqref{eq:4} coincide at the solutions.
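Strong duality can be checked numerically on a small instance. A sketch assuming SciPy, with hypothetical data: the primal \eqref{eq:1}–\eqref{eq:3} and the dual \eqref{eq:4}–\eqref{eq:6} are solved independently and their optimal values compared.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])                       # hypothetical data
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b)           # maximize c.x  ->  minimize -c.x
dual = linprog(b, A_ub=-A.T, b_ub=-c)          # minimize b.y  s.t.  A^T y >= c, y >= 0

print(-primal.fun, dual.fun)                   # the two optimal values coincide
```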

## Algorithms

One of the main methods for solving linear programming problems is the simplex method. Geometrically the idea is the following. The feasible set defined by \eqref{eq:2} and \eqref{eq:3} is a convex polyhedral set (if it is bounded, it is a convex multi-dimensional polyhedron). If the linear programming problem has a solution, then there is a vertex $x^*$ of the polyhedral set that is an optimal plan. The simplex method moves from vertex to vertex in such a way that the value of the objective function increases at each step. To each vertex corresponds a system of equations, chosen in a special way from the system of inequalities \eqref{eq:2}, \eqref{eq:3}, so the computational procedure of the simplex method consists of the successive solution of systems of linear algebraic equations. The simplicity of the algorithm makes this method convenient for implementation on a computer. A whole series of other finite methods has also been developed in linear programming, for example the dual simplex method, in which the solution of the primal problem \eqref{eq:1}–\eqref{eq:3} is obtained by applying the simplex method to the dual problem \eqref{eq:4}–\eqref{eq:6}.
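The vertex-to-vertex procedure can be illustrated by a minimal tableau simplex for $\max c\cdot x$ subject to $Ax\leq b$, $x\geq 0$ with $b\geq 0$. This is an illustrative sketch only (it assumes the origin is feasible, so no phase-one step is needed, and ignores cycling); the data are hypothetical.

```python
import numpy as np

def simplex(c, A, b):
    m, n = A.shape
    # Tableau: [A | I | b] with objective row [-c | 0 | 0] appended.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[m, :n] = -c
    basis = list(range(n, n + m))            # slack variables start in the basis
    while True:
        j = int(np.argmin(T[m, :-1]))        # entering column (most negative reduced cost)
        if T[m, j] >= -1e-12:
            break                            # optimal: no improving direction remains
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf for i in range(m)]
        r = int(np.argmin(ratios))
        if ratios[r] == np.inf:
            raise ValueError("problem is unbounded")
        T[r] /= T[r, j]                      # pivot: make column j a unit vector
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, j] * T[r]
        basis[r] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[m, -1]                   # optimal plan and optimal value

x, val = simplex(np.array([3.0, 5.0]),
                 np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                 np.array([4.0, 12.0, 18.0]))
print(x, val)                                # optimal plan (2, 6), value 36
```

Each pivot is exactly the "successive solution of systems of linear equations" described above: the basis columns are kept as an identity block by row operations.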

In modelling real phenomena, linear programming problems of large dimension often arise. What counts as "large" depends on the capabilities of the computational algorithms and on the characteristics of the computer employed, such as the amount of memory, the speed, etc. To solve problems of large dimension, methods have been created that take account of the specific structure of the constraint matrix. At the heart of many of them lies the idea of decomposing the system of restrictions into subsystems, for each of which one solves a linear programming subproblem of lower dimension than the original problem.

To solve linear programming problems, along with finite methods one also uses iterative methods, which construct a sequence of approximations whose limit is an optimal plan. These include methods based on penalty functions, as well as iterative methods of game theory, whose application rests on the equivalence of any matrix game to a corresponding pair of dual linear programming problems, and a number of other methods.
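A penalty-function approach can be sketched as follows, assuming SciPy; the data and the penalty weight are hypothetical. The constrained maximum of \eqref{eq:1}–\eqref{eq:3} is approximated by an unconstrained minimization with a quadratic penalty on constraint violations, which sharpens as the weight grows:

```python
import numpy as np
from scipy.optimize import minimize

c = np.array([3.0, 5.0])                 # hypothetical data
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
mu = 1e4                                 # penalty weight; the error decreases as mu grows

def penalized(x):
    # -c.x plus squared violations of Ax <= b and of x >= 0
    return -c @ x + mu * (np.sum(np.maximum(A @ x - b, 0.0) ** 2)
                          + np.sum(np.maximum(-x, 0.0) ** 2))

res = minimize(penalized, x0=np.zeros(2))
print(res.x)                             # close to the constrained optimum (2, 6)
```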

An important place in linear programming is taken by the problem of stability. In real problems (especially those of technical-economic content) the original information is usually known only to a certain degree of accuracy, and even small perturbations (errors) in the original data may cause substantial deviations of the perturbed solution from the true solution. In the numerical implementation of some finite methods rounding errors arise, and the accumulation of these, particularly in problems of large dimension, may lead to significant deviations of the resulting approximate solution from the true solution. Both these situations characterize the property of instability and are inherent in ill-posed problems. Methods for solving ill-posed linear programming problems have been developed. Thus, in the method of regularization, using additional information about the solution, one approximates the original problem by a parametric sequence of stable problems in such a way that the sequence of their solutions converges to the solution of the original problem.
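The instability phenomenon can be demonstrated directly, assuming SciPy and hypothetical data: when the objective is constant along an entire edge of the feasible set, a small perturbation of $c$ moves the optimal vertex a fixed distance, however small the perturbation.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
eps = 1e-3                                           # small perturbation of the objective

# maximize (1+eps)x + y  vs.  maximize x + (1+eps)y  on  x + y <= 1, x, y >= 0
x1 = linprog([-(1 + eps), -1.0], A_ub=A, b_ub=b).x   # unique optimum (1, 0)
x2 = linprog([-1.0, -(1 + eps)], A_ub=A, b_ub=b).x   # unique optimum (0, 1)
print(x1, x2, np.linalg.norm(x1 - x2))               # the solutions differ by sqrt(2)
```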

Along with finite-dimensional linear programming problems one also considers linear programming problems in infinite-dimensional spaces. For example, linear problems of optimal control with mixed constraints are related to these problems.

## Example

Linear programming problems are mathematical models of numerous problems of a technical-economic content. An example is the following problem of planning the work of a business firm. In the production of homogeneous products it is necessary to use various production factors: raw material, work force, machinery, fuel, transport, etc. Usually there are several technical modes of production that have been worked out, and the expenditures of production factors in a unit of time for the production of articles differ between these modes. The number of production factors expended and the number of manufactured articles depend on the time devoted to working in one or another technical mode. The problem is to make a rational distribution of the time spent working in the various technical modes, that is, the distribution under which the maximum number of articles is produced for a given limited expenditure of each production factor.

The problem is formalized as follows. Suppose there are $n$ technical modes for producing articles and $m$ production factors. Let $c_j$ be the number of articles produced in a unit of time by the $j$-th technical mode, let $a_{ij}$ be the expenditure of the $i$-th production factor in a unit of time while working in the $j$-th technical mode, let $b_i$ be the resources of the $i$-th production factor, and let $x_j$ be the planned time of work in the $j$-th technical mode. The quantity $$\sum_{j=1}^{n}a_{ij} x_j$$ denotes the total expenditure of the $i$-th production factor under the plan $x=(x_1,\ldots,x_n)$. Since the resources are restricted by the quantities $b_i$, the natural conditions \eqref{eq:2} and \eqref{eq:3} arise. The problem is thus to find a distribution of time (an optimal plan) $x^*=(x_1^*,\ldots,x_n^*)$ over the technical modes for which the total volume of production $\sum_{j=1}^{n}c_j x_j$ is maximal, that is, the problem \eqref{eq:1}–\eqref{eq:3}.
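A hypothetical numerical instance of this planning problem, assuming SciPy, with $n=3$ technical modes and $m=2$ production factors (all data invented for illustration); $x_j$ is the time spent in mode $j$:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0, 5.0])        # articles produced per unit time in each mode
A = np.array([[1.0, 2.0, 3.0],       # expenditure of factor i per unit time in mode j
              [3.0, 2.0, 1.0]])
b = np.array([30.0, 40.0])           # available resources of each factor

res = linprog(-c, A_ub=A, b_ub=b)    # maximize the total output c.x
print(res.x, -res.fun)               # optimal time distribution and total output
```

Here the optimal plan spends no time in the second mode: its output per unit of resources is dominated by a combination of the other two modes.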
Another characteristic example of applied linear programming problems is the transport problem.

The founding fathers of linear programming are the Soviet mathematician L.V. Kantorovich, who received a Nobel Prize in economics for his work in the area, and the American mathematician G.B. Dantzig.

The computational complexity of the linear programming problem was an open question for many years. All variants of the simplex method that have been investigated require exponential time in the worst case; see, e.g., [a5]. In 1979, L.G. Khachiyan [a4] settled the question by proposing a polynomial-time algorithm for linear programming; this ellipsoid method was originally developed for non-linear programming by N.Z. Shor, and by D.B. Yudin and A.S. Nemirovskii. A number of other polynomial-time algorithms for linear programming followed. The most notable of these is the projective method of N. Karmarkar [a3], which may be practically competitive with the simplex method. It remains an open question whether the linear programming problem is solvable by a strongly polynomial algorithm, one whose running time is bounded by a polynomial in the dimensions $n$ and $m$ alone and does not depend on the size of the numbers $c_j$, $a_{ij}$ and $b_i$. N. Megiddo [a6] designed an algorithm that, for fixed $n$, runs in linear time.

These new methods motivated further investigations into the behavior of the simplex algorithm. K.H. Borgwardt [a1] and others [a8] have shown that, in certain natural probabilistic models, several variants of the simplex method run in polynomial expected time.