
Linear algebraic equation

From Encyclopedia of Mathematics
Revision as of 20:58, 17 November 2011 by Ulf Rehmann (talk | contribs)

An algebraic equation of the first degree in all unknowns, that is, an equation of the form $$a_1 x_1+\cdots +a_nx_n = b.$$ Every system of linear algebraic equations can be written in the form \begin{equation*} \left.\begin{aligned} a_{11} x_1+\cdots +a_{1n}x_n = b_1\\ \cdots\cdots\qquad \\ a_{m1} x_1+\cdots +a_{mn}x_n = b_m \end{aligned}\right\}\tag{1} \end{equation*} where $m$ and $n$ are natural numbers; the $a_{ij}$ ($i=1,\dots,m$; $j=1,\dots,n$) are called the coefficients of the unknowns and are given; the $b_i$ ($i=1,\dots,m$) are called the free terms and are also given; the $x_i$ ($i=1,\dots,n$) are called the unknowns and are to be found. A solution of the system of linear algebraic equations (1) is a set of values $c_1,\dots,c_n$ such that each equation of the system becomes an identity when the $c_i$ are substituted for the corresponding unknowns. For applications the most important case is that in which the coefficients of the unknowns, the free terms and the values of the unknowns are numbers (complex, real or integer), but one can also consider the case when they belong to an arbitrary field $P$.
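As a small numeric illustration (the example system and the use of NumPy are mine, not part of the original article), here is a system of the form (1) with $m=n=2$ and its unique solution:

```python
import numpy as np

# A small instance of system (1) with m = n = 2:
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficients a_ij
b = np.array([5.0, 10.0])    # free terms b_i

# det(A) = 2*3 - 1*1 = 5 != 0, so the system is determinate
x = np.linalg.solve(A, b)
print(x)  # [1. 3.]
```

Substituting $c_1=1$, $c_2=3$ turns both equations into identities, so this pair of values is a solution in the sense defined above.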

According to the number of solutions, systems of linear algebraic equations split into the following types:

a compatible system is a system of linear equations having at least one solution;

an incompatible (or contradictory) system is a system having no solution;

a determinate system is a system having a unique solution;

an indeterminate system is a system having more than one solution.

If one considers solutions of a system with values of the unknowns in a given number field (or in an arbitrary infinite field), then every indeterminate system of linear equations has infinitely many solutions. In contrast to equations of degree exceeding one, the type of a system of linear algebraic equations does not change when the given field $P$ is extended. Thus, under a field extension an incompatible system cannot become compatible, and a determinate system cannot become indeterminate. However, the set of solutions of an indeterminate system does grow under such an extension.

The simplest way of determining the type of the system (1) and calculating its solutions is given by the Gauss method of eliminating the unknowns. In the case $n=m$ the system (1) is determinate if and only if the determinant formed by the coefficients of the unknowns is non-zero. In this case the unique solution of the system is found by Cramer's formulas (see Cramer rule).
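A direct rendering of Cramer's formulas for the case $m=n$ with non-zero determinant might look as follows (a sketch of mine, not the article's; the example numbers are hypothetical):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve a determinate square system by Cramer's formulas:
    x_i = det(A_i) / det(A), where A_i is A with its i-th column
    replaced by the column of free terms b."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("determinant is zero: the system is not determinate")
    n = len(b)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer_solve(A, b))  # agrees with np.linalg.solve(A, b)
```

In practice Gaussian elimination (as used internally by `np.linalg.solve`) is far cheaper than evaluating $n+1$ determinants, but Cramer's formulas are convenient when the coefficients contain parameters.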

To solve a system of linear equations whose coefficients contain parameters, instead of Gauss' method it is more convenient to use the general theory of linear equations, associated with the rank of a matrix. The rank of a matrix can be defined as the maximal number of linearly independent rows or columns. By the theorem on the rank of a matrix, the rank of the system of rows of a matrix is equal to the rank of the system of columns and is also equal to the highest order of the non-zero minors of the matrix (cf. Minor). With the system of linear equations (1) one associates two matrices: the matrix $$A=(a_{ij}),$$ formed by the coefficients of the unknowns, and the extended matrix $$B=(a_{ij},b_i),$$ obtained from $A$ by adjoining the column of free terms. A criterion for the system (1) to be compatible is given by the Kronecker–Capelli theorem: A system of linear equations (1) is compatible if and only if the rank of $A$ is equal to the rank of $B$.
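The Kronecker–Capelli criterion, together with the uniqueness condition below, classifies any system by comparing ranks. A minimal sketch (the function name and examples are mine):

```python
import numpy as np

def classify(A, b):
    """Classify system (1) by the Kronecker-Capelli theorem:
    compatible iff rank(A) == rank(B), where B = (A | b);
    determinate iff in addition the common rank equals n."""
    rank_A = np.linalg.matrix_rank(A)
    rank_B = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if rank_A != rank_B:
        return "incompatible"
    return "determinate" if rank_A == n else "indeterminate"

# x1 + x2 = 1 together with x1 + x2 = 2 has no solution:
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([1.0, 2.0])))
# -> incompatible
```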

A system of linear equations (1) has a unique solution if and only if the rank of $A$ is equal to the rank of $B$ and equal to $n$.

The unknowns of a compatible system of linear equations subdivide into principal and free unknowns. For any values of the free unknowns there are uniquely determined values of the principal unknowns which, together with them, give a solution of the given system. The subdivision into principal and free unknowns is usually not unique. Namely, if the rank of $A$ is equal to the rank of $B$ and equal to $r$, then any $r$ unknowns from whose coefficients one can form a determinant $D\ne 0$ can be regarded as principal, and the others as free. In this case the determinant $D$ is called a principal (or basic) minor of the system. It can be found by the bordering method, beginning with minors of lowest orders, and it too is not always uniquely determined. In the calculation of solutions one needs to take only the $r$ equations containing the principal minor $D$ and to express the principal unknowns in general form (for example, by Cramer's formulas) in terms of the free unknowns. These expressions are called the general solution; the free unknowns play the role of free parameters in it. Giving them arbitrary values, one finds values of the principal unknowns that, together with the chosen values of the free unknowns, give a solution of the system. Any solution of a compatible system can be obtained in this way for suitable values of the free unknowns. If all coefficients of the unknowns and the free terms belong to a field $P$, then by choosing for the free unknowns values from $P$ one obtains a solution with the values of all unknowns in $P$. For $r=n$ all unknowns are principal and there is no general solution (the unique solution has no free parameters).
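As a hedged illustration (the example system and the use of SymPy are mine, not the article's), SymPy's `linsolve` returns exactly such a general solution, with the free unknowns appearing as parameters:

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3 = symbols('x1 x2 x3')

# The rank of A (and of the extended matrix) is 1 and n = 3, so r = 1:
# one principal unknown (here x1), two free unknowns (x2, x3).
A = Matrix([[1, 2, -1],
            [2, 4, -2]])   # the second row is twice the first
b = Matrix([4, 8])

general = linsolve((A, b), x1, x2, x3)
# x1 is expressed through the free unknowns x2 and x3
print(general)
```

Substituting any values for the free symbols `x2`, `x3` in the printed tuple yields a concrete solution of the system, as the text describes.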

The system of linear equations \begin{equation*} \left.\begin{aligned} a_{11} x_1+\cdots +a_{1n}x_n = 0\\ \cdots\cdots\qquad \\ a_{m1} x_1+\cdots +a_{mn}x_n = 0 \end{aligned}\right\}\tag{2} \end{equation*} obtained from (1) by replacing the free terms by zeros, is called the homogeneous system of linear equations corresponding to (1). The system (2) is always compatible (since the zero solution satisfies it). For it to have a non-zero solution it is necessary and sufficient that the rank of its matrix $A$ be less than the number of unknowns $n$. In particular, for $m=n$ a square homogeneous system of linear algebraic equations has a non-zero solution if and only if its determinant is equal to zero.
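The square case can be checked directly: zero determinant forces a non-zero solution. A sketch with SymPy (the example matrix is mine):

```python
from sympy import Matrix

# Square homogeneous system (2) with m = n = 2:
#   x1 + 2*x2 = 0
#   2*x1 + 4*x2 = 0
A = Matrix([[1, 2],
            [2, 4]])

print(A.det())        # 0, so a non-zero solution must exist
basis = A.nullspace() # n - r = 2 - 1 = 1 basis vector of solutions
print(basis)
```

The basis returned by `nullspace` is precisely a fundamental system of solutions of (2) in the terminology used later in this article.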

The solutions of a compatible system of linear algebraic equations (1) and the corresponding homogeneous system (2) are connected as follows: The sum of a solution of (1) and a solution of (2) is a solution of (1); the difference between two solutions of (1) is a solution of (2). All solutions of (1) can be obtained by adding to every solution of (2) the same particular solution of (1).
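These three relations can be verified on a toy example (all numbers below are mine, chosen only for illustration):

```python
import numpy as np

# One equation in three unknowns: x1 + 2*x2 - x3 = 4.
A = np.array([[1.0, 2.0, -1.0]])
b = np.array([4.0])

u = np.array([4.0, 0.0, 0.0])   # a solution of (1)
v = np.array([0.0, 2.0, 0.0])   # another solution of (1)
h = np.array([-2.0, 1.0, 0.0])  # a solution of the homogeneous system (2)

print(A @ (u + h))  # equals b: a solution of (1) plus one of (2) solves (1)
print(A @ (u - v))  # equals 0: the difference of two solutions of (1) solves (2)
```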

==Geometric interpretation of solutions of a system of linear equations.==

Any row of $n$ elements of a field $P$ can be regarded as the row of coordinates of a vector $x$ of an $n$-dimensional vector space $V$ over $P$ in some fixed basis. For brevity the vector $x$ is identified with the row of its coordinates. All solutions of the homogeneous system (2) form a subspace $U$ of $V$. Its dimension is equal to $n-r$, where $n$ is the number of unknowns and $r$ is the rank of the matrix $A$ of the system. If $r<n$, the subspace $U$ is non-zero and a basis of it is also called a fundamental system of solutions of the system of linear equations (2). Conversely, for every subspace $U$ of $V$ there is a homogeneous system of linear equations whose solutions form $U$. The set of vectors $Z=U+x_0$ obtained by adding to each vector of $U$ the same vector $x_0$ is called a linear variety (or linear manifold) of $V$. In case $\dim U = n-1$ the phrase hyperplane is also used. All solutions of a compatible system of linear equations (1) form a linear variety $X$; conversely, for any linear variety $Z$ there is a compatible system whose solutions form the given variety (see [3]).

==Solution of systems of linear equations in integers.==

Suppose a system of equations (1) is given in which all $a_{ij}$ and $b_i$ are integers. Let $d_k$, $k=1,\dots,\min(m,n)$, denote the greatest common divisor of all minors of order $k$ of the matrix $A$ formed by the coefficients of the unknowns, and let $d_k'$, $k=1,\dots,\min(m,n+1)$, denote the analogous number for the extended matrix $B$. If all minors of order $k$ in $A$ (or $B$) are zero, one puts $d_k=0$ (respectively, $d_k'=0$). For the system (1) to have an integer solution it is necessary and sufficient that $d_k'$ be divisible by $d_k$ for $k=1,\dots,\min(m,n)$, and that $d_{n+1}' = 0$ for $m>n$.
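The numbers $d_k$ can be computed by brute force over all order-$k$ minors. A sketch (the helper `d` and the example equations are mine), contrasting a solvable and an unsolvable one-equation system:

```python
from itertools import combinations
from math import gcd
from sympy import Matrix

def d(M, k):
    """gcd of all minors of order k of an integer matrix M (0 if all vanish)."""
    m, n = M.shape
    g = 0
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            g = gcd(g, abs(int(M[list(rows), list(cols)].det())))
    return g

# x1 + 3*x2 = 5: d_1 = gcd(1, 3) = 1 divides d_1' = gcd(1, 3, 5) = 1,
# so an integer solution exists (e.g. x1 = 5, x2 = 0).
A1, B1 = Matrix([[1, 3]]), Matrix([[1, 3, 5]])
print(d(A1, 1), d(B1, 1))

# 2*x1 + 4*x2 = 3: d_1 = 2 does not divide d_1' = gcd(2, 4, 3) = 1,
# so there is no integer solution (the left side is always even).
A2, B2 = Matrix([[2, 4]]), Matrix([[2, 4, 3]])
print(d(A2, 1), d(B2, 1))
```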

To state a method of calculating all integer solutions of a system of linear equations one introduces the so-called elementary transformations of integer matrices: 1) adding to the $i$-th row the $j$-th row ($i\ne j$) multiplied by an integer $c$; 2) multiplying the $i$-th row by $-1$; 3) interchanging the $i$-th and $j$-th rows; and similar transformations of columns. Under elementary transformations of the rows the system (1) is transformed into an equivalent system, and hence the set of integer solutions is unchanged. Among the elementary transformations of the columns of the matrix $A$ formed by the coefficients of the unknowns, the following transformations of unknowns occur: If $y_1,\dots,y_n$ are new unknowns, then under a transformation 1), $$x_k = y_k\quad (k\ne j),\quad x_j = y_j + cy_i,$$ under a transformation 2), $$x_k = y_k\quad (k\ne i),\quad x_i = -y_i,$$ under a transformation 3),

$$x_k = y_k\quad (k\ne i,j),\quad x_i = y_j,\quad x_j=y_i.$$ Under such transformations of the unknowns, integer solutions correspond to integer solutions.

A system (1) for which the rank of $A$ is equal to the rank of $B$ and equal to $r$ can be reduced, by elementary transformations of the rows of $B$ and the columns of $A$, and also by discarding zero equations, to the canonical form $$e_1 y_1 = b_1',\dots, e_r y_r = b_r'.$$ The numbers $e_i$ satisfy the following additional conditions: $$e_i > 0, \quad i=1,\dots,r,$$ and $$e_i {\rm\ divides\ } e_{i+1},\quad i=1,\dots,r-1.$$ These additional conditions are not essential for the problem of solving the system in integers.

For integer solutions of the canonical system to exist it is necessary and sufficient that the numbers $$c_i = \frac{b_i'}{e_i},\quad i=1,\dots,r,$$ be integers. The unknowns $y_1,\dots,y_r$ are uniquely determined, and for $r<n$ the unknowns $y_{r+1},\dots,y_n$ can take any integer values.

To calculate the solutions of the original system (1) one must apply all transformations of columns of $A$ in the same order to the unit matrix $E$ of order $n$. The resulting integer matrix $Q$ gives the relation between the old and new unknowns: $$X=QY,$$ where $$X=\begin{pmatrix} x_1\\ \vdots\\ x_n\end{pmatrix},\quad Y=\begin{pmatrix} y_1\\ \vdots\\ y_n\end{pmatrix}.$$ One must then put $y_i=c_i$, $i=1,\dots,r$, and for $r<n$ one can assign any integer values to the unknowns $y_{r+1},\dots,y_n$, which play the role of parameters.

This method of solving (1) over the ring of integers can be generalized to any Euclidean ring or principal ideal ring.

The investigation of integer solutions and systems in the general case is the subject of the theory of Diophantine equations.

References

[1] A.G. Kurosh, "Higher algebra" , MIR (1972) (Translated from Russian)
[2] A.I. Kostrikin, "Introduction to algebra" , Springer (1982) (Translated from Russian)
[3] A.P. Mishina, I.V. Proskuryakov, "Higher algebra. Linear algebra, polynomials, general algebra" , Pergamon (1965) (Translated from Russian)


How to Cite This Entry:
Linear algebraic equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_algebraic_equation&oldid=19645
This article was adapted from an original article by I.V. Proskuryakov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article