Linear algebraic equation

From Encyclopedia of Mathematics
An [[Algebraic equation|algebraic equation]] of the first degree in all unknowns, that is, an equation of the form
$$a_1 x_1+\cdots +a_nx_n = b.$$
Every system of linear algebraic equations can be written in the form
\begin{equation*}
\left.\begin{aligned}
    a_{11} x_1+\cdots +a_{1n}x_n = b_1\\
    \cdots\cdots\qquad \\
    a_{m1} x_1+\cdots +a_{mn}x_n = b_m
      \end{aligned}
\right\}\label{1}
\end{equation*}
where $m$ and $n$ are natural numbers; the $a_{ij}$ ($i=1,\dots,m$; $j=1,\dots,n$) are called the coefficients at the unknowns and are given; the $b_i$ ($i=1,\dots,m$) are called the free terms and are also given; the $x_i$ ($i=1,\dots,n$) are called the unknowns and need to be found. A solution of the system of linear algebraic equations (1) is a set of values $c_1,\dots,c_n$ such that each equation of the system becomes an identity when the $c_i$ are substituted for the corresponding unknowns. For applications the most important case is that in which the coefficients at the unknowns, the free terms and the values of the unknowns are numbers (complex, real or integer), but one can also consider the case when they belong to an arbitrary [[Field|field]] $P$.
  
According to the number of solutions, systems of linear algebraic equations split into the following types:

a compatible system is a system of linear equations having at least one solution;

an incompatible (or contradictory) system is a system having no solution;

a determinate system is a system having a unique solution;

an indeterminate system is a system having more than one solution.
  
If one considers solutions of a system with values of the unknowns in a given number field (or in an arbitrary infinite field), then every indeterminate system of linear equations has infinitely many solutions. In contrast to equations of degree exceeding one, the type of a system of linear algebraic equations does not change when the given field $P$ is extended. Thus, under a field extension an incompatible system cannot become compatible, and a determinate system cannot become indeterminate. However, the set of solutions of an indeterminate system is increased by this.
The simplest way of determining the type of the system (1) and calculating its solutions is given by the [[Gauss method|Gauss method]] of eliminating the unknowns. In the case $n=m$ the system (1) is determinate if and only if the determinant formed by the coefficients of the unknowns is non-zero. In this case the unique solution of the system is found by Cramer's formulas (see [[Cramer rule|Cramer rule]]).
To solve a system of linear equations whose coefficients contain parameters, instead of Gauss' method it is more convenient to use the general theory of linear equations, associated with the [[Rank|rank]] of a matrix. The rank of a matrix can be defined as the maximal number of linearly independent rows or columns. By the theorem on the rank of a matrix, the rank of the system of rows of a matrix is equal to the rank of the system of columns and is also equal to the highest order of the non-zero minors of the matrix (cf. [[Minor|Minor]]). With the system of linear equations (1) one associates two matrices: the matrix
$$A=(a_{ij}),$$
formed by the coefficients of the unknowns, and the extended matrix
$$B=(a_{ij},b_i),$$
obtained from $A$ by adjoining the column of free terms. A criterion for the system (1) to be compatible is given by the Kronecker–Capelli theorem: A system of linear equations (1) is compatible if and only if the rank of $A$ is equal to the rank of $B$.

A system of linear equations (1) has a unique solution if and only if the rank of $A$ is equal to the rank of $B$ and equal to $n$.
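The Kronecker–Capelli criterion is directly computable. Below is a sketch (the function names `rank` and `compatible` are ours) that row-reduces over the rationals and compares the two ranks:

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a rectangular matrix over the rationals and count
    the non-zero pivot rows; this equals the rank of the matrix."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    rk, col = 0, 0
    while rk < nrows and col < ncols:
        pivot = next((r for r in range(rk, nrows) if M[r][col] != 0), None)
        if pivot is None:
            col += 1            # no pivot in this column, move on
            continue
        M[rk], M[pivot] = M[pivot], M[rk]
        for r in range(rk + 1, nrows):
            f = M[r][col] / M[rk][col]
            for c in range(col, ncols):
                M[r][c] -= f * M[rk][c]
        rk += 1
        col += 1
    return rk

def compatible(A, b):
    """Kronecker–Capelli theorem: A x = b is compatible iff
    rank(A) equals the rank of the extended matrix (A | b)."""
    B = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(B)
```

For instance, $x_1+x_2=1$, $2x_1+2x_2=3$ is incompatible: the extended matrix has rank 2 while $A$ has rank 1.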
The unknowns of a compatible system of linear equations subdivide into principal and free unknowns. For any values of the free unknowns there are uniquely determined values of the principal unknowns, which together give a solution of the given system. The subdivision into principal and free unknowns is usually not unique. Namely, if the rank of $A$ is equal to the rank of $B$ and equal to $r$, then any $r$ unknowns from the coefficients of which one can form a determinant $D\ne 0$ can be regarded as principal, and the others as free. In this case the determinant $D$ is said to be a principal (or basic) minor of the system. It can be found by the [[Bordering method|bordering method]], beginning with minors of lowest orders. It is also not always defined uniquely. In the calculation of solutions one needs to take only the $r$ equations containing the principal minor $D$ and to express the principal unknowns in general form (for example, by using Cramer's formulas) in terms of the free unknowns. These expressions are called the general solution. The free unknowns play the role of free parameters in it. Giving any values to them, one finds the values of the principal unknowns that give a solution of the system with the chosen values of the free unknowns. Any solution of a compatible system can be obtained in this way for suitable values of the free unknowns. If all coefficients at the unknowns and the free terms belong to a field $P$, then by choosing for the free unknowns values from the same field $P$ one obtains a solution with the values of all unknowns from the same field $P$. For $r=n$ all unknowns are principal and there is no general solution.
  
The system of linear equations
\begin{equation*}
\left.\begin{aligned}
    a_{11} x_1+\cdots +a_{1n}x_n = 0\\
    \cdots\cdots\qquad \\
    a_{m1} x_1+\cdots +a_{mn}x_n = 0
      \end{aligned}
\right\}\label{2}
\end{equation*}
obtained from (1) by replacing the free terms by zeros, is called the homogeneous system of linear equations corresponding to (1). The system (2) is always compatible (since the zero solution satisfies it). For it to have a non-zero solution it is necessary and sufficient that the rank of its matrix $A$ should be less than the number of unknowns $n$. In particular, for $m=n$ a square homogeneous system of linear algebraic equations has a non-zero solution if and only if its determinant is equal to zero.
  
The solutions of a compatible system of linear algebraic equations (1) and the corresponding homogeneous system (2) are connected as follows: The sum of a solution of (1) and a solution of (2) is a solution of (1); the difference between two solutions of (1) is a solution of (2). All solutions of (1) can be obtained by adding to every solution of (2) the same particular solution of (1).
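This connection is easy to check numerically. A small sketch (the helper `evaluate` and the concrete system are ours, chosen to be indeterminate so the homogeneous system has non-zero solutions):

```python
def evaluate(A, x):
    """Compute the left-hand sides A x as a list of values."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# a compatible, indeterminate system (1): x1 + x2 + x3 = 3
A, b = [[1, 1, 1]], [3]
x1 = [3, 0, 0]   # a particular solution of (1)
x2 = [1, 1, 1]   # another solution of (1)
u = [1, -1, 0]   # a solution of the homogeneous system (2)

# the sum of a solution of (1) and a solution of (2) solves (1)
assert evaluate(A, [a + c for a, c in zip(x1, u)]) == b
# the difference of two solutions of (1) solves (2)
assert evaluate(A, [a - c for a, c in zip(x1, x2)]) == [0]
```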
  
==Geometric interpretation of solutions of a system of linear equations.==
Any row of $n$ elements of a field $P$ can be regarded as the row of coordinates of a vector $x$ of an $n$-dimensional vector space $V$ over $P$ in some fixed basis. For brevity the vector $x$ is identified with the row of its coordinates. All solutions of the homogeneous system (2) form a subspace $U$ of $V$. Its dimension is equal to $n-r$, where $n$ is the number of unknowns and $r$ is the rank of the matrix $A$ of the system. If $r<n$, the subspace $U$ is non-zero and a basis of it is also called a fundamental system of solutions of the system of linear equations (2). Conversely, for every subspace $U$ of $V$ there is a homogeneous system of linear equations whose solutions form $U$. The set of vectors $Z=U+x_0$ obtained by adding to each vector of $U$ the same vector $x_0$ is called a linear variety (or linear manifold) of $V$. In case $\dim U = n-1$ the phrase hyperplane is also used. All solutions of a compatible system of linear equations (1) form a linear variety $X$; conversely, for any linear variety $Z$ there is a compatible system whose solutions form the given variety (see [[#References|[3]]]).
  
 
==Solution of systems of linear equations in integers.==
Suppose a system of equations (1) is given in which all $a_{ij}$ and $b_i$ are integers. Let $d_k$, $k=1,\dots,\min(m,n)$, denote the greatest common divisor of all minors of order $k$ of the matrix $A$ formed by the coefficients of the unknowns, and let $d_k'$, $k=1,\dots,\min(m,n+1)$, denote the analogous number for the extended matrix $B$. If all minors of order $k$ in $A$ (or $B$) are zero, one puts $d_k=0$ (respectively, $d_k'=0$). For a system of linear equations in integers (1) to have an integer solution it is necessary and sufficient that $d_k'$ is divisible by $d_k$, $k=1,\dots,\min(m,n)$, and that $d_{n+1}' = 0$ for $m>n$.
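For small systems the criterion can be checked directly from the definition. A sketch (the helpers `det`, `d` and `has_integer_solution` are ours; cofactor expansion is adequate for minors of small order):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def d(M, k):
    """gcd of all minors of order k of M (0 if they all vanish)."""
    m, n = len(M), len(M[0])
    g = 0
    for rs in combinations(range(m), k):
        for cs in combinations(range(n), k):
            g = gcd(g, det([[M[r][c] for c in cs] for r in rs]))
    return g

def has_integer_solution(A, b):
    """Criterion from the text: d_k' must be divisible by d_k for
    every k, and d'_{n+1} = 0 when m > n."""
    m, n = len(A), len(A[0])
    B = [row + [bi] for row, bi in zip(A, b)]
    for k in range(1, min(m, n) + 1):
        dk, dk2 = d(A, k), d(B, k)
        if dk == 0:
            if dk2 != 0:
                return False
        elif dk2 % dk != 0:
            return False
    if m > n and d(B, n + 1) != 0:
        return False
    return True
```

For one equation $2x_1+4x_2=b$ this reduces to the familiar condition that $\gcd(2,4)=2$ divides $b$.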
  
To state a method of calculating all integer solutions of a system of linear equations one introduces the so-called elementary transformations of integer matrices: 1) adding to the $i$-th row the $j$-th row ($i\ne j$) multiplied by an integer $c$; 2) multiplying the $i$-th row by $-1$; 3) interchanging the $i$-th and $j$-th rows; and similar transformations of columns. Under elementary transformations of the rows the system (1) is transformed into an equivalent system, and hence the set of integer solutions is unchanged. Among the elementary transformations of the columns of the matrix $A$ formed by the coefficients of the unknowns, the following transformations of unknowns occur: If $y_1,\dots,y_n$ are new unknowns, then under a transformation 1),
$$x_k = y_k\quad (k\ne j),\quad x_j = y_j + cy_i,$$
under a transformation 2),
$$x_k = y_k\quad (k\ne i),\quad x_i = -y_i,$$
under a transformation 3),
$$x_k = y_k\quad (k\ne i,j),\quad x_i = y_j,\quad x_j=y_i.$$
Under such transformations of the unknowns, integer solutions correspond to integer solutions.
A system (1) for which the rank of $A$ is equal to the rank of $B$ and equal to $r$ can be reduced, by elementary transformations of the rows of $B$ and the columns of $A$, and also by discarding zero equations, to the canonical form
$$e_1 y_1 = b_1',\dots, e_r y_r = b_r'.$$
 
+
The numbers $e_i $ satisfy the following
The numbers <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060104.png" /> satisfy the following additional conditions:
+
additional conditions:  
 
+
$$e_i > 0, \quad i=1,\dots,r,$$
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060105.png" /></td> </tr></table>
+
and  
 
+
$$e_i {\rm\ divides\ } e_{i+1},\quad i=1,\dots,r-1.$$
and
+
These additional conditions are
 
+
not essential for the problem of solving the system in integers.
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060106.png" /></td> </tr></table>
 
 
 
These additional conditions are not essential for the problem of solving the system in integers.
 
  
 
For integer solutions of
 
For integer solutions of
  
to exist it is necessary and sufficient that the numbers
+
to exist it is necessary and sufficient that the numbers  
 
+
$$c_i = \frac{b_i'}{e_i},\quad i=1,\dots,r,$$
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060107.png" /></td> </tr></table>
+
be
 
+
integers. The unknowns $y_1,\dots,y_r$ are uniquely determined, and for $r<n$ the
be integers. The unknowns <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060108.png" /> are uniquely determined, and for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060109.png" /> the unknowns <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060110.png" /> can take any integer values.
+
unknowns $y_{r+1},\dots,y_n$ can take any integer values.
 
 
To calculate the solutions of the original system (1) one must apply all transformations of columns of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060111.png" /> in the same order to the unit matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060112.png" /> of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060113.png" />. The resulting integer matrix <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060114.png" /> gives the relation between the old and new unknowns:
 
 
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060115.png" /></td> </tr></table>
 
 
 
where
 
 
 
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060116.png" /></td> </tr></table>
 
  
One must then put <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060117.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060118.png" />, and for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060119.png" /> one can assign any integer values to the unknowns <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/l/l059/l059060/l059060120.png" />, which play the role of parameters.
+
To calculate the solutions of the original system (1) one must apply
 +
all transformations of columns of $A$ in the same order to the unit
 +
matrix $E$ of order $n$. The resulting integer matrix $Q$ gives the
 +
relation between the old and new unknowns:
 +
$$X=QY,$$
 +
where
 +
$$X=\begin{pmatrix} x_1\\ \vdots\\ x_n\end{pmatrix},\quad
 +
Y=\begin{pmatrix} y_1\\ \vdots\\ y_n\end{pmatrix}.$$
 +
One must
 +
then put $y_i=c_i$, $i=1,\dots,r$, and for $r<n$ one can assign any integer values to
 +
the unknowns $y_{r+1},\dots,y_n$, which play the role of parameters.
  
This method of solving (1) over the ring of integers can be generalized to any [[Euclidean ring|Euclidean ring]] or [[Principal ideal ring|principal ideal ring]].
+
This method of solving (1) over the ring of integers can be
 +
generalized to any
 +
[[Euclidean ring|Euclidean ring]] or
 +
[[Principal ideal ring|principal ideal ring]].
  
The investigation of integer solutions and systems in the general case is the subject of the theory of [[Diophantine equations|Diophantine equations]].
+
The investigation of integer solutions and systems in the general case
 +
is the subject of the theory of
 +
[[Diophantine equations|Diophantine equations]].
  
 
====References====
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> A.G. Kurosh,   "Higher algebra" , MIR (1972) (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> A.I. Kostrikin,   "Introduction to algebra" , Springer (1982) (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> A.P. Mishina,   I.V. Proskuryakov,   "Higher algebra. Linear algebra, polynomials, general algebra" , Pergamon (1965) (Translated from Russian)</TD></TR></table>
+
<table><TR><TD valign="top">[1]</TD>
 +
<TD valign="top"> A.G. Kurosh, "Higher algebra" , MIR (1972) (Translated from Russian)</TD>
 +
</TR><TR><TD valign="top">[2]</TD>
 +
<TD valign="top"> A.I. Kostrikin, "Introduction to algebra" , Springer (1982) (Translated from Russian)</TD>
 +
</TR><TR><TD valign="top">[3]</TD>
 +
<TD valign="top"> A.P. Mishina, I.V. Proskuryakov, "Higher algebra. Linear algebra, polynomials, general algebra" , Pergamon (1965) (Translated from Russian)</TD>
 +
</TR></table>
  
  
Line 110: Line 243:
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> N. Bourbaki,   "Elements of mathematics. Algebra: Algebraic structures. Linear algebra" , '''1''' , Addison-Wesley (1974) pp. Chapt.1;2 (Translated from French)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> S. Lang,   "Algebra" , Addison-Wesley (1984)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> S. Lang,   "Linear algebra" , Addison-Wesley (1966)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> S. Lang,   "Introduction to linear algebra" , Addison-Wesley (1970)</TD></TR></table>
+
<table><TR><TD valign="top">[a1]</TD>
 +
<TD valign="top"> N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra" , '''1''' , Addison-Wesley (1974) pp. Chapt.1;2 (Translated from French)</TD>
 +
</TR><TR><TD valign="top">[a2]</TD>
 +
<TD valign="top"> S. Lang, "Algebra" , Addison-Wesley (1984)</TD>
 +
</TR><TR><TD valign="top">[a3]</TD>
 +
<TD valign="top"> S. Lang, "Linear algebra" , Addison-Wesley (1966)</TD>
 +
</TR><TR><TD valign="top">[a4]</TD>
 +
<TD valign="top"> S. Lang, "Introduction to linear algebra" , Addison-Wesley (1970)</TD>
 +
</TR></table>

Revision as of 20:58, 17 November 2011

An algebraic equation of the first degree in all unknowns, that is, an equation of the form $$a_1 x_1+\cdots +a_nx_n = b.$$ Every system of linear algebraic equations can be written in the form \begin{equation*} \left.\begin{aligned} a_{11} x_1+\cdots +a_{1n}x_n = b_1\\ \cdots\cdots\qquad \\ a_{m1} x_1+\cdots +a_{mn}x_n = b_m \end{aligned}\right\}\tag{1} \end{equation*} where $m$ and $n$ are natural numbers; the $a_{ij}$ ($i=1,\dots,m$; $j=1,\dots,n$) are called the coefficients of the unknowns and are given; the $b_i$ ($i=1,\dots,m$) are called the free terms and are also given; the $x_i$ ($i=1,\dots,n$) are called the unknowns and are to be found. A solution of the system of linear algebraic equations (1) is a set of values $c_1,\dots,c_n$ such that each equation of the system becomes an identity when the $c_i$ are substituted for the corresponding unknowns. For applications the most important case is that in which the coefficients of the unknowns, the free terms and the values of the unknowns are numbers (complex, real or integer), but one can also consider the case when they belong to an arbitrary field $P$.

According to the number of solutions, systems of linear algebraic equations split into the following types:

a compatible system is a system of linear equations having at least one solution;

an incompatible (or contradictory) system is a system having no solution;

a determinate system is a system having a unique solution;

an indeterminate system is a system having more than one solution.

If one considers solutions of a system with values of the unknowns in a given number field (or in an arbitrary infinite field), then every indeterminate system of linear equations has infinitely many solutions. In contrast to equations of degree exceeding one, the type of a system of linear algebraic equations does not change when the given field $P$ is extended. Thus, under a field extension an incompatible system cannot become compatible, and a determinate system cannot become indeterminate; the set of solutions of an indeterminate system does, however, increase under such an extension.

The simplest way of determining the type of the system (1) and calculating its solutions is given by the Gauss method of eliminating the unknowns. In the case $n=m$ the system (1) is determinate if and only if the determinant formed by the coefficients of the unknowns is non-zero. In this case the unique solution of the system is found by Cramer's formulas (see Cramer rule).
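As an illustration of Gauss' method, here is a minimal sketch in Python. The helper name `gauss_solve` and the use of exact rational arithmetic via the standard `fractions` module are choices of this sketch, not part of the article:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Classify and solve A x = b by Gaussian elimination with exact
    rational arithmetic.  Returns ('incompatible', None),
    ('determinate', solution) or ('indeterminate', None)."""
    m, n = len(A), len(A[0])
    # Extended matrix with Fraction entries.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(m)]
    row, pivots = 0, []
    for col in range(n):
        # Find a pivot in this column at or below `row`.
        p = next((r for r in range(row, m) if M[r][col] != 0), None)
        if p is None:
            continue
        M[row], M[p] = M[p], M[row]
        # Eliminate the column from all other rows.
        for r in range(m):
            if r != row and M[r][col] != 0:
                f = M[r][col] / M[row][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[row])]
        pivots.append(col)
        row += 1
        if row == m:
            break
    # A zero row of A with a non-zero free term means no solution.
    for r in range(row, m):
        if M[r][n] != 0:
            return 'incompatible', None
    if len(pivots) < n:
        return 'indeterminate', None
    x = [M[i][n] / M[i][pivots[i]] for i in range(n)]
    return 'determinate', x
```

For example, `gauss_solve([[2, 1], [1, 3]], [5, 10])` classifies the system $2x+y=5$, $x+3y=10$ as determinate with solution $x=1$, $y=3$.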

To solve a system of linear equations whose coefficients contain parameters, instead of Gauss' method it is more convenient to use the general theory of linear equations, associated with the rank of a matrix. The rank of a matrix can be defined as the maximal number of linearly independent rows or columns. By the theorem on the rank of a matrix, the rank of the system of rows of a matrix is equal to the rank of the system of columns and is also equal to the highest order of the non-zero minors of the matrix (cf. Minor). With the system of linear equations (1) one associates two matrices: the matrix $$A=(a_{ij}),$$ formed by the coefficients of the unknowns, and the extended matrix $$B=(a_{ij},b_i),$$ obtained from $A$ by adjoining the column of free terms. A criterion for the system (1) to be compatible is given by the Kronecker–Capelli theorem: A system of linear equations (1) is compatible if and only if the rank of $A$ is equal to the rank of $B$.
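The Kronecker–Capelli criterion can be checked mechanically. A sketch, assuming hypothetical helpers `rank` and `compatible`, with exact arithmetic over the rationals so that no pivot tolerance is needed:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix, computed exactly by row reduction over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def compatible(A, b):
    """Kronecker-Capelli test: (1) is compatible iff rank A = rank B."""
    B = [row + [t] for row, t in zip(A, b)]
    return rank(A) == rank(B)
```

Here `compatible([[1, 2], [2, 4]], [3, 6])` is `True` (the second equation is twice the first), while `compatible([[1, 2], [2, 4]], [3, 7])` is `False`.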

A system of linear equations (1) has a unique solution if and only if the rank of $A$ is equal to the rank of $B$ and equal to $n$.

The unknowns of a compatible system of linear equations subdivide into principal and free unknowns. For any values of the free unknowns there are uniquely determined values of the principal unknowns, which together give a solution of the given system. The subdivision into principal and free unknowns is usually not unique. Namely, if the rank of $A$ is equal to the rank of $B$ and equal to $r$, then any $r$ unknowns whose coefficients form a non-zero determinant $D$ can be regarded as principal, and the others as free. In this case the determinant $D$ is said to be a principal (or basic) minor of the system. It can be found by the bordering method, beginning with minors of lowest orders; it, too, is not always uniquely determined. In the calculation of solutions one need only take the $r$ equations containing the principal minor $D$ and express the principal unknowns in general form (for example, by using Cramer's formulas) in terms of the free unknowns. These expressions are called the general solution; in it the free unknowns play the role of free parameters. Giving them arbitrary values, one finds the values of the principal unknowns that, together with the chosen values of the free unknowns, give a solution of the system. Any solution of a compatible system can be obtained in this way for suitable values of the free unknowns. If all coefficients of the unknowns and the free terms belong to a field $P$, then by choosing for the free unknowns values from $P$ one obtains a solution with the values of all unknowns in $P$. For $r=n$ all unknowns are principal and there is no general solution.
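A sketch of how a general solution can be computed: the hypothetical helper `general_solution` brings the extended matrix to reduced row echelon form over the rationals, takes the pivot columns as principal unknowns and the rest as free, and returns a particular solution together with one vector per free unknown:

```python
from fractions import Fraction

def general_solution(A, b):
    """Return (particular, basis, free_cols): every solution is
    `particular` plus an arbitrary combination of the `basis` vectors,
    whose coefficients are the values of the free unknowns.
    Returns None for an incompatible system."""
    m, n = len(A), len(A[0])
    M = [[Fraction(x) for x in row] + [Fraction(c)] for row, c in zip(A, b)]
    pivots, r = [], 0
    for col in range(n):
        p = next((i for i in range(r, m) if M[i][col] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        M[r] = [x / M[r][col] for x in M[r]]        # normalize the pivot row
        for i in range(m):
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        pivots.append(col)
        r += 1
    if any(M[i][n] != 0 for i in range(r, m)):
        return None                                  # incompatible
    free = [j for j in range(n) if j not in pivots]
    particular = [Fraction(0)] * n
    for i, col in enumerate(pivots):
        particular[col] = M[i][n]
    basis = []
    for f_col in free:                               # one vector per free unknown
        v = [Fraction(0)] * n
        v[f_col] = Fraction(1)
        for i, col in enumerate(pivots):
            v[col] = -M[i][f_col]
        basis.append(v)
    return particular, basis, free
```

For the system $x+y+z=2$, $y+z=1$, this yields the general solution $(1,\,1-t,\,t)$ with one free parameter $t$.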

The system of linear equations \begin{equation*} \left.\begin{aligned} a_{11} x_1+\cdots +a_{1n}x_n = 0\\ \cdots\cdots\qquad \\ a_{m1} x_1+\cdots +a_{mn}x_n = 0 \end{aligned}\right\}\tag{2} \end{equation*} obtained from (1) by replacing the free terms by zeros, is called the homogeneous system of linear equations corresponding to (1). The system (2) is always compatible (since the zero solution satisfies it). For it to have a non-zero solution it is necessary and sufficient that the rank of its matrix $A$ be less than the number of unknowns $n$. In particular, for $m=n$ a square homogeneous system of linear algebraic equations has a non-zero solution if and only if its determinant is equal to zero.

The solutions of a compatible system of linear algebraic equations (1) and the corresponding homogeneous system (2) are connected as follows: The sum of a solution of (1) and a solution of (2) is a solution of (1); the difference between two solutions of (1) is a solution of (2). All solutions of (1) can be obtained by adding to every solution of (2) the same particular solution of (1).
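This correspondence can be checked on a toy system (the numbers below are purely illustrative, not taken from the article):

```python
# System (1): x + y = 3.  Corresponding homogeneous system (2): x + y = 0.
particular = (1, 2)                          # a solution of (1)
homogeneous = (1, -1)                        # a solution of (2)

# Sum of a solution of (1) and a solution of (2) solves (1):
shifted = tuple(p + h for p, h in zip(particular, homogeneous))
assert sum(shifted) == 3

# Difference of two solutions of (1) solves (2):
other = (0, 3)                               # another solution of (1)
diff = tuple(a - b for a, b in zip(other, particular))
assert sum(diff) == 0
```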

==Geometric interpretation of solutions of a system of linear equations.==
Any row of $n$ elements of a field $P$ can be regarded as the row of coordinates of a vector $x$ of an $n$-dimensional vector space $V$ over $P$ in some fixed basis. For brevity the vector $x$ is identified with the row of its coordinates. All solutions of the homogeneous system (2) form a subspace $U$ of $V$. Its dimension is equal to $n-r$, where $n$ is the number of unknowns and $r$ is the rank of the matrix $A$ of the system. If $r<n$, the subspace $U$ is non-zero, and a basis of it is also called a fundamental system of solutions of the system of linear equations (2). Conversely, for every subspace $U$ of $V$ there is a homogeneous system of linear equations whose solutions form $U$. The set of vectors $Z=U+x_0$ obtained by adding to each vector of $U$ the same vector $x_0$ is called a linear variety (or linear manifold) of $V$; in case $\dim U = n-1$ the term hyperplane is also used. All solutions of a compatible system of linear equations (1) form a linear variety $X$; conversely, for any linear variety $Z$ there is a compatible system whose solutions form the given variety (see [3]).

==Solution of systems of linear equations in integers.==

Suppose a system of equations (1) is given in which all $a_{ij}$ and $b_i$ are integers. Let $d_k$, $k=1,\dots,\min(m,n)$, denote the greatest common divisor of all minors of order $k$ of the matrix $A$ formed by the coefficients of the unknowns, and let $d_k'$, $k=1,\dots,\min(m,n+1)$, denote the analogous number for the extended matrix $B$. If all minors of order $k$ in $A$ (or $B$) are zero, one puts $d_k=0$ (respectively, $d_k'=0$). For a system of linear equations in integers (1) to have an integer solution it is necessary and sufficient that $d_k'$ is divisible by $d_k$, $k=1,\dots,\min(m,n)$, and that $d_{n+1}' = 0$ for $m>n$.
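The gcd-of-minors criterion can be verified on a small example. A sketch with hypothetical helpers `det` and `minor_gcd` (cofactor expansion is fine at these sizes; `math.gcd` returns a non-negative gcd even for negative minors):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor_gcd(M, k):
    """d_k: gcd of all k-by-k minors of the integer matrix M (0 if they all vanish)."""
    g = 0
    for rows in combinations(range(len(M)), k):
        for cols in combinations(range(len(M[0])), k):
            g = gcd(g, det([[M[i][j] for j in cols] for i in rows]))
    return g

# The system 2x = 2, 4y = 8 has the integer solution x = 1, y = 2,
# and indeed d_k divides d_k' for k = 1, 2 (here m = n, so the extra
# condition d_{n+1}' = 0 does not arise).
A = [[2, 0], [0, 4]]
B = [row + [t] for row, t in zip(A, [2, 8])]
for k in (1, 2):
    dk, dk_prime = minor_gcd(A, k), minor_gcd(B, k)
    assert dk != 0 and dk_prime % dk == 0
```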

To state a method of calculating all integer solutions of a system of linear equations one introduces the so-called elementary transformations of integer matrices: 1) adding to the $i$-th row the $j$-th row ($i\ne j$) multiplied by an integer $c$; 2) multiplying the $i$-th row by $-1$; 3) interchanging the $i$-th and $j$-th rows; and similar transformations of columns. Under elementary transformations of the rows the system (1) is transformed into an equivalent system, and hence the set of integer solutions is unchanged. Among the elementary transformations of the columns of the matrix $A$ formed by the coefficients of the unknowns, the following transformations of unknowns occur: if $y_1,\dots,y_n$ are new unknowns, then under a transformation 1), $$x_k = y_k\quad (k\ne j),\quad x_j = y_j + cy_i,$$ under a transformation 2), $$x_k = y_k\quad (k\ne i),\quad x_i = -y_i,$$ under a transformation 3),

$$x_k = y_k\quad (k\ne i,j),\quad x_i = y_j,\quad x_j=y_i.$$ Under such transformations of the unknowns, integer solutions correspond to integer solutions.

A system (1) for which the rank of $A$ is equal to the rank of $B$ and equal to $r$ can be reduced, by elementary transformations of the rows of $B$ and the columns of $A$, and also by discarding zero equations, to the canonical form $$e_1 y_1 = b_1',\dots, e_r y_r = b_r'.$$ The numbers $e_i$ satisfy the following additional conditions: $$e_i > 0, \quad i=1,\dots,r,$$ and $$e_i {\rm\ divides\ } e_{i+1},\quad i=1,\dots,r-1.$$ These additional conditions are not essential for the problem of solving the system in integers.

For integer solutions of the canonical system to exist it is necessary and sufficient that the numbers $$c_i = \frac{b_i'}{e_i},\quad i=1,\dots,r,$$ be integers. The unknowns $y_1,\dots,y_r$ are then uniquely determined, and for $r<n$ the unknowns $y_{r+1},\dots,y_n$ can take any integer values.

To calculate the solutions of the original system (1) one must apply all transformations of columns of $A$ in the same order to the unit matrix $E$ of order $n$. The resulting integer matrix $Q$ gives the relation between the old and new unknowns: $$X=QY,$$ where $$X=\begin{pmatrix} x_1\\ \vdots\\ x_n\end{pmatrix},\quad Y=\begin{pmatrix} y_1\\ \vdots\\ y_n\end{pmatrix}.$$ One must then put $y_i=c_i$, $i=1,\dots,r$, and for $r<n$ one can assign any integer values to the unknowns $y_{r+1},\dots,y_n$, which play the role of parameters.
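The whole procedure can be sketched as follows. The function name `solve_integer_system` is illustrative; row operations act on $A$ and $b$ together, column operations are mirrored on an identity matrix $Q$ so that $X = QY$ recovers the original unknowns, and the reduction stops at a diagonal form without enforcing the divisibility conditions $e_i \mid e_{i+1}$, which the article notes are not essential here:

```python
def solve_integer_system(A, b):
    """Solve A x = b in integers by reduction to a diagonal form.
    Returns (particular_solution, basis_of_integer_kernel), or None
    if no integer solution exists."""
    m, n = len(A), len(A[0])
    A = [row[:] for row in A]; b = b[:]
    Q = [[int(i == j) for j in range(n)] for i in range(n)]

    def swap_rows(i, j): A[i], A[j] = A[j], A[i]; b[i], b[j] = b[j], b[i]
    def add_row(i, j, c):            # row_i += c * row_j, applied to b too
        A[i] = [x + c * y for x, y in zip(A[i], A[j])]; b[i] += c * b[j]
    def swap_cols(i, j):             # mirrored on Q
        for row in A + Q: row[i], row[j] = row[j], row[i]
    def add_col(i, j, c):            # col_i += c * col_j, mirrored on Q
        for row in A + Q: row[i] += c * row[j]

    r = 0
    while r < min(m, n):
        pivot = next(((i, j) for i in range(r, m) for j in range(r, n)
                      if A[i][j] != 0), None)
        if pivot is None:
            break
        swap_rows(r, pivot[0]); swap_cols(r, pivot[1])
        while True:                  # Euclidean steps until the pivot divides
            for i in range(r + 1, m):
                if A[i][r]:
                    add_row(i, r, -(A[i][r] // A[r][r]))
            for j in range(r + 1, n):
                if A[r][j]:
                    add_col(j, r, -(A[r][j] // A[r][r]))
            rest = [(i, r) for i in range(r + 1, m) if A[i][r]] + \
                   [(r, j) for j in range(r + 1, n) if A[r][j]]
            if not rest:
                break
            i, j = min(rest, key=lambda p: abs(A[p[0]][p[1]]))
            if i != r: swap_rows(r, i)
            else: swap_cols(r, j)
        r += 1
    if any(b[i] != 0 for i in range(r, m)):
        return None                  # zero row with non-zero free term
    y = []
    for i in range(r):               # e_i y_i = b_i' must divide evenly
        if b[i] % A[i][i] != 0:
            return None
        y.append(b[i] // A[i][i])
    y += [0] * (n - r)
    x = [sum(Q[i][j] * y[j] for j in range(n)) for i in range(n)]
    kernel = [[Q[i][j] for i in range(n)] for j in range(r, n)]
    return x, kernel
```

For $x + 2y = 3$, $2x + 4y = 6$ this returns the particular solution $(3, 0)$ and the kernel vector $(-2, 1)$, so all integer solutions are $(3 - 2t,\ t)$, $t \in \mathbb{Z}$.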

This method of solving (1) over the ring of integers can be generalized to any Euclidean ring or principal ideal ring.

The investigation of integer solutions and systems in the general case is the subject of the theory of Diophantine equations.

References

[1] A.G. Kurosh, "Higher algebra" , MIR (1972) (Translated from Russian)
[2] A.I. Kostrikin, "Introduction to algebra" , Springer (1982) (Translated from Russian)
[3] A.P. Mishina, I.V. Proskuryakov, "Higher algebra. Linear algebra, polynomials, general algebra" , Pergamon (1965) (Translated from Russian)


Comments

References

[a1] N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra" , 1 , Addison-Wesley (1974) pp. Chapt.1;2 (Translated from French)
[a2] S. Lang, "Algebra" , Addison-Wesley (1984)
[a3] S. Lang, "Linear algebra" , Addison-Wesley (1966)
[a4] S. Lang, "Introduction to linear algebra" , Addison-Wesley (1970)
How to Cite This Entry:
Linear algebraic equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_algebraic_equation&oldid=16383
This article was adapted from an original article by I.V. Proskuryakov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article