Lyapunov equation


Usually, the Lyapunov equation is understood to be the matrix equation

\begin{equation} \tag{a1} A ^ { * } X + X A + C = 0, \end{equation}

where the star denotes transposition for matrices with real entries, and transposition combined with complex conjugation for matrices with complex entries; $C$ is symmetric (or Hermitian in the complex case; cf. Hermitian matrix; Symmetric matrix). In fact, (a1) is a special case of the matrix Sylvester equation

\begin{equation} \tag{a2} X A + B X + C = 0. \end{equation}

The main result concerning the Sylvester equation is the following: If $A$ and $- B$ have no common eigenvalues, then the Sylvester equation has a unique solution for any $C$.
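
The origin of this eigenvalue condition can be seen by vectorizing (a2). A brief sketch, using the standard identity $\operatorname{vec}(MXN) = (N^\top \otimes M)\operatorname{vec}(X)$: equation (a2) is equivalent to the linear system

\begin{equation*} ( A ^ { \top } \otimes I + I \otimes B ) \operatorname { vec } ( X ) = - \operatorname { vec } ( C ), \end{equation*}

and the eigenvalues of the Kronecker sum $A^\top \otimes I + I \otimes B$ are precisely the sums $\lambda_j(A) + \mu_k(B)$ of eigenvalues of $A$ and $B$. Hence the coefficient matrix is invertible, and (a2) is uniquely solvable for every $C$, exactly when no such sum vanishes, i.e. when $A$ and $-B$ have no common eigenvalue.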

When $B = A ^ { * }$ and no two eigenvalues $\lambda _ { j }$, $\lambda _ { k }$ of $A$ satisfy $\lambda _ { j } + \overline { \lambda } _ { k } = 0$, equation (a1) has a unique Hermitian solution for any $C$. Moreover, if $A$ is a Hurwitz matrix (i.e. all its eigenvalues lie in the open left half-plane, hence have strictly negative real parts), then this unique solution is

\begin{equation} \tag{a3} X = \int _ { 0 } ^ { \infty } e ^ { A ^ { * } t } C e ^ { A t } d t, \end{equation}

and if $C \leq 0$, then $X \leq 0$. From this one may deduce that if $A$ and $P$ satisfy $A ^ { * } P + P A = - C$ with $C > 0$, then a necessary and sufficient condition for $A$ to be a Hurwitz matrix is that $P > 0$. In fact, this last property justifies the assignment of Lyapunov's name to (a1): in Lyapunov's famous monograph [a1], Chap. 20, Thm. 2, one finds the following result. Consider the partial differential equation

\begin{equation} \tag{a4} \sum _ { i = 1 } ^ { n } \left( \sum _ { j = 1 } ^ { n } a _ { i j } x _ { j } \right) \frac { \partial V } { \partial x _ { i } } = U. \end{equation}

If $A$ has only eigenvalues with strictly negative real parts and $U$ is a form of even degree and of definite sign, then the solution $V$ of this equation is a form of the same degree and of definite sign, with sign opposite to that of $U$. Now, if $U = - x ^ { * } C x < 0$ with $x = \operatorname { col } ( x _ { 1 } , \ldots , x _ { n } )$, then $V = x ^ { * } P x$, where $P > 0$ is the solution of (a1). In fact, $V$ is a Lyapunov function for the system

\begin{equation} \tag{a5} \dot { x } = A x. \end{equation}
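
Two short verifications tie these statements together. First, for (a3): if $A$ is Hurwitz, then $e ^ { A ^ { * } t } C e ^ { A t } \rightarrow 0$ as $t \rightarrow \infty$, and

\begin{equation*} A ^ { * } X + X A = \int _ { 0 } ^ { \infty } \frac { d } { d t } \left( e ^ { A ^ { * } t } C e ^ { A t } \right) d t = - C, \end{equation*}

so the matrix (a3) indeed satisfies (a1). Second, for the connection between (a4) and (a1): along trajectories of (a5) one has, for $V = x ^ { * } P x$,

\begin{equation*} \dot { V } = \dot { x } ^ { * } P x + x ^ { * } P \dot { x } = x ^ { * } ( A ^ { * } P + P A ) x, \end{equation*}

and the left-hand side of (a4) is precisely this derivative of $V$ along (a5); requiring $\dot { V } = U = - x ^ { * } C x$ thus amounts to the Lyapunov equation (a1) for $P$.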

These facts and results have a straightforward extension to the discrete-time case: for the system

\begin{equation} \tag{a6} x _ { k + 1 } = A x _ { k } \end{equation}

one may consider the quadratic Lyapunov function as above (i.e. $V = x ^ { * } P x$) and obtain that $P$ has to be a solution of the discrete-time Lyapunov equation

\begin{equation} \tag{a7} A ^ { * } X A - X + C = 0, \end{equation}

whose solution has the form

\begin{equation} \tag{a8} X = \sum _ { k = 0 } ^ { \infty } ( A ^ { * } ) ^ { k } C A ^ { k }, \end{equation}

provided the eigenvalues of $A$ are inside the unit disc.
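
A telescoping argument gives the analogous check here: with $X$ as in (a8) and the series convergent,

\begin{equation*} A ^ { * } X A - X = \sum _ { k = 0 } ^ { \infty } ( A ^ { * } ) ^ { k + 1 } C A ^ { k + 1 } - \sum _ { k = 0 } ^ { \infty } ( A ^ { * } ) ^ { k } C A ^ { k } = - C, \end{equation*}

so (a8) satisfies (a7); the difference $V ( x _ { k + 1 } ) - V ( x _ { k } ) = x _ { k } ^ { * } ( A ^ { * } P A - P ) x _ { k } = - x _ { k } ^ { * } C x _ { k }$ plays the role of $\dot { V }$ from the continuous-time case.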

The equation may be defined for the time-varying case also. For the system

\begin{equation} \tag{a9} \dot { x } = A ( t ) x \end{equation}

one may consider the quadratic Lyapunov function $V ( t , x ) = x ^ { * } P ( t ) x$ and obtain that $P ( t )$ has to be the unique solution, bounded on the whole real axis, of the matrix differential equation

\begin{equation} \tag{a10} \dot{X} + A ^ { * } ( t ) X + X A ( t ) + C ( t ) = 0. \end{equation}

This solution is

\begin{equation} \tag{a11} X ( t ) = \int _ { t } ^ { \infty } X _ { A } ^ { * } ( z , t ) C ( z ) X _ { A } ( z , t ) \, d z, \end{equation}

$X _ { A } ( t , z )$ being the matrix solution of $\dot { X } = A ( t ) X$, $X _ { A } ( z , z ) = I$ (the transition matrix of (a9)). The solution is well defined if $A ( t )$ defines an exponentially stable evolution ($| X _ { A } ( t , z ) | \leq \beta e ^ { - \alpha ( t - z ) }$ for $t \geq z$, with $\alpha , \beta > 0$); differentiating (a11) with respect to $t$, using $\partial X _ { A } ( z , t ) / \partial t = - X _ { A } ( z , t ) A ( t )$, shows directly that (a10) holds. It is worth mentioning that if $A ( t )$ and $C ( t )$ are periodic or almost periodic, then $X ( t )$ defined by (a11) is periodic or almost periodic, respectively. Extensions of this result to the discrete-time and infinite-dimensional (operator) cases are widely known. The Lyapunov equation has many applications in stability and control theory, and efficient numerical algorithms for solving it are available.
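
For the constant-coefficient equations (a1) and (a7), standard library routines can be used; the following is a minimal sketch using SciPy. Note SciPy's sign convention: solve_continuous_lyapunov solves $A X + X A ^ { H } = Q$, so (a1) corresponds to passing $A ^ { * }$ and $- C$, and similarly for the discrete-time routine.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

    # A Hurwitz matrix (eigenvalues -1 and -3) and a sample right-hand side C.
    A = np.array([[-1.0, 2.0],
                  [0.0, -3.0]])
    C = np.eye(2)

    # (a1): A* X + X A + C = 0.  SciPy solves M X + X M^H = Q,
    # so take M = A* and Q = -C.
    X = solve_continuous_lyapunov(A.conj().T, -C)
    print(np.allclose(A.conj().T @ X + X @ A + C, 0))  # True

    # (a7): A* X A - X + C = 0, for a matrix with spectral radius < 1.
    Ad = 0.5 * A / np.abs(np.linalg.eigvals(A)).max()
    Xd = solve_discrete_lyapunov(Ad.conj().T, C)
    print(np.allclose(Ad.conj().T @ Xd @ Ad - Xd + C, 0))  # True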

References

[a1] A.M. Lyapunov, "General problem of stability of motion", USSR Acad. Publ. House (1950) (In Russian)
[a2] R.E. Bellman, "Introduction to matrix analysis", McGraw-Hill (1960)
[a3] A. Halanay, "Differential equations: stability, oscillations, time lags", Acad. Press (1966)
[a4] A. Halanay, D. Wexler, "Qualitative theory of pulse systems", Nauka (1971) (In Russian)
[a5] A. Halanay, V. Räsvan, "Applications of Lyapunov methods in stability", Kluwer Acad. Publ. (1993)
How to Cite This Entry:
Lyapunov equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Lyapunov_equation&oldid=50042
This article was adapted from an original article by Vladimir Räsvan (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.