Anti-eigenvalue

From Encyclopedia of Mathematics
Latest revision as of 18:47, 5 April 2020


The theory of anti-eigenvalues is a spectral theory based upon the turning angles of a matrix or operator $A$. (See Eigen value for the spectral theory of stretchings, rather than turnings, of a matrix or operator.)

For a strongly accretive operator $A$, i.e., $\operatorname{Re}\langle Ax, x\rangle \geq m \|x\|^{2}$ for some $m > 0$, the first anti-eigenvalue $\mu$ is defined by

$$ \tag{a1} \mu \equiv \cos A = \inf_{x \in D(A)} \frac{\operatorname{Re}\langle Ax, x\rangle}{\|Ax\| \, \|x\|} . $$

From (a1) one has immediately the notion of the angle $\phi(A)$: the largest angle through which $A$ may turn a vector. Any corresponding vector $x$ which is turned by that angle is called a first anti-eigenvector. It turns out that, in general, the first anti-eigenvectors come in pairs. Two important early results were the minmax theorem and the Euler equation.
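As a numerical illustration (an addition, not part of the original article): for a symmetric positive-definite matrix with extreme eigenvalues $\lambda_1$ and $\lambda_n$, the first anti-eigenvalue is known in this theory to equal $2\sqrt{\lambda_1 \lambda_n}/(\lambda_1 + \lambda_n)$. The sketch below checks this for a $2 \times 2$ example by minimizing the quotient in (a1) over a fine grid of unit vectors.

```python
import numpy as np

# Sketch: estimate mu = cos A from (a1) for a 2x2 symmetric
# positive-definite matrix by minimizing the quotient
# <Ax,x> / (||Ax|| ||x||) over unit vectors, and compare with the
# SPD closed form 2*sqrt(l1*ln)/(l1 + ln).
A = np.diag([1.0, 4.0])                           # eigenvalues l1 = 1, ln = 4

theta = np.linspace(0.0, np.pi, 200001)
X = np.stack([np.cos(theta), np.sin(theta)])      # unit vectors x(theta)
AX = A @ X
quot = np.einsum('ij,ij->j', AX, X) / np.linalg.norm(AX, axis=0)

mu = quot.min()
closed_form = 2 * np.sqrt(1.0 * 4.0) / (1.0 + 4.0)
print(mu, closed_form)                            # both ≈ 0.8
```

The grid also exhibits two distinct minimizing directions in $[0, \pi)$, consistent with the statement that first anti-eigenvectors come in pairs.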

Minmax theorem.

For any strongly accretive bounded operator $A$ on a Hilbert space $X$,

$$ \tag{a2} \sup_{\|x\| = 1} \; \inf_{-\infty < \epsilon < \infty} \|(\epsilon A - I)x\|^{2} = \inf_{-\infty < \epsilon < \infty} \; \sup_{\|x\| = 1} \|(\epsilon A - I)x\|^{2} . $$
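A quick numerical check of (a2) (an addition, not part of the original article): for a $2 \times 2$ symmetric positive-definite matrix, both sides can be approximated on grids over the unit vectors $x$ and the scalar $\epsilon$, and both come out near the common value $\sin^{2} A$.

```python
import numpy as np

# Sketch: approximate both sides of the minmax identity (a2) on grids.
A = np.diag([1.0, 4.0])
I = np.eye(2)

theta = np.linspace(0.0, np.pi, 2001)
X = np.stack([np.cos(theta), np.sin(theta)])      # unit vectors
eps = np.linspace(-3.0, 3.0, 6001)

# vals[i, j] = ||(eps_i * A - I) x_j||^2
vals = np.array([np.sum(((e * A - I) @ X)**2, axis=0) for e in eps])

lhs = vals.min(axis=0).max()                      # sup_x inf_eps
rhs = vals.max(axis=1).min()                      # inf_eps sup_x
print(lhs, rhs)                                   # both ≈ 0.36
```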

Using the minmax theorem, the right-hand side of (a2) is seen to define

$$ \tag{a3} \nu = \sin A = \inf_{\epsilon > 0} \|\epsilon A - I\| $$

in such a way that $\cos^{2} A + \sin^{2} A = 1$. This implies an operator trigonometry (see [a1]).
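As a numerical illustration (an addition, not from the article), the infimum in (a3) can be evaluated on a grid; for a symmetric positive-definite matrix it works out to $(\lambda_n - \lambda_1)/(\lambda_n + \lambda_1)$, and the identity $\cos^{2} A + \sin^{2} A = 1$ can then be checked directly.

```python
import numpy as np

# Sketch: evaluate sin A = inf_{eps > 0} ||eps*A - I|| on a grid of eps
# for a 2x2 SPD matrix, then verify cos^2 A + sin^2 A = 1 numerically.
A = np.diag([1.0, 4.0])                           # l1 = 1, ln = 4
I = np.eye(2)

eps = np.linspace(1e-6, 2.0, 20001)
sinA = min(np.linalg.norm(e * A - I, 2) for e in eps)   # spectral norms

cosA = 2 * np.sqrt(1.0 * 4.0) / (1.0 + 4.0)       # SPD closed form for cos A
print(sinA, cosA**2 + sinA**2)                    # ≈ 0.6, ≈ 1.0
```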

Euler equation.

For any strongly accretive bounded operator $A$ on a Hilbert space $X$, the Euler equation for the anti-eigenvalue functional $\mu$ in (a1) is

$$ \tag{a4} 2 \|Ax\|^{2} \|x\|^{2} (\operatorname{Re} A)x - \|x\|^{2} \operatorname{Re}\langle Ax, x\rangle \, A^{*}Ax - \|Ax\|^{2} \operatorname{Re}\langle Ax, x\rangle \, x = 0 . $$

When $A$ is a normal operator, (a4) is satisfied not only by the first anti-eigenvectors of $A$, but by all eigenvectors of $A$. Therefore the Euler equation may be viewed as a significant extension of the Rayleigh–Ritz theory for the variational characterization of eigenvalues of a self-adjoint or normal operator $A$. The eigenvectors maximize the variational quotient (a1); the anti-eigenvectors minimize it. See [a2], [a3].
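This can be checked numerically (an addition, not part of the original article; the anti-eigenvector used below is taken, as an assumption, in the standard SPD form $\sqrt{\lambda_n/(\lambda_1+\lambda_n)}\, x_1 + \sqrt{\lambda_1/(\lambda_1+\lambda_n)}\, x_n$ in the eigenbasis):

```python
import numpy as np

# Sketch: verify that both an eigenvector and a first anti-eigenvector
# make the Euler-equation residual (a4) vanish for a symmetric matrix.
A = np.diag([1.0, 4.0])

def euler_residual(x, A):
    Ax = A @ x
    re = Ax @ x                       # Re<Ax, x> (everything real here)
    reA = (A + A.T) / 2               # Re A
    return (2 * (Ax @ Ax) * (x @ x) * (reA @ x)
            - (x @ x) * re * (A.T @ Ax)          # A* A x
            - (Ax @ Ax) * re * x)

eig = np.array([1.0, 0.0])                       # eigenvector of A
anti = np.array([np.sqrt(4/5), np.sqrt(1/5)])    # assumed anti-eigenvector
print(np.linalg.norm(euler_residual(eig, A)),
      np.linalg.norm(euler_residual(anti, A)))   # both ≈ 0
```

At the anti-eigenvector the quotient (a1) evaluates to $0.8 = \cos A$ for this matrix, while the eigenvectors give the maximal value $1$.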

The theory of anti-eigenvalues has been applied recently (from 1990 onward) to gradient and iterative methods for the solution of linear systems $Ax = b$; see [a5], [a6]. For example, the Kantorovich convergence rate for steepest descent,

$$ E_{A}(x_{k+1}) \leq \left( 1 - 4 \lambda_{1} \lambda_{n} (\lambda_{1} + \lambda_{n})^{-2} \right) E_{A}(x_{k}) , $$

where $E_{A}$ denotes the $A$-inner-product error $\langle (x - x^{*}), A(x - x^{*}) \rangle$, becomes

$$ E_{A}(x_{k+1}) \leq (\sin^{2} A) \, E_{A}(x_{k}) . $$

Thus, the Kantorovich error rate is trigonometric. Similar trigonometric convergence bounds hold for conjugate gradient and related more sophisticated algorithms [a4]. Even the basic Richardson method $x_{k+1} = x_{k} + \alpha (b - Ax_{k})$ (cf. also Richardson extrapolation) may be seen to have optimal convergence rate $\rho_{\mathrm{opt}} = \sin A$. For further information, see [a5], [a6].
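Both rates can be observed directly (a numerical sketch, not from the article; the Richardson step $\alpha = 2/(\lambda_1 + \lambda_n)$ is the assumed optimal choice): steepest descent and Richardson iteration are run below on a small SPD system with $\sin A = 0.6$, and the per-step $A$-norm error ratios stay within $\sin^{2} A = 0.36$.

```python
import numpy as np

# Sketch: observe the trigonometric convergence rates on a 2x2 SPD
# system Ax = b with eigenvalues l1 = 1, ln = 4, so sin A = 0.6.
A = np.diag([1.0, 4.0])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)

def E(x):                                  # A-inner-product error
    d = x - x_star
    return d @ A @ d

# Steepest descent: exact line search along the residual.
x = np.zeros(2)
sd_worst = 0.0
for _ in range(15):
    r = b - A @ x
    x_new = x + (r @ r) / (r @ A @ r) * r
    sd_worst = max(sd_worst, E(x_new) / E(x))
    x = x_new

# Richardson iteration with the (assumed) optimal step 2/(l1 + ln).
alpha = 2.0 / (1.0 + 4.0)
x = np.zeros(2)
rich_ratios = []
for _ in range(15):
    x_new = x + alpha * (b - A @ x)
    rich_ratios.append(E(x_new) / E(x))
    x = x_new

print(sd_worst, max(rich_ratios))          # both bounded by sin^2 A = 0.36
```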

References

[a1] K. Gustafson, "Operator trigonometry" Linear Multilinear Alg. , 37 (1994) pp. 139–159
[a2] K. Gustafson, "Antieigenvalues" Linear Alg. & Its Appl. , 208/209 (1994) pp. 437–454
[a3] K. Gustafson, "Matrix trigonometry" Linear Alg. & Its Appl. , 217 (1995) pp. 117–140
[a4] K. Gustafson, "Operator trigonometry of iterative methods" Numerical Linear Alg. Appl. , to appear (1997)
[a5] K. Gustafson, "Lectures on computational fluid dynamics, mathematical physics, and linear algebra" , Kaigai & World Sci. (1996/7)
[a6] K. Gustafson, D. Rao, "Numerical range" , Springer (1997)
How to Cite This Entry:
Anti-eigenvalue. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Anti-eigenvalue&oldid=45192
This article was adapted from an original article by K. Gustafson (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article