Anti-eigenvalue

The theory of anti-eigenvalues is a spectral theory based upon the turning angles of a matrix or operator $ A $. (See Eigen value for the spectral theory of stretchings, rather than turnings, of a matrix or operator.)

For a strongly accretive operator $ A $, i.e., $ { \mathop{\rm Re} } \langle {Ax,x } \rangle \geq m \| x \| ^ {2} $ for some $ m > 0 $ and all $ x $ in the domain $ D ( A ) $, the first anti-eigenvalue $ \mu $ is defined by

$$ \tag{a1 } \mu \equiv \cos A = \inf _ {x \in D ( A ) } { \frac{ { \mathop{\rm Re} } \left \langle {Ax, x } \right \rangle }{\left \| {Ax } \right \| \left \| x \right \| } } . $$

From (a1) one immediately obtains the angle $ \phi ( A ) $, defined by $ \cos \phi ( A ) = \mu $: the largest angle through which $ A $ may turn a vector. Any vector $ x $ at which the infimum in (a1) is attained, i.e., which is turned by that maximal angle, is called a first anti-eigenvector. It turns out that, in general, the first anti-eigenvectors come in pairs. Two important early results were the minmax theorem and the Euler equation.
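
For a symmetric positive-definite matrix $ A $ with smallest and largest eigenvalues $ \lambda _ {1} $ and $ \lambda _ {n} $, the first anti-eigenvalue has the explicit value $ \cos A = 2 \sqrt {\lambda _ {1} \lambda _ {n} } / ( \lambda _ {1} + \lambda _ {n} ) $, and the first anti-eigenvectors are the pair of weighted combinations $ x _ {\pm } = \pm \sqrt {\lambda _ {n} / ( \lambda _ {1} + \lambda _ {n} ) } x _ {1} + \sqrt {\lambda _ {1} / ( \lambda _ {1} + \lambda _ {n} ) } x _ {n} $ of normalized eigenvectors $ x _ {1} , x _ {n} $ belonging to $ \lambda _ {1} , \lambda _ {n} $. As a small worked illustration, for $ A = { \mathop{\rm diag} } ( 1, 4 ) $ one finds

$$ \cos A = { \frac{2 \sqrt {1 \cdot 4 } }{1 + 4 } } = { \frac{4}{5} } , \qquad \phi ( A ) = { \mathop{\rm arccos} } { \frac{4}{5} } \approx 36.87 ^ {\circ } , \qquad x _ {\pm } = \pm \sqrt { { \frac{4}{5} } } e _ {1} + \sqrt { { \frac{1}{5} } } e _ {2} . $$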

Minmax theorem.

For any strongly accretive bounded operator $ A $ on a Hilbert space $ X $,

$$ \tag{a2 } \sup _ {\left \| x \right \| = 1 } \inf _ {- \infty < \epsilon < \infty } \left \| {( \epsilon A - I ) x } \right \| ^ {2} = \inf _ {- \infty < \epsilon < \infty } \sup _ {\left \| x \right \| = 1 } \left \| {( \epsilon A - I ) x } \right \| ^ {2} . $$

Using the minmax theorem, the right-hand side of (a2) is seen to define

$$ \tag{a3 } \nu = \sin A = \inf _ {\epsilon > 0 } \left \| {\epsilon A - I } \right \| $$

in such a way that $ \cos ^ {2} A + \sin ^ {2} A = 1 $. This identity gives rise to an operator trigonometry (see [a1]).
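
As a numerical illustration only, both quantities can be approximated directly for a small real symmetric positive-definite matrix (for which $ { \mathop{\rm Re} } \langle {Ax,x } \rangle = \langle {Ax,x } \rangle $) and the identity $ \cos ^ {2} A + \sin ^ {2} A = 1 $ checked. The following sketch uses NumPy/SciPy; the matrix and the optimization routines are arbitrary illustrative choices, not part of the theory itself.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Illustrative sketch: approximate cos A from (a1) and sin A from (a3)
# for a symmetric positive-definite matrix and check cos^2 A + sin^2 A = 1.
A = np.diag([1.0, 2.0, 4.0])

def quotient(x):
    # Re<Ax, x> / (||Ax|| ||x||); for real symmetric A the Re is redundant.
    Ax = A @ x
    return (x @ Ax) / (np.linalg.norm(Ax) * np.linalg.norm(x))

# cos A: minimize the quotient over nonzero x (several random restarts).
rng = np.random.default_rng(0)
cos_A = min(minimize(quotient, rng.standard_normal(3)).fun for _ in range(20))

# sin A: minimize ||eps*A - I|| (spectral norm) over eps > 0.
sin_A = minimize_scalar(lambda eps: np.linalg.norm(eps * A - np.eye(3), 2),
                        bounds=(1e-6, 10.0), method="bounded").fun

lam1, lamn = 1.0, 4.0  # extreme eigenvalues of A
print(cos_A, 2 * np.sqrt(lam1 * lamn) / (lam1 + lamn))   # both ~0.8
print(sin_A, (lamn - lam1) / (lamn + lam1))              # both ~0.6
print(cos_A**2 + sin_A**2)                               # ~1.0
```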

Euler equation.

For any strongly accretive bounded operator $ A $ on a Hilbert space $ X $, the Euler equation for the anti-eigenvalue functional $ \mu $ in (a1) is

$$ \tag{a4 } 2 \left \| {Ax } \right \| ^ {2} \left \| x \right \| ^ {2} ( { \mathop{\rm Re} } A ) x - \left \| x \right \| ^ {2} { \mathop{\rm Re} } \left \langle {Ax,x } \right \rangle A ^ {*} Ax - \left \| {Ax } \right \| ^ {2} { \mathop{\rm Re} } \left \langle {Ax,x } \right \rangle x = 0. $$

When $ A $ is a normal operator, (a4) is satisfied not only by the first anti-eigenvectors of $ A $, but by all eigenvectors of $ A $. Therefore the Euler equation may be viewed as a significant extension of the Rayleigh–Ritz theory for the variational characterization of eigenvalues of a self-adjoint or normal operator $ A $. The eigenvectors maximize the variational quotient (a1). The anti-eigenvectors minimize it. See [a2], [a3].
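
A quick numerical sanity check of this statement, in the same illustrative spirit (real symmetric positive-definite $ A $, so that $ { \mathop{\rm Re} } A = A $ and $ A ^ {*} = A ^ {T} $), evaluates the left-hand side of (a4) at an eigenvector, at a first anti-eigenvector, and at a generic vector:

```python
import numpy as np

# Left-hand side of the Euler equation (a4) for real symmetric A.
A = np.diag([1.0, 2.0, 4.0])

def euler_residual(A, x):
    Ax = A @ x
    nAx2, nx2 = Ax @ Ax, x @ x
    re = x @ Ax                      # Re <Ax, x> for real symmetric A
    return 2 * nAx2 * nx2 * (A @ x) - nx2 * re * (A.T @ Ax) - nAx2 * re * x

lam1, lamn = 1.0, 4.0
# an eigenvector of A (eigenvalue 2) satisfies (a4)
print(np.linalg.norm(euler_residual(A, np.array([0.0, 1.0, 0.0]))))   # ~0
# a first anti-eigenvector (weighted mix of the extreme eigenvectors) too
x_anti = np.array([np.sqrt(lamn / (lam1 + lamn)), 0.0,
                   np.sqrt(lam1 / (lam1 + lamn))])
print(np.linalg.norm(euler_residual(A, x_anti)))                      # ~0
# a generic vector does not satisfy (a4)
print(np.linalg.norm(euler_residual(A, np.array([1.0, 1.0, 1.0]))))   # > 0
```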

The theory of anti-eigenvalues has been applied recently (from 1990 onward) to gradient and iterative methods for the solution of linear systems $ Ax = b $; see [a5], [a6]. For example, the Kantorovich convergence rate for steepest descent,

$$ E _ {A} ( x _ {k + 1 } ) \leq \left ( 1 -4 \lambda _ {1} \lambda _ {n} ( \lambda _ {1} + \lambda _ {n} ) ^ {-2 } \right ) E _ {A} ( x _ {k} ) , $$

where $ E _ {A} $ denotes the $ A $-inner-product error $ \langle {( x - x ^ {*} ) , A ( x - x ^ {*} ) } \rangle $ and $ \lambda _ {1} $, $ \lambda _ {n} $ are the smallest and largest eigenvalues of the symmetric positive-definite matrix $ A $, becomes

$$ E _ {A} ( x _ {k + 1 } ) \leq ( \sin ^ {2} A ) E _ {A} ( x _ {k} ) . $$
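
The restatement rests on the elementary identity, valid for a symmetric positive-definite $ A $ with extreme eigenvalues $ \lambda _ {1} , \lambda _ {n} $,

$$ 1 - { \frac{4 \lambda _ {1} \lambda _ {n} }{( \lambda _ {1} + \lambda _ {n} ) ^ {2} } } = \left ( { \frac{\lambda _ {n} - \lambda _ {1} }{\lambda _ {n} + \lambda _ {1} } } \right ) ^ {2} = \sin ^ {2} A, $$

since for such $ A $ the infimum in (a3) equals $ ( \lambda _ {n} - \lambda _ {1} ) / ( \lambda _ {n} + \lambda _ {1} ) $.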

Thus, the Kantorovich error rate is trigonometric. Similar trigonometric convergence bounds hold for the conjugate gradient method and for related, more sophisticated algorithms [a4]. Even the basic Richardson method $ x _ {k + 1 } = x _ {k} + \alpha ( b - Ax _ {k} ) $ (cf. also Richardson extrapolation) may be seen to have optimal convergence rate $ \rho _ {\textrm{ opt } } = \sin A $. For further information, see [a5], [a6].
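
A minimal sketch of steepest descent with exact line search on a small symmetric positive-definite system, checking the per-step contraction of $ E _ {A} $ against $ \sin ^ {2} A $ (the matrix, right-hand side and step count are arbitrary illustrative choices):

```python
import numpy as np

# Steepest descent on Ax = b with exact line search; per step the
# A-inner-product error E_A should contract by at least sin^2 A.
rng = np.random.default_rng(1)
A = np.diag([1.0, 2.0, 3.0, 4.0])
b = rng.standard_normal(4)
x_star = np.linalg.solve(A, b)

lam1, lamn = 1.0, 4.0
sin2_A = ((lamn - lam1) / (lamn + lam1)) ** 2   # = sin^2 A for this A

def E(x):
    e = x - x_star
    return e @ (A @ e)

x = np.zeros(4)
for k in range(10):
    r = b - A @ x                      # residual = negative gradient
    alpha = (r @ r) / (r @ (A @ r))    # exact line-search step length
    x_new = x + alpha * r
    ratio = E(x_new) / E(x) if E(x) > 0 else 0.0
    print(f"step {k}: E ratio = {ratio:.4f} <= sin^2 A = {sin2_A:.4f}")
    x = x_new
```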

References

[a1] K. Gustafson, "Operator trigonometry" Linear Multilinear Alg. , 37 (1994) pp. 139–159
[a2] K. Gustafson, "Antieigenvalues" Linear Alg. & Its Appl. , 208/209 (1994) pp. 437–454
[a3] K. Gustafson, "Matrix trigonometry" Linear Alg. & Its Appl. , 217 (1995) pp. 117–140
[a4] K. Gustafson, "Operator trigonometry of iterative methods" Numerical Linear Alg. Appl. , to appear (1997)
[a5] K. Gustafson, "Lectures on computational fluid dynamics, mathematical physics, and linear algebra" , Kaigai & World Sci. (1996/7)
[a6] K. Gustafson, D. Rao, "Numerical range" , Springer (1997)
This article was adapted from an original article by K. Gustafson (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.