Approximation of functions, direct and inverse theorems

Theorems and equalities establishing a connection between the difference-differential properties of the function to be approximated and the magnitude (and behaviour) of the error of approximation by various methods. Direct theorems give an estimate of the error of approximation of a function in terms of its smoothness properties (the existence of derivatives of a given order, the modulus of continuity of $ f $ or of some of its derivatives, etc.). In the case of best approximation by polynomials, direct theorems, together with their many generalizations and refinements, are also known as Jackson-type theorems [1] (see Jackson inequality and Jackson theorem). Inverse theorems characterize the difference-differential properties of a function in terms of the rapidity with which the errors of best, or of any other, approximation tend to zero. The problem of obtaining inverse theorems in the approximation of functions was first stated, and in some cases solved, by S.N. Bernstein [S.N. Bernshtein] [2]. A comparison of direct and inverse theorems sometimes allows one to characterize completely the class of functions having specific smoothness properties using, for instance, sequences of best approximations.

The connection between direct and inverse theorems is most simple in the periodic case. Let $ \widetilde{C} $ be the space of $ 2 \pi $-periodic continuous functions on the whole real axis with norm

$$ \| f \| _ {\widetilde{C} } = \ \max _ { t } \ | f (t) | ; $$

let

$$ E ( f , T _ {n} ) = \ \inf _ {\phi \in T _ {n} } \ \| f - \phi \| _ {\widetilde{C} } $$

be the best approximation of a function $ f $ in $ \widetilde{C} $ by the subspace $ T _ {n} $ of trigonometric polynomials of degree at most $ n $, let $ \omega ( f , \delta ) $ be the modulus of continuity of $ f $, and let $ \widetilde{C} {} ^ {r} $, $ r = 1 , 2, \dots $ be the set of functions in $ \widetilde{C} $ ($ \widetilde{C} {} ^ {0} = \widetilde{C} $) that are $ r $ times continuously differentiable on the whole real axis. A direct theorem states: If $ f \in \widetilde{C} {} ^ {r} $, then

$$ \tag{1 } E ( f , T _ {n-1} ) \leq \ \frac{M}{n ^ {r} } \omega \left ( f ^ { (r) } , \frac{1}{n} \right ) , $$

$$ n = 1 , 2, \dots; \ \ r = 0 , 1, \dots, $$

where the constant $ M $ does not depend on $ n $. A stronger assertion is: It is possible to indicate a sequence of linear methods $ U _ {n} $, $ n = 0 , 1, \dots $ associating a polynomial $ U _ {n} ( f , t ) \in T _ {n} $ to a function $ f (t) $ in $ \widetilde{C} $ and such that for $ f \in \widetilde{C} {} ^ {r} $ the error $ \| f - U _ {n} (f) \| _ {\widetilde{C} } $ is bounded by the right-hand side of (1). An inverse theorem states that for $ f \in \widetilde{C} $

$$ \tag{2 } \omega ( f , \delta ) \leq \ M \delta \sum _ { n=1 } ^ { {[ } 1 / \delta ] } E ( f , T _ {n-1} ) ,\ \ \delta > 0 , $$

where $ M $ is an absolute constant and $ [ 1 / \delta ] $ is the integer part of $ 1 / \delta $. And if for some positive integer $ r $ the series

$$ \sum _ { n=1 } ^ \infty n ^ {r-1} E ( f , T _ {n-1} ) $$

converges, then it follows that $ f \in \widetilde{C} {} ^ {r} $ and that, analogous to (2), $ \omega ( f ^ { (r) } , 1 / n ) $ can be estimated in terms of $ E ( f , T _ {n-1} ) $, $ n = 1 , 2 ,\dots $ (see [4], [8] and [12]). These estimates imply, in particular, that if

$$ E ( f , T _ {n-1} ) = \ O ( n ^ {- r - \alpha } ) ,\ \ r = 0 , 1, \dots; \ \ 0 < \alpha \leq 1 , $$

then $ f \in \widetilde{C} {} ^ {r} $ and $ f ^ { (r) } $ satisfies, for $ 0 < \alpha < 1 $, the Hölder inequality

$$ \tag{3 } | f ^ { (r) } ( t ^ \prime ) - f ^ { (r) } ( t ^ {\prime\prime} ) | \leq K \ | t ^ \prime - t ^ {\prime\prime} | ^ \alpha , $$

whereas for $ \alpha = 1 $ it satisfies the Zygmund inequality

$$ \tag{4 } \left | f ^ { (r) } ( t ^ \prime ) - 2 f ^ { (r) } \left ( \frac{t ^ \prime + t ^ {\prime\prime} }{2} \right ) + f ^ { (r) } ( t ^ {\prime\prime} ) \right | \leq K | t ^ \prime - t ^ {\prime\prime} | . $$

Denoting this class of functions by $ K \widetilde{H} {} ^ {r + \alpha } $, one obtains a constructive characterization of it: $ f \in K \widetilde{H} {} ^ {r + \alpha } $ if and only if

$$ E ( f , T _ {n-1} ) = \ O ( n ^ {- r - \alpha } ) . $$
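
For instance, in the case $ r = 0 $, $ 0 < \alpha < 1 $, both directions of this characterization can be traced directly from (1) and (2) (a sketch of the standard argument). If $ f $ satisfies (3) with $ r = 0 $, then $ \omega ( f , 1 / n ) \leq K n ^ {- \alpha } $ and (1) gives $ E ( f , T _ {n-1} ) \leq M K n ^ {- \alpha } $. Conversely, if $ E ( f , T _ {n-1} ) \leq A n ^ {- \alpha } $ for some constant $ A $, then (2) gives, for $ 0 < \delta \leq 1 $,

$$ \omega ( f , \delta ) \leq \ M A \delta \sum _ { n=1 } ^ { [ 1 / \delta ] } n ^ {- \alpha } \leq \ \frac{2 M A }{1 - \alpha } \delta [ 1 / \delta ] ^ {1 - \alpha } \leq \ \frac{2 M A }{1 - \alpha } \delta ^ \alpha , $$

so $ f $ satisfies (3) with $ r = 0 $ (with a suitable constant $ K $). For $ \alpha = 1 $ the sum in (2) produces an extra logarithmic factor; this is one reason why the end-point case is described by the Zygmund condition (4) rather than by a Lipschitz condition.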

A $ 2 \pi $-periodic function is infinitely differentiable on the whole real axis if and only if

$$ \lim\limits _ {n \rightarrow \infty } \ n ^ {r} E ( f , T _ {n-1} ) = 0 $$

for all $ r = 1 , 2 ,\dots $.
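
As a purely numerical illustration (a sketch, not part of the original article), the following Python fragment estimates the uniform error of the Fourier partial sums $ S _ {n} f $ of the Lipschitz function $ f (t) = | \sin t | $. Partial sums are not best approximations and in general lose a logarithmic factor, but for this particular function their uniform error already decays like $ 1 / n $, in agreement with the order given by (1) for $ r = 0 $. The helper name partial_sum_error, the grid size and the chosen degrees are illustrative assumptions.

import numpy as np

def partial_sum_error(f, n, m=4096):
    # Sample f on a uniform grid over [0, 2*pi), take the discrete Fourier
    # transform, keep the harmonics of order at most n (a trigonometric
    # polynomial of degree n), and measure the maximal error on the grid.
    t = 2 * np.pi * np.arange(m) / m
    c = np.fft.fft(f(t)) / m
    k = np.fft.fftfreq(m, d=1.0 / m)          # integer harmonic numbers
    s = np.fft.ifft(np.where(np.abs(k) <= n, c, 0.0) * m).real
    return np.max(np.abs(f(t) - s))

f = lambda t: np.abs(np.sin(t))               # Lipschitz: omega(f, d) <= d
for n in [4, 8, 16, 32, 64, 128]:
    err = partial_sum_error(f, n)
    print(f"n = {n:4d}   error = {err:.3e}   n * error = {n * err:.3f}")

The printed products $ n \cdot \textrm{error} $ stay bounded, reflecting the $ O ( 1 / n ) $ behaviour predicted above.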

Similar properties hold for the approximation of periodic functions in the metric of $ L _ {p} ( 0 , 2 \pi ) $ ($ 1 \leq p < \infty $), and also for (not necessarily periodic) functions defined on the whole real axis when they are approximated by entire functions of exponential type (finite degree) (see [7] and [8]). Direct and inverse theorems are known for $ f \in \widetilde{C} {} ^ {r} $ in which the modulus of smoothness $ \omega _ {k} ( f , \delta ) $ of order $ k = 1 , 2, \dots $ of $ f $ (or of some of its derivatives) is used as the difference-differential characteristic (see [4] and [8]).
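
Here the modulus of smoothness of order $ k $ is defined by

$$ \omega _ {k} ( f , \delta ) = \ \sup _ {0 < h \leq \delta } \ \max _ { t } \ \left | \sum _ { j=0 } ^ { k } (-1) ^ {k-j} \binom{k}{j} f ( t + j h ) \right | , $$

and the prototype of the corresponding direct theorems is the Jackson–Stechkin inequality $ E ( f , T _ {n-1} ) \leq M _ {k} \omega _ {k} ( f , 1 / n ) $, $ n = 1 , 2 ,\dots $, with a constant $ M _ {k} $ depending only on $ k $ (see [12]).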

The situation is different in the case of approximation on a finite interval. Let $ C = C [ a , b ] $ be the space of continuous functions on $ [ a , b ] $, with norm

$$ \| f \| _ {C} = \ \max _ {a \leq t \leq b } \ | f (t) | . $$

Let $ C ^ {r} = C ^ {r} [ a , b ] $ be the set of functions that are $ r $ times continuously differentiable on $ [ a , b ] $, $ C ^ {0} = C $, and let $ K H ^ {r + \alpha } [ a , b ] $ be the class of functions defined by the inequalities (3) and (4) with $ t ^ \prime , t ^ {\prime\prime} \in [ a , b ] $. For the best approximation,

$$ E ( f , A _ {n-1} ; a , b ) = \ \inf _ {p \in A _ {n-1} } \ \| f - p \| _ {C} ,\ \ n = 1 , 2, \dots $$

of a function $ f \in C ^ {r} $ by the subspace $ A _ {n-1} $ of algebraic polynomials of degree at most $ n - 1 $, one has an estimate of the form (1) in terms of the modulus of continuity of $ f ^ { (r) } $ on $ [ a , b ] $; but the converse, analogous to that of the periodic case (with an inequality of the form (2)), now holds only on intervals interior to $ ( a , b ) $. For instance, if

$$ \tag{5 } E ( f , A _ {n-1} ; a , b ) \leq M n ^ {- r - \alpha } , $$

$$ r = 0 , 1, \dots; \ \ 0 < \alpha < 1 , $$

then one can only assert that $ f $ belongs to the class $ K H ^ {r + \alpha } [ a _ {1} , b _ {1} ] $, defined by the inequality (3) (with $ 0 < \alpha < 1 $), on an interval $ [ a _ {1} , b _ {1} ] \subset ( a , b ) $; the constant $ K $ depends on $ a , a _ {1} , b _ {1} , b $, and can increase unboundedly as $ a _ {1} \rightarrow a $ and $ b _ {1} \rightarrow b $. There exist functions not belonging to $ K H ^ {r + \alpha } [ a , b ] $ for which nevertheless

$$ E ( f , A _ {n-1} ; \ a , b ) = O ( n ^ {- r - \alpha } ) . $$

For instance,

$$ E ( \sqrt {1 - t ^ {2} } ,\ A _ {n-1} ; - 1 , 1 ) < \ \frac{2}{\pi n } ,\ \ n = 1 , 2, \dots $$

although $ \sqrt {1 - t ^ {2} } \notin K H ^ \alpha $ on $ [ - 1 , 1 ] $ for every $ \alpha > 1 / 2 $. It turned out that algebraic polynomials which retain the best order of approximation of a function $ f \in C $ on the whole interval $ [ a , b ] $ can yield a substantially better approximation at the end points of the interval. This phenomenon was first discovered by S.M. Nikol'skii (see [3]). If, in particular, $ f \in K H ^ {r + \alpha } [ a , b ] $, then for any $ n > r $ there is a polynomial $ p _ {n} (t) \in A _ {n-1} $ such that

$$ \tag{6 } | f (t) - p _ {n} (t) | \leq M \left [ \frac{1}{n} \sqrt {( t - a ) ( b - t ) } + \frac{1}{n ^ {2} } \right ] ^ {r + \alpha } , $$

$$ a \leq t \leq b , $$

where the constant $ M $ depends neither on $ n $ nor on $ t $. In contrast to (5), this assertion has a converse: If for an $ f \in C $ there exists a sequence of polynomials $ p _ {n} (t) \in A _ {n-1} $ such that (6) is satisfied for some $ r = 0 , 1, \dots $ and $ 0 < \alpha < 1 $, then $ f \in K H ^ {r + \alpha } [ a , b ] $. Direct and inverse theorems for $ f \in C ^ {r} [ a , b ] $ are known, stated in terms of the modulus of continuity and moduli of smoothness (see [4] and [8]).
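
Note the interplay of the two terms in (6): at a fixed interior point $ t \in ( a , b ) $ the expression in square brackets is of order $ 1 / n $, so (6) reproduces the global order $ n ^ {- r - \alpha } $, whereas at the end points $ t = a $ and $ t = b $ only the term $ 1 / n ^ {2} $ remains, and (6) gives the improved order $ n ^ {- 2 ( r + \alpha ) } $.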

Direct theorems giving order estimates for the error of approximation in terms of difference-differential properties of the approximated function have been established for many concrete approximation methods (see [6], [8] and [9]); in particular, for splines of best approximation and for interpolation splines [10] (cf. Spline).
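
As a further numerical illustration (again a sketch, not part of the original article), one may test the bound for $ \sqrt {1 - t ^ {2} } $ given above with one concrete near-best method: interpolation at the Chebyshev nodes on $ [ - 1 , 1 ] $, whose uniform error exceeds the best approximation $ E ( f , A _ {n-1} ; - 1 , 1 ) $ only by a factor of order $ \mathop{\rm log} n $. The helper name cheb_interp_error, the grid size and the chosen degrees are illustrative assumptions.

import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_interp_error(f, n, m=5000):
    # Interpolate f at the n Chebyshev nodes of the first kind on [-1, 1]
    # (a polynomial of degree n - 1) and estimate the uniform error on a grid.
    nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
    coeffs = C.chebfit(nodes, f(nodes), n - 1)
    t = np.linspace(-1.0, 1.0, m)
    return np.max(np.abs(f(t) - C.chebval(t, coeffs)))

f = lambda t: np.sqrt(1.0 - t ** 2)
for n in [8, 16, 32, 64, 128, 256]:
    err = cheb_interp_error(f, n)
    print(f"n = {n:4d}   error = {err:.3e}   n * error = {n * err:.3f}")

For this particular function the interpolation error itself turns out to be of order $ 1 / n $, so the printed products $ n \cdot \textrm{error} $ remain bounded, in line with the $ O ( 1 / n ) $ behaviour of the best approximation.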

Direct and inverse theorems for approximation in a Hausdorff metric are known (see [13]). Some peculiarities arise here; in particular, the characterization of classes of functions by their best Hausdorff approximation is connected not only with the order of this approximation but also with the magnitude of the constant in the relevant inequality. For direct and inverse theorems in the multi-dimensional case, see Approximation of functions of several real variables.

References

[1] D. Jackson, "Ueber die Genauigkeit der Annäherung stetiger Funktionen durch ganze rationale Funktionen gegebenen Grades und trigonometrische Summen gegebener Ordnung" , Göttingen (1911) (Thesis)
[2] S.N. Bernshtein, "On the best approximation of continuous functions by polynomials of a given degree (1912)" , Collected works , 1 , Moscow (1952) pp. 11–104
[3] S.M. Nikol'skii, Izv. Akad. Nauk SSSR Ser. Mat. , 10 : 4 (1946) pp. 295–317
[4] V.K. Dzyadyk, "Introduction to the theory of uniform approximation of functions by polynomials" , Moscow (1977) (In Russian)
[5] N.P. Korneichuk, "Extremal problems in approximation theory" , Moscow (1976) (In Russian)
[6] V.M. Tikhomirov, "Some problems in approximation theory" , Moscow (1976) (In Russian)
[7] N.I. [N.I. Akhiezer] Achiezer, "Theory of approximation" , F. Ungar (1956) (Translated from Russian)
[8] A.F. Timan, "Theory of approximation of functions of a real variable" , Pergamon (1963) (Translated from Russian)
[9] P.P. Korovkin, "Linear operators and approximation theory" , Hindustan Publ. Corp. (1960) (Translated from Russian)
[10] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics" , Moscow (1976) (In Russian)
[11] I.K. Daugavet, "Introduction to the theory of approximation of functions" , Leningrad (1977) (In Russian)
[12] S.B. Stechkin, Izv. Akad. Nauk SSSR Ser. Mat. : 3 (1951) pp. 217–242
[13] B. Sendov, "Hausdorff approximations" , Sofia (1979)
[14] G.G. Lorentz, "Approximation of functions" , Holt, Rinehart & Winston (1966)

This article was adapted from an original article by N.P. Korneichuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.