Adaptive quadrature

From Encyclopedia of Mathematics
Revision as of 13:25, 25 August 2014

It is a common task to compute an approximate value of an integral, the simplest case being

$$I(f)=\int\limits_a^bf(x)dx.$$

If the integrand $f$ is given by a Fortran-like expression, then one can use symbolic methods, interval methods or numerical methods. If the integrand $f$ is given only by function values, then only numerical methods of the form

$$I_n(f)=\phi_n(f(x_1),\dots,f(x_n))\tag{a1}$$

can be used. A method (a1) is called non-adaptive if $n$ and the knots $x_k$ are fixed in advance and do not depend on $f$. One might hope to learn about $f$ during the computation of $f(x_1),\dots,f(x_{k-1})$ in such a way that the next knot $x_k$ can be chosen to reduce the error. Therefore one studies adaptive methods, where the number $n$ depends on $f$ and the choice of $x_k$ may depend on the already computed function values. In mathematical statistics, adaptive information is known as sequential design and non-adaptive information as non-sequential design.
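As a concrete instance of a non-adaptive method (a1), consider the composite trapezoidal rule: its knots are equally spaced and fixed before any function value is seen. A minimal sketch in Python (the function and parameter names are illustrative, not from the article):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: a non-adaptive method of the form (a1).

    The n knots x_1, ..., x_n are fixed in advance (equally spaced on
    [a, b]) and do not depend on the integrand f.
    """
    if n < 2:
        raise ValueError("need at least 2 knots")
    h = (b - a) / (n - 1)
    knots = [a + i * h for i in range(n)]
    values = [f(x) for x in knots]  # the only information about f that is used
    return h * (values[0] / 2 + sum(values[1:-1]) + values[-1] / 2)
```

For example, `trapezoid(lambda x: x * x, 0.0, 1.0, 1001)` approximates $\int_0^1x^2dx=1/3$.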

The use of adaptive algorithms is widespread. Often these methods are based on a way to estimate the error in different subintervals of $[a,b]$. One then adds new knots in those intervals where a high error is expected. See [a1], [a2], [a4], [a6] for detailed information concerning these strategies.
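One widespread strategy of this kind compares a Simpson approximation on an interval with the sum of the Simpson approximations on its two halves; the discrepancy serves as a local error estimate, and only intervals with a large estimate are bisected further. A hedged sketch (the $1/15$ error-estimate factor and the tolerance-splitting rule are one standard heuristic, not the only choice):

```python
def simpson(f, a, b):
    """Simpson's rule on a single interval, using 3 function values."""
    c = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(c) + f(b))

def adaptive_simpson(f, a, b, tol=1e-8, depth=50):
    """Adaptive quadrature: the knots are chosen depending on f.

    Intervals whose local error estimate exceeds the tolerance are
    bisected, so new knots accumulate where f is hard to integrate.
    """
    c = (a + b) / 2
    whole = simpson(f, a, b)
    left, right = simpson(f, a, c), simpson(f, c, b)
    # Standard heuristic: |left + right - whole| / 15 estimates the error.
    if depth == 0 or abs(left + right - whole) < 15 * tol:
        return left + right + (left + right - whole) / 15
    return (adaptive_simpson(f, a, c, tol / 2, depth - 1)
            + adaptive_simpson(f, c, b, tol / 2, depth - 1))
```

Here $n$ and the knots are not known in advance: both depend on the computed values of $f$.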

A non-adaptive method provides an immediate decomposition for parallel computation. If adaptive information is superior to non-adaptive information, then an analysis of the trade-off between using adaptive or non-adaptive information on a parallel computer should be carried out.
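The decomposition is immediate because every knot of a non-adaptive rule is known before any evaluation: the interval can be split into independent pieces whose partial sums are computed concurrently with no coordination. A sketch using Python's standard library (the composite midpoint rule and the four-way split are arbitrary illustrative choices):

```python
from concurrent.futures import ThreadPoolExecutor

def midpoint(f, a, b, n):
    """Composite midpoint rule with n knots fixed in advance."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def parallel_midpoint(f, a, b, n, workers=4):
    """Split [a, b] into `workers` subintervals and sum the partial rules.

    No communication is needed during evaluation, precisely because the
    method is non-adaptive: all knots are determined up front.
    """
    h = (b - a) / workers
    pieces = [(a + i * h, a + (i + 1) * h) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda p: midpoint(f, p[0], p[1], n // workers), pieces)
    return sum(parts)
```

An adaptive method, by contrast, decides its next knot from values already computed, which serializes the computation unless the adaptive strategy itself is parallelized.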

To prove any bounds for the error $|I(f)-I_n(f)|$, one must have some a priori knowledge concerning $f$, such as $f\in F$, where $F$ is a class of integrable functions. Without such assumptions on $f$, the error of any method (a1) can be arbitrarily large. Much is known concerning the power of adaption; [a5] contains known results as well as new results of the authors. See also the survey [a3]. One can study the worst case as well as the average case and errors for randomized (Monte-Carlo) methods (cf. also Monte-Carlo method). It may happen that adaption does not help in the worst-case setting but helps significantly with respect to other settings.

Under certain assumptions, adaptive methods are not better than non-adaptive ones. N.S. Bakhvalov (1971) proved, see [a3], [a5], that adaption does not help in the worst-case setting if $F$ is a symmetric and convex set of functions. For some other natural classes adaption helps significantly. In particular, adaption significantly helps for certain classes of functions with singularities. The power of adaption depends critically on a priori knowledge concerning the problem being studied; even a seemingly small change in the assumptions can lead to a different answer.
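The benefit for singular integrands can be observed numerically. The experiment below (illustrative only; it is not from the article, and the tolerance and error estimator are arbitrary choices) applies adaptive bisection to $f(x)=\sqrt x$ on $[0,1]$, whose derivative is singular at $0$, and records every knot placed; the knots cluster near the singularity, where the local error estimates are largest.

```python
import math

def adaptive_count(f, a, b, tol, knots, depth=60):
    """Adaptive bisection of [a, b] that records every knot it places.

    The midpoint-vs-trapezoid discrepancy serves as a crude local error
    estimate (one simple choice among many); accepted intervals return
    the Simpson-like combination (trap + 2 * mid) / 3.
    """
    c = (a + b) / 2
    knots.append(c)
    trap = (b - a) * (f(a) + f(b)) / 2
    mid = (b - a) * f(c)
    if depth == 0 or abs(trap - mid) < tol:
        return (trap + 2 * mid) / 3
    return (adaptive_count(f, a, c, tol / 2, knots, depth - 1)
            + adaptive_count(f, c, b, tol / 2, knots, depth - 1))

knots = []
approx = adaptive_count(math.sqrt, 0.0, 1.0, 1e-7, knots)
# Far more knots land in [0, 0.01) than in [0.9, 1]: the algorithm has
# "learned" where f is hard to integrate. (Exact I(f) = 2/3.)
near = sum(k < 0.01 for k in knots)
far = sum(k > 0.9 for k in knots)
```

A non-adaptive rule with the same number of equally spaced knots spends most of its evaluations where $f$ is smooth and easy.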

References

[a1] P.J. Davis, P. Rabinowitz, "Methods of numerical integration", 2nd ed., Acad. Press (1984) MR0760629 Zbl 0537.65020
[a2] A.R. Krommer, C.W. Ueberhuber, "Numerical integration on advanced computer systems", Lecture Notes in Computer Science, 848, Springer (1994) MR1324342 Zbl 0825.65012 Zbl 0903.65019
[a3] E. Novak, "On the power of adaption", J. Complexity, 12 (1996) pp. 199–237 MR1408328 Zbl 0870.65042
[a4] R. Piessens, E. de Doncker-Kapenga, C.W. Überhuber, D.K. Kahaner, "Quadpack", Springer (1983) MR0712135 Zbl 0508.65005
[a5] J.F. Traub, G.W. Wasilkowski, H. Woźniakowski, "Information-based complexity", Acad. Press (1988) MR0958691 Zbl 0654.94004
[a6] D. Zwillinger, "Handbook of integration", Jones and Bartlett (1992) MR1174295 Zbl 0758.65017
How to Cite This Entry:
Adaptive quadrature. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Adaptive_quadrature&oldid=33134
This article was adapted from an original article by E. Novak (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article