Fredholm equation, numerical methods

From Encyclopedia of Mathematics

{{TEX|done}}
Approximation methods for solving Fredholm integral equations of the second kind, amounting to performing a finite number of operations on numbers.

Let

$$ \tag{1 }
\phi ( x) - \lambda \int\limits _ { D } K ( x, s) \phi ( s) \, ds = f ( x)
$$
  
be a Fredholm integral equation of the second kind, where $\lambda$ is a complex number, $f ( x)$ is a known vector function, $\phi ( x)$ is an unknown vector function, $K ( x, s)$ is the kernel of equation (1), and $D$ is a domain in some $m$-dimensional Euclidean space. Below it is assumed that $\lambda$ does not belong to the spectrum of the integral operator with kernel $K$ (that is, for the given $\lambda$ equation (1) has a unique solution in some class of functions corresponding to the degree of smoothness of $K$). The expression (1) naturally includes the case of a system of Fredholm equations.
 
For a general description of the problems of constructing and investigating numerical methods for solving Fredholm equations of the second kind one uses the language of [[Functional analysis|functional analysis]]. The integral equation (1) can be written as a linear operator equation
  

$$ \tag{2 }
( E - \lambda A) \phi = f,
$$
  
where $\phi$ is an unknown element of some Banach space $\Phi$, $f$ is a given element of $\Phi$ and $A$ is a bounded linear operator from $\Phi$ to $\Phi$. The operator $E - \lambda A$ is assumed to act invertibly from $\Phi$ to $\Phi$. The plan of any numerical method for solving (1) consists of the following. Let $\widetilde \Phi$ be a Banach space in some way associated with $\Phi$, but, generally speaking, different from it, and let $\widetilde{A}$ be a linear operator from $\widetilde \Phi$ to $\widetilde \Phi$. The equation
  

$$ \tag{3 }
( E - \lambda \widetilde{A} ) \widetilde \phi = \widetilde{f}
$$
  
is called an approximating equation for (2). The approximating operator $\widetilde{A}$ is usually chosen in such a way that either it may be possible to compute $\widetilde \phi$ directly from (3), or (more commonly) that one may find an approximate solution of (3) of the form
  

$$ \tag{4 }
\widetilde{\widetilde \phi } = \psi ( \widetilde{A} , \widetilde{f} )
$$
  
such that the right-hand side of (4) may be computed in a finite number of arithmetical operations. The expression $\psi ( \widetilde{A} , \widetilde{f} )$ denotes performing certain actions on $\widetilde{A}$ and $\widetilde{f}$; in particular, $\psi$ may be a simple operator function of $\widetilde{A}$ (for example, $\psi ( \widetilde{A} , \widetilde{f} ) = ( E - \lambda \widetilde{A} ) ^ {-1} \widetilde{f}$). The choice of $\widetilde{A}$, $\psi$ and $\widetilde{f}$, and also of the space $\widetilde \Phi$, is subject to the requirement that $\widetilde{\widetilde \phi }$ and the exact solution $\phi$ of (1), (2) be (in some sense) close, and is not, generally speaking, unique. Similarly, for a concrete numerical method (a concrete approximation formula for $A$) the choice of $\widetilde \Phi$ is not unique. The concrete choice of $\Phi$ and $\widetilde \Phi$ is dictated by the requirements of "closeness" of $\widetilde{\widetilde \phi }$ and $\phi$, and also by the convenience of the investigation. The specific nature of numerical methods for solving Fredholm equations of the second kind is determined mainly by one or another concrete approximation of $A$ by means of $\widetilde{A}$. So usually an approximation method also gives its name to one or another numerical method for solving (1).
  
After $\Phi$, $\widetilde \Phi$, $\widetilde{A}$, and $\widetilde{f}$ have been chosen, the closeness of $\phi$ and $\widetilde \phi$ (or $\widetilde{\widetilde \phi }$) is established using theorems from the general theory of approximation methods for solving operator equations.
  
In the case $\Phi = \widetilde \Phi$, in order to establish that $\widetilde \phi$ and $\phi$ are close, it is sufficient to prove that $\| \widetilde{A} - A \|$ is small. For a suitable choice of $\Phi$, this can be successfully done for most classical methods for approximately solving Fredholm equations of the second kind.
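The smallness requirement can be made quantitative by a standard perturbation bound (a sketch, stated here under the additional assumptions that $\widetilde{f} = f$ and that $| \lambda | \, \| ( E - \lambda A ) ^ {-1} \| \, \| \widetilde{A} - A \| < 1$, which also guarantees the invertibility of $E - \lambda \widetilde{A}$):

$$
\| \widetilde \phi - \phi \| \leq \
\frac{| \lambda | \, \| ( E - \lambda A ) ^ {-1} \| ^ {2} \, \| \widetilde{A} - A \| \, \| f \| }{1 - | \lambda | \, \| ( E - \lambda A ) ^ {-1} \| \, \| \widetilde{A} - A \| } .
$$

This follows from the identity $E - \lambda \widetilde{A} = ( E - \lambda A) ( E - \lambda ( E - \lambda A) ^ {-1} ( \widetilde{A} - A))$ and a Neumann-series estimate for the second factor.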
  
In the majority of concrete methods the solution of equation (3) can easily be reduced to the solution of a system of linear algebraic equations; in order to construct $\psi$ in (4), one can make use of some algorithm for solving systems of linear algebraic equations (see [[Linear algebra, numerical methods in|Linear algebra, numerical methods in]]).
  
 
The principal methods for constructing approximating operators are:
  
Quadrature methods and generalizations of them. These are the most-widespread methods for approximating the integral operator $A$ in (1). The basis of these methods, which can be applied in the case of continuous $K ( x, s)$, $\phi ( s)$ and $f ( x)$, consists in replacing the integral (with respect to $s$) in (1) by some [[Quadrature formula|quadrature formula]] with respect to a grid of nodes $\{ s _ {i} \} \subset D$.
  
 
In this case
  

$$ \tag{5 }
\widetilde{A} y = \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} K ( x, s _ {i} ) y ( s _ {i} ),
$$
  
where $a _ {i} ^ {(N)}$ are the coefficients of the quadrature formula.
  
The approximating equation (3) can be regarded as an operator equation in the same space as the basic equation (1) (for example, in the space $C ( D)$ of continuous vector functions on $D$). In this case it will have the form
  

$$ \tag{6 }
\widetilde \phi ( x) - \lambda \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} K ( x, s _ {i} ) \widetilde \phi ( s _ {i} ) = f ( x).
$$
  
Equation (6) can be reduced to a system of linear algebraic equations in $\widetilde \phi ( s _ {i} )$, $i = 1 \dots N$:
  

$$ \tag{7 }
\widetilde \phi ( s _ {j} ) - \lambda \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} K ( s _ {j} , s _ {i} ) \widetilde \phi ( s _ {i} ) = f ( s _ {j} ),\ \ j = 1 \dots N.
$$
  
The solution (exact or approximate) of the system (7) gives $\widetilde{\widetilde \phi }$.
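The scheme (5)–(7) can be sketched in a few lines. The test data below are hypothetical, chosen only for illustration: the kernel $K(x, s) = xs$ on $D = [0, 1]$, $\lambda = 1$ and $f(x) = \tfrac{2}{3} x$, for which the exact solution is $\phi(x) = x$; a Gauss–Legendre rule supplies the nodes $s_i$ and coefficients $a_i^{(N)}$.

```python
import numpy as np

def quadrature_solve(kernel, f, lam, a, b, n):
    """Quadrature (Nystrom) method for
        phi(x) - lam * int_a^b K(x, s) phi(s) ds = f(x).
    The integral is replaced by an n-point Gauss-Legendre rule and the
    resulting linear algebraic system (7) is solved for phi(s_j)."""
    # nodes and weights on [-1, 1], mapped to [a, b]
    t, w = np.polynomial.legendre.leggauss(n)
    s = 0.5 * (b - a) * t + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    # system matrix: delta_{ji} - lam * a_i^{(N)} * K(s_j, s_i)
    Kmat = kernel(s[:, None], s[None, :])
    M = np.eye(n) - lam * Kmat * w[None, :]
    return s, np.linalg.solve(M, f(s))

# Hypothetical test problem: K(x, s) = x*s, lam = 1, f(x) = (2/3) x,
# whose exact solution is phi(x) = x.
s, phi = quadrature_solve(lambda x, t: x * t,
                          lambda x: (2.0 / 3.0) * x,
                          1.0, 0.0, 1.0, 8)
err = np.max(np.abs(phi - s))  # nodal error against the exact solution
```

Because the integrand here is a polynomial, the Gauss rule is exact and the nodal values reproduce the exact solution up to rounding.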
  
 
Sometimes one takes the equation (7) itself as approximating (1), and then (7) corresponds to equation (3).
  
In this approach, the space $\widetilde \Phi$ is not the same as $\Phi$. The space $\widetilde \Phi$ may, for example, be identified with the quotient space of $\Phi$ by the subspace of functions in $\Phi$ that vanish at the points $\{ s _ {i} \} _ {i = 1 } ^ {N}$. The method (5) admits various generalizations, which are convenient to use, for example, in the case of a discontinuous $K ( x, s)$. In these generalized methods the operator $\widetilde{A}$ has the form
  

$$ \tag{5'}
\widetilde{A} y = \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} ( x) y ( s _ {i} ),
$$
  
where $a _ {i} ^ {(N)} ( x)$ are functions associated with the kernel $K ( x, s)$.
  
 
See also [[Quadrature-sum method|Quadrature-sum method]].
  
Methods for replacing the kernel by an approximate one. These make use of an approximating operator $\widetilde{A}$ of the form
  

$$
\widetilde{A} y = \int\limits _ { D } \widetilde{K} ( x, s) y ( s) \, ds,
$$
  
where $\widetilde{K}$ is some function close to $K$, but simpler. Most often $\widetilde{K}$ is a [[Degenerate kernel|degenerate kernel]], that is,
  

$$ \tag{8 }
\widetilde{K} ( x, s) = \sum _ {i = 1 } ^ { N } a _ {i} ( x) b _ {i} ( s).
$$
  
 
In this case (3) is an integral Fredholm equation with a degenerate kernel (cf. also [[Degenerate kernels, method of|Degenerate kernels, method of]]). Its solution reduces to solving a system of linear algebraic equations. However, the matrix entries of the system of equations obtained will be expressed as integrals of known functions, and for a numerical solution of them one must, generally speaking, approximate them by quadrature sums.
  
There are many ways of making a concrete choice of $\widetilde{K}$ by formula (8) (see, for example, [[Strip method (integral equations)|Strip method (integral equations)]]). The theoretical investigation of the closeness of solutions of equations (3) and (1) is usually significantly simpler in these methods than, for example, in quadrature methods, since in most cases one can set $\widetilde \Phi = \Phi$ and the choice of $\Phi$ is essentially determined directly from the way the problem is posed. If $\widetilde{K}$ and $K$ are close, then, as a rule, $A$ and $\widetilde{A}$ are close in the norm of $\Phi$. However, the practical realization of these methods is in most cases significantly more laborious than that of quadrature methods and their generalizations.
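For a kernel already in the degenerate form (8), substituting $\phi = f + \lambda \sum_i c_i a_i$ reduces (3) to an $N \times N$ linear system for the coefficients $c_i$; the matrix entries are integrals of known functions, approximated here by quadrature sums. A minimal sketch, again on the hypothetical test problem $K(x, s) = xs$ (degenerate with $N = 1$), $\lambda = 1$, $f(x) = \tfrac{2}{3} x$, exact solution $\phi(x) = x$:

```python
import numpy as np

def degenerate_kernel_solve(a_funcs, b_funcs, f, lam, lo, hi, nquad=64):
    """Degenerate-kernel method for K(x, s) = sum_i a_i(x) b_i(s):
    phi = f + lam * sum_i c_i a_i, where the c_i solve
        c_i - lam * sum_j (int b_i a_j ds) c_j = int b_i f ds.
    The integrals are approximated by Gauss-Legendre quadrature sums."""
    t, w = np.polynomial.legendre.leggauss(nquad)
    s = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)
    w = 0.5 * (hi - lo) * w
    N = len(a_funcs)
    B = np.array([[np.sum(w * b_funcs[i](s) * a_funcs[j](s))
                   for j in range(N)] for i in range(N)])
    g = np.array([np.sum(w * b_funcs[i](s) * f(s)) for i in range(N)])
    c = np.linalg.solve(np.eye(N) - lam * B, g)
    return lambda x: f(x) + lam * sum(c[i] * a_funcs[i](x) for i in range(N))

# Illustrative problem: K(x, s) = x*s on [0, 1] with a_1(x) = x, b_1(s) = s.
phi = degenerate_kernel_solve([lambda x: x], [lambda s: s],
                              lambda x: (2.0 / 3.0) * x, 1.0, 0.0, 1.0)
```

Unlike the quadrature method, the result is a function defined for all $x$, not only at grid nodes.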
  
Projection methods lead to an approximating equation (3) of the form $  \widetilde \phi  - \lambda PA \widetilde \phi  = Pf $, where $  \widetilde \Phi  $ is a subspace of $  \Phi $ and $  P $ is the projection onto this subspace. Freedom in the choice of $  \Phi $, $  \widetilde \Phi  $ and even $  P $ itself leads to a large number of concrete projection methods for solving Fredholm integral equations of the second kind. A typical example of a projection method is the [[Galerkin method|Galerkin method]]. In order to obtain the concrete computational formulas of this method, one must (if possible) treat the integral equation (1) as an operator equation in the Hilbert space $  L _ {2} ( D) $ of functions that are square-integrable on $  D $, and take as $  P $ the orthogonal projection that associates to a function in $  L _ {2} ( D) $ the $  N $-term segment of its Fourier series with respect to some complete orthonormal system of functions $  \{ \psi _ {k} \} $ in $  L _ {2} ( D) $.
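As a concrete illustration, the following sketch applies the Galerkin method to a simple test equation on $ [ 0, 1] $. All concrete data (the kernel $ K ( x, s) = xs $, $ \lambda = 1 $, $ f ( x) = 2x/3 $, chosen so that the exact solution is $ \phi ( x) = x $, and shifted Legendre polynomials as the orthonormal system $ \{ \psi _ {k} \} $) are illustrative choices, not taken from the article.

```python
import numpy as np

# Galerkin sketch for phi - lam*A*phi = f in L2(0,1).
# Illustrative test problem (not from the article): K(x,s) = x*s, lam = 1,
# f(x) = (2/3)*x, so the exact solution is phi(x) = x.
lam = 1.0
K = lambda x, s: x * s
f = lambda x: (2.0 / 3.0) * x

def psi(k, x):
    # Orthonormal system on [0,1]: shifted, normalized Legendre polynomials
    c = np.zeros(k + 1); c[k] = 1.0
    return np.sqrt(2 * k + 1) * np.polynomial.legendre.legval(2 * x - 1, c)

# Gauss-Legendre quadrature on [0,1] for all the inner products
t, w = np.polynomial.legendre.leggauss(30)
x = 0.5 * (t + 1); w = 0.5 * w

N = 4
P = np.array([psi(k, x) for k in range(N)])           # P[k, m] = psi_k(x_m)
G = (P * w) @ K(x[:, None], x[None, :]) @ (P * w).T   # G_ij = <A psi_j, psi_i>
b = (P * w) @ f(x)                                    # b_i  = <f, psi_i>

c = np.linalg.solve(np.eye(N) - lam * G, b)   # Fourier coefficients of phi_N
phi = c @ P                                   # phi_N at the quadrature nodes
err = np.max(np.abs(phi - x))                 # compare with exact phi(x) = x
```

Here the exact solution lies in the span of the first two basis functions, so the Galerkin approximation is exact up to rounding.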
  
 
In another interpretation, the Galerkin method is equivalent to that of replacing the kernel by a degenerate one of the form

$$
\widetilde{K}  ( x, s)  = \
\sum _ {i = 1 } ^ { N }
\int\limits _ { D }
K ( t, s) \psi _ {i} ( t) \
dt \cdot \psi _ {i} ( x)
$$
  
 
with a simultaneous replacement of the right-hand side by a function close to it:

$$
\widetilde{f}  ( x)  = \
\sum _ {i = 1 } ^ { N }
\int\limits _ { D }
f ( t) \psi _ {i} ( t) \
dt \cdot \psi _ {i} ( x).
$$
  
As another example of projection methods one can take the [[Collocation method|collocation method]]. If $  K ( x, s) $ and $  f ( x) $ are continuous functions, then (1) can be regarded as an operator equation (2) in $  C ( D) $, the space of continuous functions on $  D $. The collocation method corresponds to a choice of $  P $ in the form
  
$$
Py  = Z ( y),\ \
y \in C ( D),
$$
  
where $  Z ( y) $ is the Lagrange interpolation polynomial of $  y $ with respect to some grid of nodes in $  D $ (cf. also [[Lagrange interpolation formula|Lagrange interpolation formula]]).
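A minimal sketch of polynomial collocation on the same illustrative test problem as above ($ K ( x, s) = xs $, $ \lambda = 1 $, $ f ( x) = 2x/3 $, exact solution $ \phi ( x) = x $; all data are assumptions for illustration). The interpolation polynomial is represented in the monomial basis, and for this particular kernel the moments $ \int _ {0} ^ {1} K ( x _ {j} , s) s ^ {i}  ds $ are available in closed form.

```python
import numpy as np

# Polynomial collocation sketch: seek phi_N(x) = sum_i c_i * x**i with
#   phi_N(x_j) - lam * int_0^1 K(x_j, s) phi_N(s) ds = f(x_j)
# at N collocation nodes x_j. Illustrative problem (not from the article):
# K(x,s) = x*s, lam = 1, f(x) = (2/3)*x, exact solution phi(x) = x.
lam = 1.0
f = lambda x: (2.0 / 3.0) * x

N = 4
x = 0.5 * (np.cos(np.pi * (np.arange(N) + 0.5) / N) + 1)  # Chebyshev nodes in (0,1)

V = x[:, None] ** np.arange(N)[None, :]       # V[j, i] = x_j**i
# For K(x,s) = x*s:  int_0^1 K(x_j, s) * s**i ds = x_j / (i + 2)
Aq = x[:, None] / (np.arange(N) + 2)[None, :]
c = np.linalg.solve(V - lam * Aq, f(x))       # collocation system

xs = np.linspace(0.0, 1.0, 101)
phi = (xs[:, None] ** np.arange(N)[None, :]) @ c
err = np.max(np.abs(phi - xs))                # exact solution is phi(x) = x
```

Since the exact solution is itself a polynomial of degree below $ N $, collocation reproduces it up to rounding.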
  
 
In the practical realization of most projection methods applied to a Fredholm integral equation of the second kind, difficulties arise from the additional approximation of the integrals involved; this usually makes these methods (just like the methods that replace the kernel by an approximate one) more laborious than typical quadrature methods. However, this assertion is relative, since the actual classification of the methods is a matter of convention. For example, the collocation method can be interpreted both as a projection method and as a generalized quadrature method.
  
Methods for solving the approximating equations. Usually, the solution of an approximating equation (3) reduces to solving a system of linear algebraic equations. One can use methods of successive approximations (the simplest of them) for relatively small $  | \lambda | $, and with necessary modifications (for example, in the method of averaging functional corrections) one can use them for all $  \lambda $ not belonging to the spectrum of $  A $.
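A sketch of the simplest successive-approximation iteration, run on a quadrature discretization of an illustrative test problem ($ K ( x, s) = xs $ on $ [ 0, 1] $, $ \lambda = 1/2 $, exact solution $ \phi ( x) = x $, hence $ f ( x) = 5x/6 $; all data are assumptions, not from the article).

```python
import numpy as np

# Successive approximations phi_{k+1} = f + lam * A phi_k for a quadrature
# discretization of (1); convergent when |lam| * ||A|| < 1.
# Illustrative problem (not from the article): K(x,s) = x*s on [0,1],
# lam = 1/2, exact solution phi(x) = x, hence f(x) = (5/6)*x.
lam = 0.5
K = lambda x, s: x * s
f = lambda x: (5.0 / 6.0) * x

t, w = np.polynomial.legendre.leggauss(10)   # Gauss rule mapped to [0,1]
s = 0.5 * (t + 1); a = 0.5 * w
Amat = K(s[:, None], s[None, :]) * a[None, :]

phi = np.zeros_like(s)
for _ in range(50):                          # fixed-point iteration
    phi = f(s) + lam * Amat @ phi

err = np.max(np.abs(phi - s))                # exact solution is phi(x) = x
```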
  
Obtaining a sequence of more accurate approximations. In the theoretical investigation of some numerical method one succeeds in most cases in establishing only the plain fact that the approximations $  \widetilde \phi  $ and $  \widetilde{\widetilde \phi  } $ converge to a solution of (1) and (2) as $  \widetilde{A}  $ converges to $  A $ in some sense; it is extremely rare that one manages to obtain effective estimates of the closeness of $  \widetilde \phi  $ and $  \widetilde{\widetilde \phi  } $ to a solution of (1) and (2). To verify the accuracy, one uses in practice a sequence of approximate solutions of (3) with an increasingly-accurate operator $  \widetilde{A}  $. In the simplest version of this verification one compares two neighbouring terms in this sequence of approximate solutions and stops obtaining further approximations when the two preceding ones agree to a given accuracy. The unwieldiness of directly obtaining the terms of such a sequence has been partially surmounted in a variety of algorithms for iteratively obtaining more-accurate approximate solutions. A typical example of such algorithms is the following. If the sequence $  \{ A _ {n} \} $ of approximating operators converges in the norm of some Banach function space $  \Phi $ to $  A $ in (2), then the iterative procedure

$$ \tag{9 }
( E - \lambda A _ {0} )
\phi _ {n + 1 }  = \
\lambda ( A _ {n + 1 }  - A _ {0} )
\phi _ {n} + f _ {n}
$$

gives a sequence of approximations $  \phi _ {n} $ that converges to $  \phi $, if $  f _ {n} $ converges to $  f $ in norm and if $  A _ {0} $ is sufficiently close to $  A $ in norm. In using the sequence (9) one requires only one inversion of an operator. The closer $  A _ {0} $ is to $  A $ in norm, the better the convergence. It is convenient, for example, to choose operators of the form (5). One can in this case establish that $  \phi _ {n} $ converges uniformly to $  \phi $ on $  D $, provided that the kernel satisfies certain requirements.
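A matrix sketch of procedure (9), with all data illustrative rather than taken from the article: $ A $ stands for a fixed fine discretization of the integral operator, $ A _ {0} $ for a crude (here rank-reduced) approximation of it, and $ A _ {n + 1} = A $, $ f _ {n} = f $ for all $ n $. Only $ E - \lambda A _ {0} $ ever has to be inverted.

```python
import numpy as np

# Iteration (9): (E - lam*A0) phi_{n+1} = lam*(A_{n+1} - A0) phi_n + f_n.
# Illustrative matrix data (not from the article); here A_{n+1} = A and
# f_n = f are fixed, and A0 is a cheap rank-5 approximation of A.
rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N)) / N      # stands for a discretized operator
lam = 0.5
f = rng.standard_normal(N)

U, sv, Vt = np.linalg.svd(A)
A0 = (U[:, :5] * sv[:5]) @ Vt[:5]        # best rank-5 approximation of A

M = np.linalg.inv(np.eye(N) - lam * A0)  # the only inverse that is needed
phi = np.zeros(N)
for _ in range(60):
    phi = M @ (lam * (A - A0) @ phi + f)

exact = np.linalg.solve(np.eye(N) - lam * A, f)
err = np.max(np.abs(phi - exact))
```

The iteration contracts at a rate governed by $ | \lambda | \cdot \| ( E - \lambda A _ {0} ) ^ {-1} ( A - A _ {0} ) \| $, so the better $ A _ {0} $ approximates $ A $, the faster the convergence.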
  
 
====Comments====
For some concrete methods, the requirement that $  \| A - \widetilde{A}  \| $ be small is too strong. Instead, one typically has that (2) is approximated by a sequence $  ( E - \lambda A _ {n} ) \phi _ {n} = f _ {n} $, where $  ( A _ {n} ) $ converges pointwise to $  A $ and is collectively compact (i.e. for all bounded sets $  B $, $  \overline{ {\cup _ {n \in \mathbf N }  A _ {n} ( B) }} $ is compact). Under these assumptions one can prove convergence of the approximate solutions $  \phi _ {n} $ and give error bounds. An example where this theory applies is the Nyström method, which consists of solving (7) and then using (6) for extrapolation to all of $  D $. For the theory of collectively-compact operators and its use in connection with integral equations see [[#References|[a1]]].
  
 
In collocation methods one can also use interpolation functions other than polynomials, e.g. splines ("spline collocation"). Because of the better convergence properties of splines (cf. [[Spline approximation|Spline approximation]]), spline collocation is superior to polynomial collocation.
  
A variety of concrete numerical methods for Fredholm equations of the second kind are described in [[#References|[a2]]] (with Fortran programs), [[#References|[a3]]] and [[#References|[a4]]]. About numerical methods for solving Fredholm equations if $  \lambda \neq 0 $ is an eigen value, see [[#References|[a5]]], pp. 368ff.
  
 
The article above discusses Fredholm integral equations of the second kind only. However, there is also a theory for Fredholm integral equations of the first kind.
 
These are equations of the form

$$ \tag{a1 }
( A \phi ) ( x)  = \int\limits _ { D } K ( x , s ) \phi ( s)  d s  =  f ( x) .
$$
They are usually ill-posed in the sense that their solution might not exist, might not be unique, and will (if it exists) in general depend on $  f $ in a discontinuous way (see [[Ill-posed problems|Ill-posed problems]]). The notion of solution one commonly uses is the "best-approximate solution" $  A  ^ {+} f $, where $  A  ^ {+} $ is the Moore–Penrose generalized inverse of $  A $ (see [[Fredholm equation|Fredholm equation]]). $  A  ^ {+} $ is in general unbounded. Thus, a "naive" numerical approximation of (a1), e.g. by discretization, will lead to ill-conditioned linear systems. For solving (a1) numerically one uses "regularization methods" (see [[Regularization method|Regularization method]]). In general, regularization for (a1) is the replacement of the unbounded operator $  A  ^ {+} $ by a parameter-dependent family of bounded linear operators $  R _  \alpha  $ ($  \alpha > 0 $). The "regularized solution" computed with a specific "regularization operator" $  R _  \alpha  $ and noisy data $  f _  \delta  $ ($  \| f - f _  \delta  \| \leq  \delta $) is then $  \phi _  \alpha  ^  \delta  = R _  \alpha  f _  \delta  $. The most popular choice for $  R _  \alpha  $ is $  ( \alpha I + A  ^ {*} A )  ^ {-1} A  ^ {*} $ (Tikhonov regularization). Other methods are iterated Tikhonov regularization, iterative methods like Landweber iteration, and truncated singular value expansion, where
+
$$
\phi _  \alpha  ^  \delta  = \
\sum _ {\begin{array}{c}
n = 1 \\
\sigma _ {n} > \alpha
\end{array}
} ^  \infty

\frac{\langle  f _  \delta  , v _ {n} \rangle }{\sigma _ {n} }

u _ {n} .
$$
  
Here, $  \langle  \cdot , \cdot \rangle $ denotes the [[Inner product|inner product]] in $  L _ {2} $, and $  \langle  \sigma _ {n} ;  u _ {n} , v _ {n} \rangle $ is a singular system for $  A $ (i.e. the $  \sigma _ {n}  ^ {2} $ are the eigen values of $  A  ^ {*} A $, $  \{ u _ {n} \} $ is a corresponding orthonormal set of eigen vectors, and $  \sigma _ {n} v _ {n} = A u _ {n} $).
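A sketch of truncated singular value expansion and, for comparison, Tikhonov regularization on an illustrative discretized problem. All data are synthetic assumptions: $ A $ is a made-up ill-conditioned matrix with singular values $ 2 ^ {-n} $, and the exact solution is chosen "smooth" in the sense of decaying coefficients in the singular basis.

```python
import numpy as np

# Regularization sketch for a discretized first-kind equation A phi = f.
# All data are illustrative: A is a synthetic ill-conditioned matrix with
# singular values 2**-n, and phi_true has decaying coefficients in the
# singular basis (a discrete smoothness assumption).
rng = np.random.default_rng(1)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 2.0 ** -np.arange(n)
A = (U * sigma) @ V.T                     # SVD: A = U diag(sigma) V^T

phi_true = V @ (2.0 ** -np.arange(n))     # "smooth" exact solution
delta = 1e-6
f_delta = A @ phi_true + delta * rng.standard_normal(n)   # noisy data

def tsvd(A, f, alpha):
    # Truncated singular value expansion: keep only sigma_n > alpha
    U, s, Vt = np.linalg.svd(A)
    keep = s > alpha
    return Vt[keep].T @ ((U[:, keep].T @ f) / s[keep])

def tikhonov(A, f, alpha):
    # Tikhonov regularization: (alpha*I + A^T A)^{-1} A^T f
    return np.linalg.solve(alpha * np.eye(A.shape[1]) + A.T @ A, A.T @ f)

err_naive = np.linalg.norm(np.linalg.solve(A, f_delta) - phi_true)
err_tsvd = np.linalg.norm(tsvd(A, f_delta, 1e-4) - phi_true)
err_tik = np.linalg.norm(tikhonov(A, f_delta, 1e-8) - phi_true)
```

The naive solve amplifies the data noise by the reciprocal of the smallest singular value, while both regularized solutions balance truncation (or filtering) bias against noise amplification.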
  
Convergence rates for regularization methods depend on a priori smoothness assumptions on the data, which also influence the choice of the regularization parameter $  \alpha $. For these questions and details about regularization methods see e.g. [[#References|[a6]]], [[#References|[a7]]].
  
 
For numerical realization, regularization methods have to be combined with projection methods; the latter can also be used directly for regularization (see [[#References|[a8]]]).
 

Revision as of 19:40, 5 June 2020


Approximation methods for solving Fredholm integral equations of the second kind, amounting to performing a finite number of operations on numbers.

Let

$$ \tag{1 } \phi ( x) - \lambda \int\limits _ { D } K ( x, s) \phi ( s) \ ds = f ( x) $$

be a Fredholm integral equation of the second kind, where $ \lambda $ is a complex number, $ f ( x) $ is a known vector function, $ \phi ( x) $ is an unknown vector function, $ K ( x, s) $ is the kernel of equation (1), and $ D $ is a domain in some $ m $-dimensional Euclidean space. Below it is assumed that $ \lambda $ does not belong to the spectrum of the integral operator with kernel $ K $ (that is, for the given $ \lambda $ equation (1) has a unique solution in some class of functions corresponding to the degree of smoothness of $ K $). The expression (1) naturally includes the case of a system of Fredholm equations.

For a general description of the problems of constructing and investigating numerical methods for solving Fredholm equations of the second kind one uses the language of functional analysis. The integral equation (1) can be written as a linear operator equation

$$ \tag{2 } ( E - \lambda A) \phi = f, $$

where $ \phi $ is an unknown element of some Banach space $ \Phi $, $ f $ is a given element of $ \Phi $ and $ A $ is a bounded linear operator from $ \Phi $ to $ \Phi $. The operator $ E - \lambda A $ is assumed to act invertibly from $ \Phi $ to $ \Phi $. The plan of any numerical method for solving (1) consists of the following. Let $ \widetilde \Phi $ be a Banach space in some way associated with $ \Phi $, but, generally speaking, different from it, and let $ \widetilde{A} $ be a linear operator from $ \widetilde \Phi $ to $ \widetilde \Phi $. The equation

$$ \tag{3 } ( E - \lambda \widetilde{A} ) \widetilde \phi = \widetilde{f} $$

is called an approximating equation for (2). The approximating operator $ \widetilde{A} $ is usually chosen in such a way that either it may be possible to compute $ \widetilde \phi $ directly from (3), or (more commonly) that one may find an approximate solution of (3) of the form

$$ \tag{4 } \widetilde{\widetilde \phi  } = \psi ( \widetilde{A} , \widetilde{f} ) $$

such that the right-hand side of (4) may be computed in a finite number of arithmetical operations. The expression $ \psi ( \widetilde{A} , \widetilde{f} ) $ denotes performing certain actions on $ \widetilde{A} $ and $ \widetilde{f} $; in particular, $ \psi $ may be a simple operator function of $ \widetilde{A} $ (for example, $ \psi ( \widetilde{A} , \widetilde{f} ) = ( E - \lambda \widetilde{A} ) ^ {-1} \widetilde{f} $). The choice of $ \widetilde{A} $, $ \psi $ and $ \widetilde{f} $, and also of the space $ \widetilde \Phi $, is subject to the requirement that $ \widetilde{\widetilde \phi  } $ and the exact solution $ \phi $ of (1), (2) be (in some sense) close, and is not, generally speaking, unique. Similarly, for a concrete numerical method (a concrete approximation formula for $ A $) the choice of $ \widetilde \Phi $ is not unique. The concrete choice of $ \Phi $ and $ \widetilde \Phi $ is dictated by the requirements of "closeness" of $ \widetilde{\widetilde \phi  } $ and $ \phi $, and also by the convenience of the investigation. The specific nature of numerical methods for solving Fredholm equations of the second kind is determined mainly by one or another concrete approximation of $ A $ by means of $ \widetilde{A} $. So usually an approximation method also gives its name to one or another numerical method for solving (1).

After $ \Phi $, $ \widetilde \Phi $, $ \widetilde{A} $, and $ \widetilde{f} $ have been chosen, the closeness of $ \phi $ and $ \widetilde \phi $ (or $ \widetilde{\widetilde \phi  } $) is established using theorems from the general theory of approximation methods for solving operator equations.

In the case $ \Phi = \widetilde \Phi $, in order to establish that $ \widetilde \phi $ and $ \phi $ are close, it is sufficient to prove that $ \| \widetilde{A} - A \| $ is small. For a suitable choice of $ \Phi $, this can be successfully done for most classical methods for approximately solving Fredholm equations of the second kind.

In the majority of concrete methods the solution of equation (3) can easily be reduced to the solution of a system of linear algebraic equations; in order to construct $ \psi $ in (4), one can make use of some algorithm for solving systems of linear algebraic equations (see Linear algebra, numerical methods in).

The principal methods for constructing approximating operators are:

Quadrature methods and generalizations of them. These are the most-widespread methods for approximating the integral operator $ A $ in (1). The basis of these methods, which can be applied in the case of continuous $ K ( x, s) $, $ \phi ( s) $ and $ f ( x) $, consists in replacing the integral (with respect to $ s $) in (1) by some quadrature formula with respect to a grid of nodes $ \{ s _ {i} \} \in D $.

In this case

$$ \tag{5 } \widetilde{A} y = \ \sum _ {i = 1 } ^ { N } a _ {i} ^ {( N) } K ( x, s _ {i} ) y ( s _ {i} ), $$

where $ a _ {i} ^ {( N) } $ are the coefficients of the quadrature formula.

The approximating equation (3) can be regarded as an operator equation in the same space as the basic equation (1) (for example, in the space $ C ( D) $ of continuous vector functions on $ D $). In this case it will have the form

$$ \tag{6 } \widetilde \phi ( x) - \lambda \sum _ {i = 1 } ^ { N } a _ {i} ^ {( N) } K ( x, s _ {i} ) \widetilde \phi ( s _ {i} ) = f ( x). $$

Equation (6) can be reduced to a system of linear algebraic equations in $ \widetilde \phi ( s _ {i} ) $, $ i = 1 \dots N $:

$$ \tag{7 } \widetilde \phi ( s _ {j} ) - \lambda \sum _ {i = 1 } ^ { N } a _ {i} ^ {( N) } K ( s _ {j} , s _ {i} ) \widetilde \phi ( s _ {i} ) = f( s _ {j} ),\ \ j = 1 \dots N. $$

The solution (exact or approximate) of the system (7) gives $ \widetilde{\widetilde \phi  } $.

Sometimes one takes the equation (7) itself as approximating (1), and then (7) corresponds to equation (3).

In this approach, the space $ \widetilde \Phi $ is not the same as $ \Phi $. The space $ \widetilde \Phi $ may, for example, be identified with the quotient space of $ \Phi $ by the subspace of functions in $ \Phi $ that vanish at the points $ \{ s _ {i} \} _ {i = 1 } ^ {N} $. The method (5) admits various generalizations, which are convenient to use, for example, in the case of a discontinuous $ K ( x, s) $. In these generalized methods the operator $ \widetilde{A} $ has the form

$$ \tag{5'} \widetilde{A} y = \ \sum _ {i = 1 } ^ { N } a _ {i} ^ {( N) } ( x) y ( s _ {i} ), $$

where $ a _ {i} ^ {(N)} ( x) $ are functions associated with the kernel $ K ( x, s) $.

See also Quadrature-sum method.

Methods for replacing the kernel by an approximate one. These make use of an approximating operator $ \widetilde{A} $ of the form

$$ \widetilde{A} y = \ \int\limits _ { D } \widetilde{K} ( x, s) y ( s) ds, $$

where $ \widetilde{K} $ is some function close to $ K $, but simpler. Most often $ \widetilde{K} $ is a degenerate kernel, that is,

$$ \tag{8 } \widetilde{K} ( x, s) = \ \sum _ {i = 1 } ^ { N } a _ {i} ( x) b _ {i} ( s). $$

In this case (3) is a Fredholm integral equation with a degenerate kernel (cf. also Degenerate kernels, method of). Its solution reduces to solving a system of linear algebraic equations. However, the matrix entries of the resulting system are expressed as integrals of known functions, and for a numerical solution these must, generally speaking, themselves be approximated by quadrature sums.

There are many ways of making a concrete choice of $ \widetilde{K} $ by formula (8) (see, for example, Strip method (integral equations)). The theoretical investigation of the closeness of solutions of equations (3) and (1) is usually significantly simpler in these methods than, for example, in quadrature methods, since in most cases one can set $ \widetilde \Phi = \Phi $ and the choice of $ \Phi $ is essentially determined directly from the way the problem is posed. If $ \widetilde{K} $ and $ K $ are close, then, as a rule, $ A $ and $ \widetilde{A} $ are close in the norm of $ \Phi $. However, the practical realization of these methods is in most cases significantly more laborious as compared with quadrature methods and their generalizations.
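A minimal sketch of such a method, under assumptions not taken from the article: $ K ( x, s) = e ^ {xs} $ on $ [0, 1] $ is replaced by the truncated Taylor sum (8) with $ a _ {i} ( x) = x ^ {i} /i! $, $ b _ {i} ( s) = s ^ {i} $, and the integrals in the matrix entries are approximated by a quadrature sum, as noted above.

```python
import numpy as np
from math import factorial

# Degenerate-kernel sketch: replace K(x,s) = exp(x*s) on [0,1] by the
# truncated Taylor sum (8) with a_i(x) = x^i / i!, b_i(s) = s^i.  The
# solution then has the form  phi(x) = f(x) + lam * sum_i c_i a_i(x),
# where the coefficients c_i solve
#   c_i - lam * sum_j B_ij c_j = g_i,
#   B_ij = int b_i(s) a_j(s) ds,   g_i = int b_i(s) f(s) ds,
# and the integrals are approximated by Gauss quadrature.
lam, N, M = 0.5, 10, 40
t, w = np.polynomial.legendre.leggauss(M)
s, wq = 0.5 * (t + 1.0), 0.5 * w                 # quadrature rule on [0, 1]

# Hypothetical test problem: f chosen so that the exact solution is phi(x) = 1.
f = lambda x: 1.0 - lam * (np.exp(x) - 1.0) / x

av = np.array([s**i / factorial(i) for i in range(N)])   # a_i at the nodes
bv = np.array([s**i for i in range(N)])                  # b_i at the nodes
B = bv @ (wq[:, None] * av.T)
g = bv @ (wq * f(s))
c = np.linalg.solve(np.eye(N) - lam * B, g)
phi = f(s) + lam * (c @ av)      # approximate solution at the quadrature nodes
```

The error is governed by the Taylor truncation of the kernel (of order $ 1/N! $ here), which illustrates the remark that closeness of $ \widetilde{K} $ and $ K $ yields closeness of $ \widetilde{A} $ and $ A $.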

Projection methods lead to an approximating equation (3) of the form $ \widetilde \phi - \lambda PA \widetilde \phi = Pf $, where $ \widetilde \Phi $ is a subspace of $ \Phi $ and $ P $ is the projection onto this subspace. Freedom in the choice of $ \Phi $, $ \widetilde \Phi $ and even $ P $ itself leads to a large number of concrete projection methods for solving Fredholm integral equations of the second kind. A typical example of a projection method is the Galerkin method. In order to obtain the concrete computational formulas of this method, one must (if possible) treat the integral equation (1) as an operator equation in the Hilbert space $ L _ {2} ( D) $ of functions that are square-integrable in $ D $, and take as $ P $ the orthogonal projection that associates to a function in $ L _ {2} ( D) $ the $ N $-term segment of its Fourier series with respect to some complete orthonormal system of functions $ \{ \psi _ {k} \} $ in $ L _ {2} ( D) $.

In another interpretation, the Galerkin method is equivalent to that of replacing the kernel by a degenerate one of the form

$$ \widetilde{K} ( x, s) = \ \sum _ {i = 1 } ^ { N } \int\limits _ { D } K ( t, s) \psi _ {i} ( t) \ dt \cdot \psi _ {i} ( x) $$

with a simultaneous replacement of the right-hand side by a function close to it:

$$ \widetilde{f} ( x) = \ \sum _ {i = 1 } ^ { N } \int\limits _ { D } f ( s) \psi _ {i} ( s) \ ds \cdot \psi _ {i} ( x). $$
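A numerical sketch of the Galerkin scheme just described, on a hypothetical test problem: the orthonormal system $ \{ \psi _ {k} \} $ is taken to be shifted Legendre polynomials on $ [0, 1] $, and the inner products are approximated by Gauss quadrature (an extra approximation that, as noted below, is needed in practice).

```python
import numpy as np
from numpy.polynomial import legendre

# Galerkin sketch: seek phi = sum_j d_j psi_j, where psi_k are the first N
# orthonormal shifted Legendre polynomials on [0,1]; the coefficients solve
#   d_i - lam * sum_j <A psi_j, psi_i> d_j = <f, psi_i>.
# Hypothetical test problem: K(x,s) = x*s, lam = 1, f(x) = 2x/3,
# exact solution phi(x) = x.
lam, N, M = 1.0, 4, 20
t, w = np.polynomial.legendre.leggauss(M)
s, wq = 0.5 * (t + 1.0), 0.5 * w                 # quadrature rule on [0, 1]

def psi(k, x):                   # orthonormal Legendre polynomials on [0, 1]
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return np.sqrt(2 * k + 1) * legendre.legval(2 * x - 1, coef)

P = np.array([psi(k, s) for k in range(N)])      # psi_k at the nodes, N x M
Ks = s[:, None] * s[None, :]                     # K(s_j, s_i)
APj = (Ks * wq[None, :]) @ P.T                   # (A psi_j)(s_k), M x N
G = P @ (wq[:, None] * APj)                      # G_ij = <A psi_j, psi_i>
rhs = P @ (wq * (2.0 * s / 3.0))                 # <f, psi_i>
d = np.linalg.solve(np.eye(N) - lam * G, rhs)
phi = d @ P                                      # Galerkin solution at the nodes
```

Here both $ f $ and the range of $ A $ lie in the chosen subspace, so the Galerkin solution reproduces the exact solution up to rounding.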

As another example of projection methods one can take the collocation method. If $ K ( x, s) $ and $ f ( x) $ are continuous functions, then (1) can be regarded as an operator equation (2) in $ C ( D) $, the space of continuous functions on $ D $. The collocation method corresponds to a choice of $ P $ in the form

$$ Py = Z ( y),\ \ y \in C ( D), $$

where $ Z ( y) $ is the Lagrange interpolation polynomial of $ y $ with respect to some grid of nodes in $ D $ (cf. also Lagrange interpolation formula).
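A sketch of the collocation method on a hypothetical test problem: $ \widetilde \phi $ is the polynomial of degree $ N - 1 $ taking unknown values $ c _ {j} $ at Chebyshev nodes $ x _ {j} \in [0, 1] $, and the equation is required to hold at these nodes; the integrals of the Lagrange basis polynomials are evaluated by Gauss quadrature.

```python
import numpy as np

# Collocation sketch: requiring (1) to hold at the nodes x_j gives
#   c_j - lam * sum_k W_jk c_k = f(x_j),
#   W_jk = int_0^1 K(x_j, s) L_k(s) ds,
# where L_k is the k-th Lagrange basis polynomial for the nodes x_j.
# Hypothetical test problem: K(x,s) = x*s, lam = 1, f(x) = 2x/3,
# exact solution phi(x) = x.
lam, N, M = 1.0, 6, 20
x = 0.5 * (1.0 - np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N)))  # Chebyshev nodes
t, w = np.polynomial.legendre.leggauss(M)
s, wq = 0.5 * (t + 1.0), 0.5 * w                 # quadrature rule on [0, 1]

# Values L[m, k] = L_k(s_m) of the Lagrange basis at the quadrature nodes,
# obtained by mapping through the Vandermonde matrix of the nodes x_j.
L = np.vander(s, N, increasing=True) @ np.linalg.inv(np.vander(x, N, increasing=True))
W = (x[:, None] * s[None, :] * wq[None, :]) @ L  # W_jk = int K(x_j, s) L_k(s) ds
c = np.linalg.solve(np.eye(N) - lam * W, 2.0 * x / 3.0)
```

Since the exact solution is a polynomial of degree below $ N $ and the quadrature is exact for the integrands involved, the nodal values $ c _ {j} $ reproduce it to rounding accuracy; this also illustrates the remark below that collocation can equally be read as a generalized quadrature method.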

In the practical realization of most projection methods applied to a Fredholm integral equation of the second kind there arise difficulties with the additional approximation of the integrals that appear, which makes these methods (just like the methods of replacing the kernel by an approximate one) usually more laborious than typical quadrature methods. However, this assertion is relative, since the classification of the methods is to some extent a matter of convention. For example, the collocation method can be interpreted both as a projection method and as a generalized quadrature method.

Methods for solving the approximating equations. Usually, the solution of an approximating equation (3) reduces to solving a system of linear algebraic equations. One can use the method of successive approximations (the simplest such method) for relatively small $ | \lambda | $, and, with suitable modifications (for example, in the method of averaging functional corrections), for all $ \lambda $ not belonging to the spectrum of $ A $.

Obtaining a sequence of more accurate approximations. In the theoretical investigation of a numerical method one succeeds in most cases in establishing only the plain fact that the approximations $ \widetilde \phi $ and $ \widetilde{\widetilde \phi } $ converge to a solution of (1) and (2) as $ \widetilde{A} $ converges to $ A $ in some sense; it is extremely rare that one manages to obtain effective estimates of the closeness of $ \widetilde \phi $ and $ \widetilde{\widetilde \phi } $ to a solution of (1) and (2). To verify the accuracy, one uses in practice a sequence of approximate solutions of (3) with an increasingly-accurate operator $ \widetilde{A} $. In the simplest version of this verification one compares two neighbouring terms in this sequence of approximate solutions and stops obtaining further approximations when these agree to a given accuracy. The unwieldiness of directly obtaining the terms of such a sequence has been partially surmounted in a variety of algorithms for iteratively obtaining more-accurate approximate solutions. A typical example of such algorithms is the following. If the sequence $ \{ A _ {n} \} $ of approximating operators converges in the norm of some Banach function space $ \Phi $ to $ A $ in (2), then the iterative procedure

$$ \tag{9 } ( E - \lambda A _ {0} ) \phi _ {n + 1 } = \ \lambda ( A _ {n + 1 } - A _ {0} ) \phi _ {n} + f _ {n} $$

gives a sequence of approximations $ \phi _ {n} $ that converge to $ \phi $, if $ f _ {n} $ converges to $ f $ in norm and if $ A _ {0} $ is sufficiently close to $ A $ in norm. In using the procedure (9) one need invert only one operator, $ E - \lambda A _ {0} $. The closer $ A _ {0} $ is to $ A $ in norm, the better the convergence. It is convenient, for example, to choose the operators $ A _ {n} $ in the form (5). One can in this case establish that $ \phi _ {n} $ converges uniformly to $ \phi $ in $ D $, provided that the kernel satisfies certain requirements.
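The procedure (9) can be sketched as follows, under hypothetical assumptions: the $ A _ {n} $ are quadrature operators of the form (5) with increasingly fine Gauss rules, $ K ( x, s) = e ^ {xs} $ on $ [0, 1] $, $ \lambda = 1/2 $, and $ f $ is chosen so that the exact solution is $ \phi ( x) \equiv 1 $; functions are represented by callables, with (6) used as the extension from node values to all of $ D $.

```python
import numpy as np

# Sketch of the iteration (9): the operator (E - lam*A_0), built from a
# coarse quadrature rule, is inverted once; the finer operators A_n enter
# only on the right-hand side.
lam = 0.5
K = lambda x, t: np.exp(x * t)
f = lambda x: 1.0 - lam * (np.exp(x) - 1.0) / x   # exact solution phi(x) = 1

def rule(N):                      # Gauss-Legendre rule mapped to [0, 1]
    t, w = np.polynomial.legendre.leggauss(N)
    return 0.5 * (t + 1.0), 0.5 * w

def quad_op(N):                   # quadrature operator of the form (5)
    s, a = rule(N)
    return lambda y: (lambda x: K(np.asarray(x)[..., None], s) @ (a * y(s)))

A0 = quad_op(3)
s0, a0 = rule(3)
M0 = np.eye(3) - lam * a0[None, :] * K(s0[:, None], s0[None, :])  # E - lam*A_0

phi = lambda x: np.zeros_like(np.asarray(x, dtype=float))   # initial guess
for n in range(1, 6):
    An = quad_op(3 + 2 * n)       # increasingly accurate operators A_n
    g = lambda x, An=An, p=phi: lam * (An(p)(x) - A0(p)(x)) + f(x)
    v = np.linalg.solve(M0, g(s0))                # phi_{n+1} at the nodes
    # extension of (E - lam*A_0)^{-1} g to all of D, cf. (6):
    phi = lambda x, v=v, g=g: g(x) + lam * (K(np.asarray(x)[..., None], s0) @ (a0 * v))
```

Only the small matrix `M0` is ever inverted; the accuracy of the iterates is limited by the finest operator $ A _ {n} $ used, as described above.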

Comments

For some concrete methods, the requirement that $ \| A - \widetilde{A} \| $ should be small is too strong. Instead, one typically has that (2) is approximated by a sequence $ ( E - \lambda A _ {n} ) \phi _ {n} = f _ {n} $, where $ ( A _ {n} ) $ converges pointwise to $ A $ and is collectively compact (i.e. for all bounded sets $ B $, $ \overline{\cup _ {n \in \mathbf N } A _ {n} ( B) } $ is compact). Under these assumptions one can prove convergence of the approximate solutions $ \phi _ {n} $ and give error bounds. An example where this theory applies is the Nyström method, which consists of solving (7) and then using (6) for extrapolation to all of $ D $. For the theory of collectively-compact operators and its use in connection with integral equations see [a1].

In collocation methods one can also use interpolation functions other than polynomials, e.g. splines (spline collocation). Because of the better convergence properties of splines (cf. Spline approximation), spline collocation is superior to polynomial collocation.

A variety of concrete numerical methods for Fredholm equations of the second kind are described in [a2] (with Fortran programs), [a3] and [a4]. About numerical methods for solving Fredholm equations if $ \lambda \neq 0 $ is an eigen value see [a5], pp. 368ff.

The article above discusses Fredholm integral equations of the second kind only. However, there is also a theory for Fredholm integral equations of the first kind.

Fredholm integral equations of the first kind.

These are equations of the form

$$ \tag{a1 } ( A \phi ) ( x) = \int\limits _ { D } K ( x , s ) \phi ( s) d s = f ( x) . $$

They are usually ill-posed in the sense that their solution might not exist, might not be unique, and will (if it exists) in general depend on $ f $ in a discontinuous way (see Ill-posed problems). The notion of solution one commonly uses is the "best-approximate solution" $ A ^ {+} f $, where $ A ^ {+} $ is the Moore–Penrose generalized inverse of $ A $ (see Fredholm equation). $ A ^ {+} $ is in general unbounded. Thus, a "naive" numerical approximation of (a1), e.g. by discretization, will lead to ill-conditioned linear systems. For solving (a1) numerically one uses "regularization methods" (see Regularization method). In general, regularization for (a1) is the replacement of the unbounded operator $ A ^ {+} $ by a parameter-dependent family of bounded linear operators $ R _ \alpha $ ($ \alpha > 0 $). The "regularized solution" computed with a specific "regularization operator" $ R _ \alpha $ and noisy data $ f _ \delta $ ($ \| f - f _ \delta \| \leq \delta $) is then $ \phi _ \alpha ^ \delta = R _ \alpha f _ \delta $. The most popular choice for $ R _ \alpha $ is $ ( \alpha I + A ^ {*} A ) ^ {-1} A ^ {*} $ (Tikhonov regularization). Other methods are iterated Tikhonov regularization, iterative methods like Landweber iteration, and truncated singular value expansion, where

$$ \phi _ \alpha ^ \delta = \ \sum _ {n : \sigma _ {n} > \alpha } \frac{\langle f _ \delta , v _ {n} \rangle }{\sigma _ {n} } u _ {n} . $$

Here, $ \langle \cdot , \cdot \rangle $ denotes the inner product in $ L _ {2} $, and $ ( \sigma _ {n} ; u _ {n} , v _ {n} ) $ is a singular system for $ A $ (i.e. the $ \sigma _ {n} ^ {2} $ are the eigen values of $ A ^ {*} A $, $ \{ u _ {n} \} $ is a corresponding orthonormal set of eigen vectors, and $ \sigma _ {n} v _ {n} = A u _ {n} $).
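Tikhonov regularization and the truncated singular value expansion can both be sketched on a discretized first-kind equation. The example below is hypothetical (not from the article): a smoothing kernel $ K ( x, s) = e ^ {- ( x - s) ^ {2} / 2 \ell ^ {2} } $ with $ \ell = 0.1 $, discretized by the midpoint rule on $ [0, 1] $, with noisy data of level $ \delta = 10 ^ {-4} $; the parameters $ \alpha $ and the truncation threshold are chosen ad hoc.

```python
import numpy as np

# Discretization of a first-kind equation A phi = f by the midpoint rule;
# the resulting matrix is severely ill-conditioned, so the solution is
# computed by regularization rather than by a "naive" solve.
n = 100
s = (np.arange(n) + 0.5) / n
h = 1.0 / n
A = h * np.exp(-0.5 * ((s[:, None] - s[None, :]) / 0.1) ** 2)  # smoothing kernel

phi_true = np.sin(2 * np.pi * s)
rng = np.random.default_rng(0)
f_delta = A @ phi_true + 1e-4 * rng.standard_normal(n)         # noisy data

# Tikhonov regularization: phi = (alpha*I + A^T A)^{-1} A^T f_delta
alpha = 1e-4
phi_tik = np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ f_delta)

# Truncated singular value expansion: keep only singular values above a threshold
U, sig, Vt = np.linalg.svd(A)
k = int(np.sum(sig > 1e-2))
phi_tsvd = Vt[:k].T @ ((U[:, :k].T @ f_delta) / sig[:k])
```

Both regularized solutions recover `phi_true` to within a few percent here, whereas dividing by the smallest singular values would amplify the noise catastrophically; the choice of $ \alpha $ in dependence on $ \delta $ is exactly the parameter-choice question discussed below.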

Convergence rates for regularization methods depend on a priori smoothness assumptions on the data, which also influence the choice of the regularization parameter $ \alpha $. For these questions and details about regularization methods see e.g. [a6], [a7].

For numerical realization, regularization methods have to be combined with projection methods; the latter can also be used directly for regularization (see [a8]).

References

[a1] P.M. Anselone, "Collectively compact operator approximation theory and applications to integral equations" , Prentice-Hall (1971)
[a2] K.E. Atkinson, "A survey of numerical methods for the solution of Fredholm integral equations of the second kind" , SIAM (1976)
[a3] C.T.H. Baker, "The numerical treatment of integral equations" , Clarendon Press (1977) pp. Chapt. 4
[a4] L.M. Delves, J.L. Mohamed, "Computational methods for integral equations" , Cambridge Univ. Press (1985)
[a5] M.Z. Nashed (ed.) , Generalized inverses and applications , Acad. Press (1976)
[a6] H.W. Engl (ed.) C.W. Groetsch (ed.) , Inverse and ill-posed problems , Acad. Press (1987)
[a7] C.W. Groetsch, "The theory of Tikhonov regularization for Fredholm equations of the first kind" , Pitman (1984)
[a8] F. Natterer, "The finite element method for ill-posed problems" RAIRO Anal. Numer. , 11 (1977) pp. 271–278
[a9] L.V. Kantorovich, V.I. Krylov, "Approximate methods of higher analysis" , Noordhoff (1958) (Translated from Russian)
[a10] I.C. [I.Ts. Gokhberg] Gohberg, I.A. Fel'dman, "Convolution equations and projection methods for their solution" , Transl. Math. Monogr. , 41 , Amer. Math. Soc. (1974) (Translated from Russian)
[a11] P.P. Zabreiko (ed.) A.I. Koshelev (ed.) M.A. Krasnoselskii (ed.) S.G. Mikhlin (ed.) L.S. Rakovshchik (ed.) V.Ya. Stet'senko (ed.) T.O. Shaposhnikova (ed.) R.S. Anderssen (ed.) , Integral equations - a reference text , Noordhoff (1975) (Translated from Russian)
How to Cite This Entry:
Fredholm equation, numerical methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Fredholm_equation,_numerical_methods&oldid=13562
This article was adapted from an original article by A.B. Bakushinskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article