# Ill-posed problems


Also known as: incorrectly-posed problems, improperly-posed problems.


Problems for which at least one of the conditions below, which characterize well-posed problems, is violated. The problem of determining a solution $z=R(u)$ in a metric space $Z$ (with metric $\rho_Z(\cdot,\cdot)$) from "initial data" $u$ in a metric space $U$ (with metric $\rho_U(\cdot,\cdot)$) is said to be well-posed on the pair of spaces $(Z,U)$ if: a) for every $u \in U$ there exists a solution $z \in Z$; b) the solution is uniquely determined; and c) the problem is stable on the spaces $(Z,U)$, i.e., for every $\epsilon > 0$ there is a $\delta(\epsilon) > 0$ such that for any $u_1, u_2 \in U$ it follows from $\rho_U(u_1,u_2) \leq \delta(\epsilon)$ that $\rho_Z(z_1,z_2) < \epsilon$, where $z_1 = R(u_1)$ and $z_2 = R(u_2)$.

The concept of a well-posed problem is due to J. Hadamard (1923), who took the point of view that every mathematical problem corresponding to some physical or technological problem must be well-posed. In fact, what physical interpretation can a solution have if an arbitrarily small change in the data can lead to large changes in the solution? Moreover, it would be difficult to apply approximation methods to such problems. This put the expediency of studying ill-posed problems in doubt.

However, this point of view, which is natural when applied to certain time-dependent phenomena, cannot be extended to all problems. The following problems are unstable in the metric of $Z$, and therefore ill-posed: the solution of integral equations of the first kind; differentiation of functions known only approximately; numerical summation of Fourier series when their coefficients are known approximately in the metric of $\ell_2$; the Cauchy problem for the Laplace equation; the problem of analytic continuation of functions; and the inverse problem in gravimetry. Other ill-posed problems are the solution of systems of linear algebraic equations when the system is ill-conditioned; the minimization of functionals having non-convergent minimizing sequences; various problems in linear programming and optimal control; design of optimal systems and optimization of constructions (synthesis problems for antennas and other physical systems); and various other control problems described by differential equations (in particular, differential games). Various physical and technological questions lead to the problems listed (see [TiAr]).
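The instability of differentiating an approximately known function can be observed directly. The following minimal numerical sketch (the test function $\sin x$, the noise level, and the step sizes are illustrative choices, not from the article) shows that refining the grid *amplifies* the effect of the data error rather than improving the derivative:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 1e-3  # assumed size of the error in the function values

def diff_error(h):
    """Max error of forward-difference differentiation of noisy samples of sin."""
    x = np.arange(0.0, 1.0, h)
    f_noisy = np.sin(x) + noise * rng.standard_normal(x.size)
    approx = np.diff(f_noisy) / h   # error behaves like h/2 (truncation) + 2*noise/h
    return np.max(np.abs(approx - np.cos(x[:-1])))

coarse = diff_error(1e-2)   # moderate error
fine = diff_error(1e-5)     # much smaller step: the noise term noise/h dominates
```

Here `fine` is orders of magnitude larger than `coarse`: the data-to-solution map is discontinuous in the limit, which is exactly the failure of condition c) above.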

A broad class of so-called inverse problems that arise in physics, technology and other branches of science, in particular, problems of data processing of physical experiments, belongs to the class of ill-posed problems. Let $z$ be a characteristic quantity of the phenomenon (or object) to be studied. In a physical experiment the quantity $z$ is frequently inaccessible to direct measurement, but what is measured is a certain transform $Az=u$ of it (the outcome of the measurement). For the interpretation of the results it is necessary to determine $z$ from $u$, that is, to solve the equation $$\tag{1} Az = u.$$ Problems of solving an equation (1) are often called pattern recognition problems. Problems leading to the minimization of functionals (design of antennas and other systems or constructions, problems of optimal control and many others) are also called synthesis problems.

Suppose that in a mathematical model for some physical experiments the object to be studied (the phenomenon) is characterized by an element $z_T$ (a function, a vector) belonging to a set $Z$ of possible solutions in a metric space $\hat{Z}$. Suppose that $z_T$ is inaccessible to direct measurement and that what is measured is a transform, $Az_T=u_T$, $u_T \in AZ$, where $AZ$ is the image of $Z$ under the operator $A$. Evidently, $z_T = A^{-1}u_T$, where $A^{-1}$ is the operator inverse to $A$. Since $u_T$ is obtained by measurement, it is known only approximately. Let $\tilde{u}$ be this approximate value. Under these conditions the question can only be that of finding a "solution" of the equation $$\tag{2} Az = \tilde{u},$$ approximating $z_T$.

In many cases the operator $A$ is such that its inverse $A^{-1}$ is not continuous, for example, when $A$ is a completely-continuous operator in a Hilbert space, in particular an integral operator of the form $$\int_a^b K(x,s) z(s) \rd s.$$ Under these conditions one cannot take, following classical ideas, an exact solution of (2), that is, the element $z=A^{-1}\tilde{u}$, as an approximate "solution" to $z_T$. In fact: a) such a solution need not exist on $Z$, since $\tilde{u}$ need not belong to $AZ$; and b) such a solution, if it exists, need not be stable under small changes of $\tilde{u}$ (due to the fact that $A^{-1}$ is not continuous) and, consequently, need not have a physical interpretation. The problem (2) then is ill-posed.
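The discontinuity of $A^{-1}$ for such an integral operator can be seen after discretization: a smooth kernel yields a matrix whose singular values decay essentially to zero. A minimal sketch (the Gaussian kernel, grid, and perturbation size are illustrative assumptions):

```python
import numpy as np

# Discretize (Az)(x) = \int_0^1 K(x,s) z(s) ds on a uniform grid.
n = 40
s = np.linspace(0.0, 1.0, n)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.1) / n   # 1/n = quadrature weight

sig = np.linalg.svd(K, compute_uv=False)   # singular values decay very rapidly
cond = sig[0] / sig[-1]                    # enormous condition number
# A data error of size 1e-8 along the last singular direction changes the
# exact solution K^{-1}u by 1e-8 / sig[-1]: a huge amplification.
amplification = 1e-8 / sig[-1]
```

The amplification factor dwarfs the perturbation itself, so the classical inverse has no stable meaning: this is the finite-dimensional shadow of the non-continuity of $A^{-1}$ for a completely-continuous operator.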

## Numerical methods for solving ill-posed problems.

For ill-posed problems of the form (1) the question arises: What is meant by an approximate solution? Clearly, it should be so defined that it is stable under small changes of the original information. A second question is: What algorithms are there for the construction of such solutions? Answers to these basic questions were given by A.N. Tikhonov (see [Ti], [Ti2]).

The selection method. In some cases an approximate solution of (1) can be found by the selection method. It consists of the following: From the class of possible solutions $M \subset Z$ one selects an element $\tilde{z}$ for which $A\tilde{z}$ approximates the right-hand side of (1) with the required accuracy, and takes $\tilde{z}$ as the desired approximate solution. The question arises: When is this method applicable, that is, when does $$\rho_U(A\tilde{z},Az_T) \leq \delta$$ imply that $$\rho_Z(\tilde{z},z_T) \leq \epsilon(\delta),$$ where $\epsilon(\delta) \rightarrow 0$ as $\delta \rightarrow 0$? This holds under the conditions that the solution of (1) is unique and that $M$ is compact (see [Ti3]). On the basis of these arguments the concept (or condition) of being Tikhonov well-posed, also called conditionally well-posed, has been formulated (see [La]). As applied to (1), a problem is said to be conditionally well-posed if it is known that for the exact value of the right-hand side $u=u_T$ there exists a unique solution $z_T$ of (1) belonging to a given compact set $M$. In this case $A^{-1}$ is continuous on $AM$, and if instead of $u_T$ an element $u_\delta$ is known such that $\rho_U(u_\delta,u_T) \leq \delta$ and $u_\delta \in AM$, then as an approximate solution of (1) with right-hand side $u = u_\delta$ one can take $z_\delta = A^{-1}u_\delta$. As $\delta \rightarrow 0$, $z_\delta$ tends to $z_T$.

In many cases the approximately known right-hand side $\tilde{u}$ does not belong to $AM$. Under these conditions equation (1) does not have a classical solution. As an approximate solution one takes then a generalized solution, a so-called quasi-solution (see [Iv]). A quasi-solution of (1) on $M$ is an element $\tilde{z}\in M$ that minimizes for a given $\tilde{u}$ the functional $\rho_U(Az,\tilde{u})$ on $M$ (see [Iv2]). If $M$ is compact, then a quasi-solution exists for any $\tilde{u} \in U$, and if in addition $\tilde{u} \in AM$, then a quasi-solution $\tilde{z}$ coincides with the classical (exact) solution of (1). The existence of quasi-solutions is guaranteed only when the set $M$ of possible solutions is compact.
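When $M$ is compact, a quasi-solution can be computed by minimizing the residual $\rho_U(Az,\tilde{u})$ over $M$. The sketch below uses a finite grid of candidates (trivially compact) and direct search; the operator, data, and candidate set are illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.001]])           # nearly singular operator
u_tilde = np.array([3.0, 6.1])         # noisy data, in general outside A(M)

# Compact set M: a bounded grid of candidate solutions.
M = [np.array([x, y]) for x in np.linspace(-2, 2, 81)
                      for y in np.linspace(-2, 2, 81)]

# Quasi-solution: the candidate minimizing the residual ||Az - u_tilde||.
quasi = min(M, key=lambda z: np.linalg.norm(A @ z - u_tilde))
residual = np.linalg.norm(A @ quasi - u_tilde)
```

Even though `u_tilde` need not lie in the image of `M`, the minimizer exists because `M` is compact; this is exactly the existence statement for quasi-solutions above.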

The regularization method. For a number of applied problems leading to (1) a typical situation is that the set $Z$ of possible solutions is not compact, the operator $A^{-1}$ is not continuous on $AZ$, and changes of the right-hand side of (1) connected with its approximate character can take it out of the set $AZ$. Such problems are called essentially ill-posed. An approach has been worked out to solve ill-posed problems that makes it possible to construct numerical methods approximating solutions of essentially ill-posed problems of the form (1) that are stable under small changes of the data. In this context, both the right-hand side $u$ and the operator $A$ should be counted among the data.

In what follows, for simplicity of exposition it is assumed that the operator $A$ is known exactly. At the basis of the approach lies the concept of a regularizing operator (see [Ti2], [TiAr]). An operator $R(u,\delta)$ from $U$ to $Z$ is said to be a regularizing operator for the equation $Az=u$ (in a neighbourhood of $u=u_T$) if it has the following properties: 1) there exists a $\delta_1 > 0$ such that the operator $R(u,\delta)$ is defined for every $\delta$, $0 \leq \delta \leq \delta_1$, and for any $u_\delta \in U$ such that $\rho_U(u_\delta,u_T) \leq \delta$; and 2) for every $\epsilon > 0$ there exists a $\delta_0 = \delta_0(\epsilon,u_T)$ such that $\rho_U(u_\delta,u_T) \leq \delta \leq \delta_0$ implies $\rho_Z(z_\delta,z_T) \leq \epsilon$, where $z_\delta = R(u_\delta,\delta)$.

Sometimes it is convenient to use another definition of a regularizing operator, which subsumes the previous one. An operator $R(u,\alpha)$ from $U$ to $Z$, depending on a parameter $\alpha$, is said to be a regularizing operator (or regularization operator) for the equation $Az=u$ (in a neighbourhood of $u=u_T$) if it has the following properties: 1) there exists a $\delta_1 > 0$ such that $R(u,\alpha)$ is defined for every $\alpha$ and any $u_\delta \in U$ for which $\rho_U(u_\delta,u_T) < \delta \leq \delta_1$; and 2) there exists a function $\alpha = \alpha(\delta)$ of $\delta$ such that for any $\epsilon > 0$ there is a $\delta(\epsilon) \leq \delta_1$ such that if $u_\delta \in U$ and $\rho_U(u_\delta,u_T) \leq \delta(\epsilon)$, then $\rho_Z(z_\delta,z_T) < \epsilon$, where $z_\delta = R(u_\delta,\alpha(\delta))$. In this definition it is not assumed that the operator $R(u,\alpha(\delta))$ is globally single-valued.

If $\rho_U(u_\delta,u_T) \leq \delta$, then as an approximate solution of (1) with the approximately known right-hand side $u_\delta$ one can take the element $z_\alpha = R(u_\delta,\alpha)$ obtained by means of the regularizing operator $R(u,\alpha)$, where $\alpha = \alpha(\delta)$ is compatible with the error of the initial data $u_\delta$ (see [Ti], [Ti2], [TiAr]). This is said to be a regularized solution of (1). The numerical parameter $\alpha$ is called the regularization parameter. As $\delta \rightarrow 0$, the regularized approximate solution $z_{\alpha(\delta)} = R(u_\delta,\alpha(\delta))$ tends (in the metric of $Z$) to the exact solution $z_T$.

Thus, the task of finding approximate solutions of (1) that are stable under small changes of the right-hand side reduces to: a) finding a regularizing operator; and b) determining the regularization parameter $\alpha$ from additional information on the problem, for example, the size of the error with which the right-hand side $u$ is given.

The construction of regularizing operators. It is assumed that the equation $Az = u_T$ has a unique solution $z_T$. Suppose that instead of $Az = u_T$ the equation $Az = u_\delta$ is solved and that $\rho_U(u_\delta,u_T) \leq \delta$. Since $\rho_U(Az_T,u_\delta) \leq \delta$, the approximate solution of $Az = u_\delta$ is looked for in the class $Z_\delta$ of elements $z_\delta$ such that $\rho_U(Az_\delta,u_\delta) \leq \delta$. This $Z_\delta$ is the set of possible solutions. As an approximate solution one cannot take an arbitrary element $z_\delta$ from $Z_\delta$, since such a "solution" is not unique and is, generally speaking, not continuous in $\delta$. As a selection principle for the possible solutions ensuring that one obtains an element (or elements) from $Z_\delta$ depending continuously on $\delta$ and tending to $z_T$ as $\delta \rightarrow 0$, one uses the so-called variational principle (see [Ti]). Let $\Omega[z]$ be a continuous non-negative functional defined on a subset $F_1$ of $Z$ that is everywhere-dense in $Z$ and is such that: a) $z_T \in F_1$; and b) for every $d > 0$ the set of elements $z$ in $F_1$ for which $\Omega[z] \leq d$ is compact in $F_1$. Functionals having these properties are said to be stabilizing functionals for problem (1). Let $\Omega[z]$ be a stabilizing functional defined on a subset $F_1$ of $Z$ ($F_1$ can be the whole of $Z$). Among the elements of $F_{1,\delta} = F_1 \cap Z_\delta$ one looks for one (or several) that minimize(s) $\Omega[z]$ on $F_{1,\delta}$. The existence of such an element $z_\delta$ can be proved (see [TiAr]). It can be regarded as the result of applying a certain operator $R_1(u_\delta,\delta)$ to the right-hand side of the equation $Az = u_\delta$, that is, $z_\delta=R_1(u_\delta,\delta)$. Then $R_1(u,\delta)$ is a regularizing operator for equation (1).
In practice the search for $z_\delta$ can be carried out in the following manner: under mild additional restrictions on $\Omega[z]$ (quasi-monotonicity of $\Omega[z]$, see [TiAr]) it can be proved that $\inf\Omega[z]$ is attained on elements $z_\delta$ for which $\rho_U(Az_\delta,u_\delta) = \delta$. An element $z_\delta$ is then a solution to the problem of minimizing $\Omega[z]$ under the condition $\rho_U(Az,u_\delta)=\delta$, that is, a conditional-extremum problem, which can be solved by Lagrange's multiplier method, i.e. by minimization of the functional $$M^\alpha[z,u_\delta] = \rho_U^2(Az,u_\delta) + \alpha \Omega[z].$$ For any $\alpha > 0$ one can prove that there is an element $z_\alpha$ minimizing $M^\alpha[z,u_\delta]$. The parameter $\alpha$ is determined from the condition $\rho_U(Az_\alpha,u_\delta) = \delta$. If there is an $\alpha$ for which $\rho_U(Az_\alpha,u_\delta) = \delta$, then the original variational problem is equivalent to that of minimizing $M^\alpha[z,u_\delta]$, which can be solved by various methods on a computer (for example, by solving the corresponding Euler equation for $M^\alpha[z,u_\delta]$). The element $z_\alpha$ minimizing $M^\alpha[z,u_\delta]$ can be regarded as the result of applying to the right-hand side of the equation $Az = u_\delta$ a certain operator $R_2(u_\delta,\alpha)$ depending on $\alpha$, that is, $z_\alpha = R_2(u_\delta,\alpha)$, in which $\alpha$ is determined by the discrepancy relation $\rho_U(Az_\alpha,u_\delta) = \delta$. Then $R_2(u,\alpha)$ is a regularizing operator for (1). Equivalence of the original variational problem with that of finding the minimum of $M^\alpha[z,u_\delta]$ holds, for example, for linear operators $A$. For non-linear operators $A$ this need not be the case (see [GoLeYa]).

The so-called smoothing functional $M^\alpha[z,u_\delta]$ can be introduced formally, without connecting it with a conditional-extremum problem for the functional $\Omega[z]$, and an element $z_\alpha$ minimizing it can be sought on the set $F_{1,\delta}$. This poses the problem of finding the regularization parameter $\alpha$ as a function of $\delta$, $\alpha = \alpha(\delta)$, such that the operator $R_2(u,\alpha(\delta))$ determining the element $z_\alpha = R_2(u_\delta,\alpha(\delta))$ is regularizing for (1). Under certain conditions (for example, when it is known that $\rho_U(u_\delta,u_T) \leq \delta$ and $A$ is a linear operator) such a function exists and can be found from the relation $\rho_U(Az_\alpha,u_\delta) = \delta$. There are also other methods for finding $\alpha(\delta)$.

Let $T_{\delta_1}$ be the class of non-negative non-decreasing continuous functions on $[0,\delta_1]$, $z_T$ a solution of (1) with right-hand side $u=u_T$, and $A$ a continuous operator from $Z$ to $U$. For any positive number $\epsilon$ and functions $\beta_1(\delta)$ and $\beta_2(\delta)$ from $T_{\delta_1}$ such that $\beta_2(0) = 0$ and $\delta^2 / \beta_1(\delta) \leq \beta_2(\delta)$, there exists a $\delta_0 = \delta_0(\epsilon,\beta_1,\beta_2)$ such that for $u_\delta \in U$ and $\delta \leq \delta_0$ it follows from $\rho_U(u_\delta,u_T) \leq \delta$ that $\rho_Z(z^\alpha,z_T) \leq \epsilon$, where $z^\alpha = R_2(u_\delta,\alpha)$ for all $\alpha$ for which $\delta^2 / \beta_1(\delta) \leq \alpha \leq \beta_2(\delta)$.

Methods for finding the regularization parameter depend on the additional information available on the problem. If the error of the right-hand side of the equation for $u_\delta$ is known, say $\rho_U(u_\delta,u_T) \leq \delta$, then in accordance with the preceding it is natural to determine $\alpha$ by the discrepancy, that is, from the relation $\phi(\alpha) = \rho_U(Az_\alpha^\delta,u_\delta) = \delta$.

The function $\phi(\alpha)$ is monotone and semi-continuous for every $\alpha > 0$. If $A$ is a linear operator, $Z$ a Hilbert space and $\Omega[z]$ a strictly-convex functional (for example, quadratic), then the element $z_\alpha^\delta$ is unique and $\phi(\alpha)$ is a single-valued function. Under these conditions, for every positive number $\delta < \rho_U(Az_0,u_\delta)$, where $z_0 \in \set{ z : \Omega[z] = \inf_{y\in F_1}\Omega[y] }$, there is an $\alpha(\delta)$ such that $\rho_U(Az_\alpha^\delta,u_\delta) = \delta$ (see [TiAr]).

However, for a non-linear operator $A$ the equation $\phi(\alpha) = \delta$ may have no solution (see [GoLeYa]).

The regularization method is closely connected with the construction of splines (cf. Spline). For example, the problem of finding a function $z(x)$ with piecewise-continuous second-order derivative on $[a,b]$ that minimizes the functional $$\Omega[z] = \int_a^b (z^{\prime\prime}(x))^2 \rd x$$ and takes given values $\set{z_i}$ on a grid $\set{x_i}$, is equivalent to the construction of a cubic spline (a spline of the third degree).

A regularizing operator can be constructed by spectral methods (see [TiAr], [GoLeYa]), by means of the classical integral transforms in the case of equations of convolution type (see [Ar], [TiAr]), by the method of quasi-mappings (see [LaLi]), or by the iteration method (see [Kr]). Necessary and sufficient conditions for the existence of a regularizing operator are known (see [Vi]).

Next, suppose that not only the right-hand side of (1) but also the operator $A$ is given approximately, so that instead of the exact initial data $(A,u_T)$ one has $(A_h,u_\delta)$, where $$\rho_U(u_\delta,u_T) \leq \delta, \qquad h = \sup_{z \in F_1,\, \Omega[z] \neq 0} \frac{\rho_U(A_hz,Az)}{\Omega[z]^{1/2}} < \infty.$$ Under these conditions the procedure for obtaining an approximate solution is the same, only instead of $M^\alpha[z,u_\delta]$ one has to consider the functional $$M^\alpha[z,u_\delta,A_h] = \rho_U^2(A_hz,u_\delta) + \alpha\Omega[z],$$ and the parameter $\alpha$ can be determined, for example, from the relation (see [TiAr]) $$\rho_U^2(A_hz_\alpha,u_\delta) = \bigl( \delta + h \Omega[z_\alpha]^{1/2} \bigr)^2.$$

If (1) has an infinite set of solutions, one introduces the concept of a normal solution. Suppose that $Z$ is a normed space. Then one can take, for example, a solution $\bar{z}$ for which the deviation in norm from a given element $z_0 \in Z$ is minimal, that is, $$\norm{\bar{z} - z_0}_Z = \inf_{z \in Z} \norm{z - z_0}_Z .$$ An approximation to a normal solution that is stable under small changes in the right-hand side of (1) can be found by the regularization method described above. The class of problems with infinitely many solutions includes degenerate systems of linear algebraic equations. So-called badly-conditioned systems of linear algebraic equations can be regarded as systems obtained from degenerate ones when the operator $A$ is replaced by its approximation $A_h$. As a normal solution of a corresponding degenerate system one can take a solution $\bar{z}$ of minimal norm $\norm{z}$. Approximate solutions of badly-conditioned systems can then also be found by the regularization method with $\Omega[z] = \norm{z}^2$ in the smoothing functional (see [TiAr]).

Similar methods can be used to solve a Fredholm integral equation of the second kind in the spectrum, that is, when the parameter $\lambda$ of the equation is equal to one of the eigenvalues of the kernel.

## Instability problems in the minimization of functionals.

A number of problems important in practice lead to the minimization of functionals $f[z]$. One distinguishes two types of such problems. In the first type one has to find a minimal (or maximal) value of the functional. Many problems in the design of optimal systems or constructions fall in this class. For such problems it is irrelevant on what elements the required minimum is attained. Therefore, as approximate solutions of such problems one can take the values of the functional $f[z]$ on any minimizing sequence $\set{z_n}$.

In the second type of problems one has to find elements $z$ on which the minimum of $f[z]$ is attained; these are called problems of minimization over the argument. In them, minimizing sequences may fail to converge, and one cannot take the elements of a minimizing sequence as approximate solutions. Such problems are called unstable or ill-posed. These include, for example, problems of optimal control in which the function to be optimized (the object function) depends only on the phase variables.

Suppose that $f[z]$ is a continuous functional on a metric space $Z$ and that there is an element $z_0 \in Z$ minimizing $f[z]$. A minimizing sequence $\set{z_n}$ of $f[z]$ is called regularizing if there is a compact set $\hat{Z}$ in $Z$ containing $\set{z_n}$. If the minimization problem for $f[z]$ has a unique solution $z_0$, then a regularizing minimizing sequence converges to $z_0$, and under these conditions it is sufficient to exhibit algorithms for the construction of regularizing minimizing sequences. This can be done by using stabilizing functionals $\Omega[z]$.

Let $\Omega[z]$ be a stabilizing functional defined on a set $F_1 \subset Z$, let $\inf_{z \in F_1}f[z] = f[z_0]$ and let $z_0 \in F_1$. Frequently, instead of $f[z]$ one takes its $\delta$-approximation $f_\delta[z]$ relative to $\Omega[z]$, that is, a functional such that for every $z \in F_1$, $$\abs{f_\delta[z] - f[z]} \leq \delta\Omega[z].$$ Then for any $\alpha > 0$ the problem of minimizing the functional $$M^\alpha[z,f_\delta] = f_\delta[z] + \alpha \Omega[z]$$ over the argument is stable.

Let $\set{\delta_n}$ and $\set{\alpha_n}$ be null-sequences such that $\delta_n/\alpha_n \leq q < 1$ for every $n$, and let $\set{z_{\alpha_n,\delta_n}}$ be a sequence of elements minimizing $M^{\alpha_n}[z,f_{\delta_n}]$. This is a regularizing minimizing sequence for the functional $f[z]$ (see [TiAr]); consequently, it converges as $n \rightarrow \infty$ to the element $z_0$. As approximate solutions of the problem one can then take the elements $z_{\alpha_n,\delta_n}$.
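This construction can be illustrated in the simplest finite-dimensional setting. Below, $f[z] = (z_1 + 2z_2 - 5)^2$ is an illustrative functional with a whole line of minimizers; with $\Omega[z] = \norm{z}^2$, $\delta_n = 0$ (so $f$ is known exactly) and $\alpha_n \rightarrow 0$, the minimizers of $M^{\alpha_n}$ form a regularizing sequence converging to the minimum-norm minimizer:

```python
import numpy as np

a = np.array([1.0, 2.0])   # f[z] = (a.z - 5)^2 is minimized on the line a.z = 5

def minimize_smoothed(alpha):
    # Euler equation of M^alpha[z] = (a.z - 5)^2 + alpha*||z||^2:
    # (a a^T + alpha I) z = 5 a
    return np.linalg.solve(np.outer(a, a) + alpha * np.eye(2), 5.0 * a)

iterates = [minimize_smoothed(10.0 ** (-k)) for k in range(1, 8)]
z_limit = iterates[-1]     # approaches the minimum-norm minimizer (1, 2)
```

Each iterate is the unique minimizer of the smoothed functional, and the sequence stays in a compact set and converges, unlike a general minimizing sequence of $f$ alone, which could wander along the line of minimizers.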

Approximate solutions of ill-posed problems in optimal control can be constructed in a similar way.

In applications ill-posed problems often occur in which the initial data contain random errors. For the construction of approximate solutions to such problems both deterministic and probabilistic approaches are possible (see [TiAr], [LaVa]).

#### References

- [Ar] V.Ya. Arsenin, "On a method for obtaining approximate solutions to convolution integral equations of the first kind", Proc. Steklov Inst. Math., 133 (1977) pp. 31–48; Trudy Mat. Inst. Steklov., 133 (1973) pp. 33–51
- [Ba] A.B. Bakushinskii, "A general method for constructing regularizing algorithms for a linear ill-posed equation in Hilbert space", USSR Comp. Math. Math. Phys., 7 : 3 (1968) pp. 279–287; Zh. Vychisl. Mat. i Mat. Fiz., 7 : 3 (1967) pp. 672–677
- [GoLeYa] A.V. Goncharskii, A.S. Leonov, A.G. Yagoda, "On the residual principle for solving nonlinear ill-posed problems", Soviet Math. Dokl., 15 (1974) pp. 166–168; Dokl. Akad. Nauk SSSR, 214 : 3 (1974) pp. 499–500
- [Iv] V.K. Ivanov, "On ill-posed problems", Mat. Sb., 61 : 2 (1963) pp. 211–223 (In Russian)
- [Iv2] V.K. Ivanov, "On linear problems which are not well-posed", Soviet Math. Dokl., 3 (1962) pp. 981–983; Dokl. Akad. Nauk SSSR, 145 : 2 (1962) pp. 270–272
- [Kr] A.V. Kryanev, "The solution of incorrectly posed problems by methods of successive approximations", Soviet Math. Dokl., 14 (1973) pp. 673–676; Dokl. Akad. Nauk SSSR, 210 : 1 pp. 20–22
- [La] M.M. Lavrent'ev, "Some improperly posed problems of mathematical physics", Springer (1967) (Translated from Russian)
- [LaLi] R. Lattès, J.L. Lions, "Méthode de quasi-réversibilité et applications", Dunod (1967)
- [LaVa] M.M. Lavrent'ev, V.G. Vasil'ev, "The posing of certain improper problems of mathematical physics", Sib. Math. J., 7 : 3 (1966) pp. 450–463; Sibirsk. Mat. Zh., 7 : 3 (1966) pp. 559–576
- [Ti] A.N. Tikhonov, "Solution of incorrectly formulated problems and the regularization method", Soviet Math. Dokl., 4 (1963) pp. 1035–1038; Dokl. Akad. Nauk SSSR, 151 : 3 (1963) pp. 501–504
- [Ti2] A.N. Tikhonov, "Regularization of incorrectly posed problems", Soviet Math. Dokl., 4 (1963) pp. 1624–1627; Dokl. Akad. Nauk SSSR, 153 : 1 (1963) pp. 49–52
- [Ti3] A.N. Tikhonov, "On stability of inverse problems", Dokl. Akad. Nauk SSSR, 39 : 5 (1943) pp. 176–179 (In Russian)
- [Ti4] A.N. Tikhonov, "On the stability of the functional optimization problem", USSR Comp. Math. Math. Phys., 6 : 4 (1966) pp. 28–33; Zh. Vychisl. Mat. i Mat. Fiz., 6 : 4 (1966) pp. 631–634
- [TiAr] A.N. Tikhonov, V.I. Arsenin, "Solution of ill-posed problems", Winston (1977) (Translated from Russian)
- [Vi] V.A. Vinokurov, "On the regularization of discontinuous mappings", USSR Comp. Math. Math. Phys., 11 : 5 (1971) pp. 1–21; Zh. Vychisl. Mat. i Mat. Fiz., 11 : 5 (1971) pp. 1097–1112

If $A$ is a bounded linear operator between Hilbert spaces, then, as also mentioned above, regularization operators can be constructed via spectral theory: if $U(\alpha,\lambda) \rightarrow 1/\lambda$ as $\alpha \rightarrow 0$, then under mild assumptions $U(\alpha,A^*A)A^*$ is a regularization operator (cf. [Gr]); for choices of the regularization parameter leading to optimal convergence rates for such methods see [EnGf]. For $U(\alpha,\lambda) = 1/(\alpha+\lambda)$, the resulting method is called Tikhonov regularization: the regularized solution $z_\alpha^\delta$ is defined via $(\alpha I + A^*A)z = A^*u_\delta$. A variant of this method in Hilbert scales has been developed in [Na], with parameter choice rules given in [Ne]. The parameter choice rule discussed in the article, given by $\rho_U(Az_\alpha^\delta,u_\delta) = \delta$, is called the discrepancy principle ([Mo]), or often the Morozov discrepancy principle.
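In finite dimensions the spectral construction can be checked directly against the Tikhonov equation $(\alpha I + A^*A)z = A^*u_\delta$: with the singular value decomposition $A = W\,\mathrm{diag}(s)\,V^T$, the choice $U(\alpha,\lambda) = 1/(\alpha+\lambda)$ acts on the singular components through the filter factors $s/(s^2+\alpha)$. A minimal sketch; the matrix, data, and $\alpha$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.05) / n   # ill-conditioned matrix
u = A @ np.cos(np.pi * t) + 1e-8 * rng.standard_normal(n)  # noisy data

alpha = 1e-6
W, sv, Vt = np.linalg.svd(A)
# U(alpha, A^*A) A^* filters each singular component by s/(s^2 + alpha) ...
z_spectral = Vt.T @ ((sv / (sv**2 + alpha)) * (W.T @ u))
# ... and must agree with solving the Tikhonov equation directly:
z_tikhonov = np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ u)
```

The filter damps components with $s^2 \ll \alpha$, which carry mostly noise, while leaving components with $s^2 \gg \alpha$ essentially untouched; other filter functions $U(\alpha,\lambda)$ (e.g. spectral cut-off) fit the same framework.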