Extremal problems, numerical methods

From Encyclopedia of Mathematics

{{TEX|done}}
 
Methods of computational mathematics that are applied to find the extrema (maxima or minima) of functions and functionals.
 
For example, the problem of optimal control consisting of the minimization of the functional

$$ \tag{1}
J(u) = \int\limits_{t_0}^{T} f^0(x(t), u(t), t)\, dt + F(x(T))
$$
  
 
under the conditions

$$ \tag{2}
\dot{x} = f(x, u(t), t), \quad t_0 \leq t \leq T; \quad x(t_0) = x_0,
$$

$$ \tag{3}
u(t) \in V(t), \quad t_0 \leq t \leq T,
$$
  
is usually considered in the function space $L_2^r[t_0, T]$. Here $x = (x^1 \dots x^n)$, $u = (u^1 \dots u^r)$, $f = (f^1 \dots f^n)$, $f^i(x, u, t)$, $i = 1 \dots n$, and $F(x)$ are given functions, $t_0$ and $T$ are known moments in time, $t_0 < T$, $x_0$ is a given initial point, $V(t)$ is a given set in the Euclidean space $E^r$ for every $t \in [t_0, T]$, and $L_2^r[t_0, T]$ is the Hilbert space of $r$-dimensional vector functions $u = u(t) = (u^1(t) \dots u^r(t))$, $t_0 \leq t \leq T$, where each function $u^i(t)$ is, together with its square, Lebesgue integrable over $[t_0, T]$ $(L_2^1[t_0, T] = L_2[t_0, T])$; the inner product of two functions $u$ and $v$ in this space is defined by
  
$$
\langle u, v \rangle_{L_2^r} = \int\limits_{t_0}^{T} \sum_{i=1}^{r} u^i(t) v^i(t)\, dt,
$$
  
 
and the norm by

$$
\| u \|_{L_2^r} = \left( \int\limits_{t_0}^{T} \sum_{i=1}^{r} | u^i(t) |^2\, dt \right)^{1/2}.
$$
 +
 
 +
For a specific degree of smoothness of the functions $f^i(x, u, t)$ and $F(x)$ the increment of the functional (1) can be represented in the form

$$ \tag{4}
J(u + h) - J(u) = - \int\limits_{t_0}^{T} \sum_{i=1}^{r} \frac{\partial H(u)}{\partial u^i} h^i(t)\, dt + R = \left\langle - \frac{\partial H(u)}{\partial u}, h \right\rangle_{L_2^r} + R, \quad R = o(\| h \|_{L_2^r}),
$$
  
 
where

$$ \tag{5}
H(x, \psi, u, t) = \sum_{i=1}^{n} \psi_i f^i(x, u, t) - f^0(x, u, t),
$$

$$
\frac{\partial H}{\partial u} = \left( \frac{\partial H}{\partial u^1} \dots \frac{\partial H}{\partial u^r} \right), \quad \left. \frac{\partial H(u)}{\partial u} = \frac{\partial H}{\partial u} \right|_{\begin{array}{c} u = u(t) \\ x = x(t; u) \\ \psi = \psi(t; u) \end{array}},
$$
  
$x = x(t; u)$ is the solution of the problem (2) for $u = u(t)$, and $\psi = \psi(t; u)$ is the solution of the adjoint problem

$$ \tag{6}
\left. \dot\psi_i = - \frac{\partial H}{\partial x^i} \right|_{\begin{array}{c} u = u(t) \\ x = x(t; u) \end{array}}, \quad t_0 \leq t \leq T, \quad i = 1 \dots n,
$$

$$ \tag{7}
\left. \psi_i(T) = \frac{\partial F}{\partial x^i} \right|_{x = x(T; u)}.
$$

Formula (4) implies that the functional (1) is differentiable in $L_2^r[t_0, T]$ and that its gradient is the vector function

$$ \tag{8}
J^\prime(u) = - \frac{\partial H(u)}{\partial u} \in L_2^r[t_0, T].
$$

Hence, (1)–(3) can be solved by applying various methods that use the gradient of the functional. For $V(t) = E^r$ one may apply the gradient method

$$
u_{k+1}(t) = u_k(t) + \alpha_k \frac{\partial H(u_k)}{\partial u}, \quad k = 0, 1, \dots.
$$

If $V(t) = \{ u = (u^1 \dots u^r) : \alpha_i(t) \leq u^i \leq \beta_i(t), \ i = 1 \dots r \}$, where $\alpha_i(t)$ and $\beta_i(t)$ are given functions in $L_2[t_0, T]$, then it is possible to apply the method of gradient projection:

$$
u_{k+1}(t) = P\left( u_k(t) + \alpha_k \frac{\partial H(u_k)}{\partial u} \right), \quad k = 0, 1, \dots,
$$
  
 
where

$$
P(v) = (w^1(t) \dots w^r(t)), \quad t_0 \leq t \leq T,
$$

$$
w^i(t) = \left\{
\begin{array}{ll}
\alpha_i(t) & \textrm{ for } v^i(t) \leq \alpha_i(t), \\
v^i(t) & \textrm{ for } \alpha_i(t) \leq v^i(t) \leq \beta_i(t), \\
\beta_i(t) & \textrm{ for } v^i(t) > \beta_i(t), \quad i = 1 \dots r. \\
\end{array}
\right.
$$

The parameter $\alpha_k > 0$ can be chosen from the condition $J(u_{k+1}) < J(u_k)$. The methods of the conditional gradient, dual gradients, etc. (see [[#References|[4]]]–[[#References|[6]]] and [[#References|[11]]]) can be adapted for the problem (1)–(3) analogously. If this problem is considered under the additional restrictions
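On a discrete time grid the projection operator $P$ and one gradient-projection step can be sketched as follows (an illustrative sketch, not part of the article: the grid, the bounds $\alpha_i$, $\beta_i$ and the sampled values of $\partial H(u_k)/\partial u$ are placeholder assumptions):

```python
import numpy as np

def project_box(v, alpha, beta):
    """Pointwise projection P: clip each sample v^i(t_j) into [alpha_i(t_j), beta_i(t_j)]."""
    return np.clip(v, alpha, beta)

def gradient_projection_step(u, dH_du, step, alpha, beta):
    """One iteration u_{k+1} = P(u_k + step * dH/du) of the gradient-projection method."""
    return project_box(u + step * dH_du, alpha, beta)

# Placeholder data: scalar control (r = 1) sampled at 5 time points.
u = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
dH_du = np.array([1.0, 1.0, -1.0, -1.0, 1.0])  # assumed samples of dH(u_k)/du
alpha = np.zeros(5)                             # lower bounds alpha_i(t_j)
beta = np.full(5, 1.2)                          # upper bounds beta_i(t_j)
u_next = gradient_projection_step(u, dH_du, 0.5, alpha, beta)
print(u_next)  # every component lies in [0, 1.2]
```

The fixed `step` here plays the role of $\alpha_k$; in practice it would be reduced until $J(u_{k+1}) < J(u_k)$, as described above.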
  
$$ \tag{9}
x(t; u) \in G(t), \quad t_0 \leq t \leq T,
$$

where $G(t)$ is a given set in $E^n$, then these restrictions can be taken into account by using the method of penalty functions (cf. [[Penalty functions, method of|Penalty functions, method of]]). For example, if
  
$$
G(t) = \{ x \in E^n : g_i(x, t) \leq 0, \ i = 1 \dots m; \ g_i(x, t) = 0, \ i = m+1 \dots s \},
$$
  
 
then as a penalty function one may take

$$
P(u) = \int\limits_{t_0}^{T} \left( \sum_{i=1}^{m} ( \max \{ g_i(x(t; u), t); 0 \} )^p + \sum_{i=m+1}^{s} | g_i(x(t; u), t) |^p \right) dt, \quad p \geq 1,
$$
  
and (1)–(3), (9) is replaced by the problem of minimizing the functional $\Phi_k(u) = J(u) + A_k P(u)$ under the conditions (2) and (3), where $A_k$ is the penalty coefficient and $\lim\limits_{k \rightarrow \infty} A_k = + \infty$.
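On a discretized trajectory the penalty functional above can be approximated as follows (a minimal sketch; the constraint set, the grid and the trajectory values are placeholder assumptions, and the integral is replaced by a left Riemann sum):

```python
def penalty(x_traj, t_grid, g_ineq, g_eq, p=2):
    """Left-Riemann-sum approximation of
    P(u) = integral of ( sum_i max{g_i(x(t;u),t), 0}^p + sum_i |g_i(x(t;u),t)|^p ) dt."""
    total = 0.0
    for j in range(len(t_grid) - 1):
        dt = t_grid[j + 1] - t_grid[j]
        x, t = x_traj[j], t_grid[j]
        s = sum(max(g(x, t), 0.0) ** p for g in g_ineq)  # only violated inequalities contribute
        s += sum(abs(g(x, t)) ** p for g in g_eq)        # equalities penalize any deviation
        total += s * dt
    return total

# Placeholder example: scalar state, G(t) = {x : x - 1 <= 0}, no equality constraints.
t_grid = [0.0, 0.5, 1.0]
x_traj = [0.5, 1.5, 2.0]  # the trajectory violates the constraint on part of [0, 1]
val = penalty(x_traj, t_grid, g_ineq=[lambda x, t: x - 1.0], g_eq=[], p=2)
print(val)  # 0.5 * 0 + 0.5 * 0.5**2 = 0.125
```

Adding $A_k$ times this value to a discretized $J(u)$ and letting $A_k \rightarrow \infty$ reproduces the scheme described in the text.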
  
 
Other methods for solving (1)–(3), (9) are based on Pontryagin's maximum principle and on dynamic programming (see [[Pontryagin maximum principle|Pontryagin maximum principle]]; [[Dynamic programming|Dynamic programming]] and [[Variational calculus, numerical methods of|Variational calculus, numerical methods of]]).
 
The problem of minimizing a quadratic functional subject to a system of linear ordinary differential equations or linear partial differential equations can be solved by applying the method of moments (see [[#References|[3]]] and [[#References|[8]]]). This method is described below when applied to the problem of minimizing the functional

$$ \tag{10}
J(u) = | x(T; u) - y |_{E^n}^2,
$$
  
where $x = x(t; u)$ is the solution of the problem

$$ \tag{11}
\dot{x} = A(t) x + B(t) u + f(t), \quad t_0 \leq t \leq T, \quad x(t_0) = x_0,
$$
  
and the controls $u = u(t) \in L_2^r[t_0, T]$ are such that

$$ \tag{12}
\| u \|_{L_2^r}^2 = \int\limits_{t_0}^{T} | u(t) |_{E^r}^2\, dt \leq R^2;
$$
  
here $A(t)$, $B(t)$ and $f(t)$ are given matrices of order $n \times n$, $n \times r$ and $n \times 1$, respectively, with piecewise-continuous entries on the interval $t_0 \leq t \leq T$, $0 < R \leq \infty$, $x_0, y \in E^n$ are given points, $| x |_{E^m}^2 = \langle x, x \rangle_{E^m}$, and $\langle x, y \rangle_{E^m} = \sum_{i=1}^{m} x^i y^i$ is the inner product in $E^m$. From the rule of [[Lagrange multipliers|Lagrange multipliers]] it follows that the control $u = u(t)$ is optimal in the problem (10)–(12) if and only if there is a number $\gamma \geq 0$ (the Lagrange multiplier for the constraint (12)) such that
  
$$ \tag{13}
J^\prime(u) + \gamma u = 0,
$$

$$ \tag{14}
\gamma ( \| u \|_{L_2^r} - R ) = 0, \quad \| u \|_{L_2^r} \leq R;
$$
  
here $\gamma = 0$ for $R = \infty$. Formulas (5)–(8) imply that the gradient $J^\prime(u)$ of the functional (10) in $L_2^r[t_0, T]$ has the form
  
$$
J^\prime(u) = B^T(t) \psi(t; u), \quad t_0 \leq t \leq T,
$$
  
where $\psi = \psi(t; u)$ is the solution of the problem
  
$$ \tag{15}
\dot\psi = - A^T(t) \psi, \quad t_0 \leq t \leq T,
$$

$$ \tag{16}
\psi(T) = 2 ( x(T; u) - y );
$$
  
here $A^T$ and $B^T$ are the transposed matrices of $A$ and $B$, respectively. Then condition (13) becomes
  
$$ \tag{17}
B^T(t) \psi(t; u) + \gamma u(t) = 0, \quad t_0 \leq t \leq T.
$$
  
 
Condition (16) is equivalent to

$$ \tag{18}
\int\limits_{t_0}^{T} \langle u(t), B^T(t) p_k(t) \rangle_{E^r}\, dt - \frac{1}{2} \langle \psi(T), p_k(T) \rangle_{E^n} = a_k,
$$
  
for $k = 1 \dots n$, where

$$
a_k = \langle y, p_k(T) \rangle_{E^n} - \langle x_0, p_k(t_0) \rangle_{E^n} - \int\limits_{t_0}^{T} \langle f(t), p_k(t) \rangle_{E^n}\, dt,
$$
  
and $p_k(t)$ is the solution of the system (15) satisfying $p_k(T) = e_k = (0 \dots 0, 1, 0 \dots 0)$ (a unit vector). Hence, the optimal control $u = u(t)$ in the problem (10)–(12) is determined by solving the system (14), (15), (17), (18) relative to the functions $u(t)$ and $\psi(t)$ and the number $\gamma \geq 0$. If $R = \infty$, then $\gamma = 0$, $\psi(t) = 0$ and (18) reduces to the problem of moments (see [[Moment problem|Moment problem]]): To find a function $u = u(t)$, knowing
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200110.png" /></td> </tr></table>
+
$$
 +
\int\limits _ {t _ {0} } ^ { T }  \langle  u ( t) , \phi _ {k} ( t) \rangle  d t  = a _ {k}  $$
  
with respect to the system <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200111.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200112.png" />.
+
with respect to the system $  \phi _ {k} ( t) = B  ^ {T} ( t) p _ {k} ( t) $,  
 +
$  k = 1 \dots n $.
  
System (14), (15), (17), (18) is a generalized moment problem for (10)–(12) with <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200113.png" /> (see [[#References|[3]]] and [[#References|[8]]]).
+
System (14), (15), (17), (18) is a generalized moment problem for (10)–(12) with $  0 < R < \infty $(
 +
see [[#References|[3]]] and [[#References|[8]]]).
  
Any solution <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200114.png" /> of (15) is uniquely representable in the form
+
Any solution $  \psi ( t) $
 +
of (15) is uniquely representable in the form
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200115.png" /></td> <td valign="top" style="width:5%;text-align:right;">(19)</td></tr></table>
+
$$ \tag{19 }
 +
\psi ( t)  = \sum _ { k= } 1 ^ { n }  \psi _ {k} p _ {k} ( t) ,\ \
 +
t _ {0} \leq  t \leq  T .
 +
$$
  
System (15), (17), (18) has a solution <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200116.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200117.png" /> for any fixed <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200118.png" />, and among all its solutions there is a unique one such that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200119.png" /> has the form
+
System (15), (17), (18) has a solution $  \psi ( t ;  \gamma ) $,  
 +
$  u ( t ;  \gamma ) $
 +
for any fixed $  \gamma \geq  0 $,  
 +
and among all its solutions there is a unique one such that $  u ( t ;  \gamma ) $
 +
has the form
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200120.png" /></td> <td valign="top" style="width:5%;text-align:right;">(20)</td></tr></table>
+
$$ \tag{20 }
 +
u ( t ; \gamma )  = \sum _ { k= } 1 ^ { n }  u _ {k} B  ^ {T}
 +
( t) p _ {k} ( t) ,\ \
 +
t _ {0} \leq  t \leq  T .
 +
$$
  
To find <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200121.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200122.png" /> one substitutes (19) and (20) in (17) and (18) and obtains a system of linear algebraic equations with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200123.png" />, from which <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200124.png" /> are determined uniquely and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200125.png" /> non-uniquely if the system <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200126.png" /> is linearly dependent. In the practical solution of the problem (10)–(12) it is advisable to set <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200127.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200128.png" /> from the beginning and to use (18) to determine <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200129.png" /> of the form (20). Next, it must be checked that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200130.png" />. If this inequality holds, then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200131.png" /> is the optimal control of (10)–(12) with minimal norm among all optimal controls; the set of all optimal controls in this case consists of controls of the form
+
To find $  \psi ( t ;  \gamma ) $
 +
and $  u ( t ;  \gamma ) $
 +
one substitutes (19) and (20) in (17) and (18) and obtains a system of linear algebraic equations with respect to $  \psi _ {1} \dots \psi _ {n} , u _ {1} \dots u _ {n} $,  
 +
from which $  \psi _ {1} \dots \psi _ {n} $
 +
are determined uniquely and $  u _ {1} \dots u _ {n} $
 +
non-uniquely if the system $  \{ {B  ^ {T} ( t) p _ {k} ( t) } : {k = 1 \dots n } \} $
 +
is linearly dependent. In the practical solution of the problem (10)–(12) it is advisable to set $  \gamma = 0 $
 +
and $  \psi ( t) = 0 $
 +
from the beginning and to use (18) to determine $  u ( t ;  0 ) $
 +
of the form (20). Next, it must be checked that $  \| u ( t ;  0 ) \| _ {L _ {2}  ^ {r} } \leq  R $.  
 +
If this inequality holds, then $  u ( t ;  0 ) $
 +
is the optimal control of (10)–(12) with minimal norm among all optimal controls; the set of all optimal controls in this case consists of controls of the form
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200132.png" /></td> </tr></table>
+
$$
 +
u ( t)  = u ( t ; 0 ) + v ( t) ,\ \
 +
t _ {0} \leq  t \leq  T ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200133.png" /> belongs to the orthogonal complement in <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200134.png" /> of the linear span of the system of functions
+
where $  v ( t) $
 +
belongs to the orthogonal complement in $  L _ {2}  ^ {r} [ t _ {0} ;  T ] $
 +
of the linear span of the system of functions
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200135.png" /></td> </tr></table>
+
$$
 +
\{ {B  ^ {T} ( t) p _ {k} ( t) } : {k = 1 \dots n } \}
 +
,
 +
$$
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200136.png" /></td> </tr></table>
+
$$
 +
\| v ( t) \| _ {L _ {2}  ^ {r} }  \leq  R  ^ {2} - \|
 +
u ( t ; 0 ) \| _ {L _ {2}  ^ {r} }  ^ {2} .
 +
$$
  
If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200137.png" />, then for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200138.png" /> the solutions <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200139.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200140.png" /> of the form (19) and (20), respectively, are found from (17) and (18), and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200141.png" /> is found from the equation
+
If $  \| u ( t ;  0 ) \| _ {L _ {2}  ^ {r} } > R $,  
 +
then for $  \gamma > 0 $
 +
the solutions $  \psi ( t ;  \gamma ) $
 +
and $  u ( t ;  \gamma ) $
 +
of the form (19) and (20), respectively, are found from (17) and (18), and $  \gamma $
 +
is found from the equation
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200142.png" /></td> <td valign="top" style="width:5%;text-align:right;">(21)</td></tr></table>
+
$$ \tag{21 }
 +
\| u ( t ; \gamma ) \| _ {L _ {2}  ^ {r} }  = R ,\  \gamma  > 0 .
 +
$$
  
The function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200143.png" /> of the variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200144.png" /> is continuous and strictly decreasing for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200145.png" />, and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200146.png" />; therefore, the desired <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200147.png" /> is determined uniquely from (21). The control <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200148.png" /> is optimal for the problem (10)–(12); for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/e/e037/e037200/e037200149.png" /> this problem has no other optimal controls.
+
The function $  \| u ( t ;  \gamma ) \| _ {L _ {2}  ^ {r} } $
 +
of the variable $  \gamma $
 +
is continuous and strictly decreasing for $  \gamma > 0 $,  
 +
and $  \lim\limits _ {\gamma \rightarrow + \infty }  \| u ( t ;  0 ) \| _ {L _ {2}  ^ {r} } = 0 $;  
 +
therefore, the desired $  \gamma = \gamma _ {*} $
 +
is determined uniquely from (21). The control $  u ( t ;  \gamma _ {*} ) $
 +
is optimal for the problem (10)–(12); for $  \| u ( t ;  0 ) \| _ {L _ {2}  ^ {r} } > R $
 +
this problem has no other optimal controls.
  
 
The method of moments is also applicable for the solution of the high-speed problem for the system (11) and other linear systems (see [[#References|[3]]] and [[#References|[8]]]).
 
The method of moments is also applicable for the solution of the high-speed problem for the system (11) and other linear systems (see [[#References|[3]]] and [[#References|[8]]]).
Line 159: Line 443:
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  V.M. Alekseev,  V.M. Tikhomirov,  S.V. Fomin,  "Optimal control" , Consultants Bureau  (1987)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  I.V. Beiko,  B.N. Bublik,  P.N. Zin'ko,  "Methods and algorithms for the solution of optimization problems" , Kiev  (1983)  (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  A.G. Butkovskii,  "Methods of control by means of systems with distributed parameters" , Moscow  (1975)  (In Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  F.P. Vasil'ev,  "Numerical methods for solving extremal problems" , Moscow  (1980)  (In Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  F.P. Vasil'ev,  "Methods for solving extremal problems" , Moscow  (1981)  (In Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  Yu.G. Evtushenko,  "Numerical optimization techniques" , Optim. Software  (1985)  (Translated from Russian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  A.I. Egorov,  "Optimal control by means of thermal and diffusion processes" , Moscow  (1978)  (In Russian)</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top">  N.N. Krasovskii,  "Theory of control by motion" , Moscow  (1968)  (In Russian)</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top">  J.-L. Lions,  "Optimal control of systems governed by partial differential equations" , Springer  (1971)  (Translated from French)</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top">  B.T. Polyak,  "Introduction to optimization" , Moscow  (1983)  (In Russian)</TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top">  J. Céa,  "Lectures on optimization: theory and algorithms" , Springer  (1978)</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top">  T.K. Sirazetdinov,  "Optimization of systems with distributed parameters"  (1977)  (In Russian)</TD></TR><TR><TD valign="top">[13]</TD> <TD valign="top">  A.N. Tikhonov,  V.I. Arsenin,  "Solutions of ill-posed problems" , Winston  (1977)  (Translated from Russian)</TD></TR><TR><TD valign="top">[14]</TD> <TD valign="top">  R.P. Fedorenko,  "Approximate solution of problems of optimal control" , Moscow  (1978)  (In Russian)</TD></TR><TR><TD valign="top">[15]</TD> <TD valign="top">  I. Ekeland,  R. Temam,  "Analyse convexe et problèmes variationnels" , Dunod  (1974)</TD></TR></table>
 
 
  
 
====Comments====
Constraints of the kind $  g _ {i} ( x, t) \leq  0 $ or $  g _ {i} ( x, t) = 0 $ are usually referred to as state constraints. Among the methods available for the numerical solution of [[Optimal control|optimal control]] problems, a distinction can be made between direct and indirect methods. With direct methods the optimal control problem is treated directly as a minimization problem, i.e. the method is started with an initial approximation of the solution, which is iteratively improved by minimizing the objective functional (cf. [[Objective function|Objective function]]) (augmented with a  "penalty"  term) along the direction of search. The direction of search is obtained via linearization of the problem. With indirect methods the optimality conditions, which must hold for a solution of the optimal control problem, are used to derive a multi-point boundary value problem. Solutions of the optimal control problem will also be solutions of this multi-point boundary value problem and hence the numerical solution of the multi-point boundary value problem yields a candidate for the solution of the optimal control problem.
  
 
Most direct methods are of the gradient type, i.e. they are function space analogues of the well-known [[Gradient method|gradient method]] for finite-dimensional non-linear programming problems [[#References|[a1]]]. These methods can be extended to Newton-like methods [[#References|[a1]]], [[#References|[a2]]] (cf. [[Newton method|Newton method]]). As indicated in the main article above, state constraints can be treated via a penalty-function approach. Numerical results, however, indicate that this approach yields an inefficient and inaccurate method for the solution of state-constrained optimal control problems [[#References|[a3]]]. Another way to treat state constraints is via a slack-variable transformation technique, using quadratic slack variables [[#References|[a4]]].

Latest revision as of 19:38, 5 June 2020


Methods of computational mathematics that are applied to find the extrema (maxima or minima) of functions and functionals.

The numerical solution of extremal problems considered in infinite-dimensional function spaces (for example, problems of optimal control of processes described by ordinary or partial differential equations) can be obtained by using appropriate generalizations of many methods of mathematical programming developed for the minimization or maximization of functions of finitely many variables. In this context it is very important to choose correctly the function space in which a concrete problem should be considered. Such a space is usually chosen by taking into account physical considerations, properties of admissible controls, properties of the solutions of the corresponding initial-boundary value problems for a fixed control, etc.

For example, the problem of optimal control consisting of the minimization of the functional

$$ \tag{1 } J ( u) = \int\limits _ {t _ {0} } ^ { T } f ^ { 0 } ( x ( t) , u ( t) , t) d t + F ( x ( T) ) $$

under the conditions

$$ \tag{2 } \dot{x} = f ( x , u ( t) , t ),\ \ t _ {0} \leq t \leq T ; \ \ x ( t _ {0} ) = x _ {0} , $$

$$ \tag{3 } u = u ( t) \in V ( t) ,\ t _ {0} \leq t \leq T , $$

is usually considered in the function space $ L _ {2} ^ {r} [ t _ {0} , T ] $. Here $ x = ( x ^ {1} \dots x ^ {n} ) $, $ u = ( u ^ {1} \dots u ^ {r} ) $, $ f = ( f ^ {1} \dots f ^ { n } ) $, $ f ^ { i } ( x , u , t ) $, $ i = 1 \dots n $, and $ F ( x) $ are given functions, $ t _ {0} $ and $ T $ are given moments of time, $ t _ {0} < T $, $ x _ {0} $ is a given initial point, $ V ( t) $ is a given set in the Euclidean space $ E ^ {r} $ for every $ t \in [ t _ {0} , T ] $, and $ L _ {2} ^ {r} [ t _ {0} , T ] $ is the Hilbert space of $ r $-dimensional vector functions $ u = u ( t) = ( u ^ {1} ( t) \dots u ^ {r} ( t) ) $, $ t _ {0} \leq t \leq T $, where each function $ u ^ {i} ( t) $ is, together with its square, Lebesgue integrable over $ [ t _ {0} , T ] $ $ ( L _ {2} ^ {1} [ t _ {0} , T ] = L _ {2} [ t _ {0} , T ] ) $; the inner product of two functions $ u $ and $ v $ in this space is defined by

$$ \langle u , v \rangle _ {L _ {2} ^ {r} } = \int\limits _ {t _ {0} } ^ { T } \sum _ {i=1} ^ { r } u ^ {i} ( t) v ^ {i} ( t) d t , $$

and the norm by

$$ \| u \| _ {L _ {2} ^ {r} } = \left ( \int\limits _ {t _ {0} } ^ { T } \sum _ {i=1} ^ { r } | u ^ {i} ( t) | ^ {2} d t \right ) ^ {1/2} . $$
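As a concrete illustration, this inner product and norm can be approximated by quadrature on a grid; in the sketch below the grid, the choice $ r = 2 $ and the sample functions are assumptions made for the example, not data from the article.

```python
import numpy as np

# Illustrative sketch: the L_2^r inner product and norm, approximated by
# the trapezoid rule on a uniform grid over [t0, T].  The choice r = 2 and
# the sample functions u, v are assumptions made for this example.
t0, T, r = 0.0, 1.0, 2
t = np.linspace(t0, T, 1001)                 # uniform time grid
u = np.vstack([np.sin(np.pi * t), t])        # u = (u^1, u^2)
v = np.vstack([np.cos(np.pi * t), 1.0 - t])  # v = (v^1, v^2)

def trapezoid(y, t):
    """Trapezoid rule on a uniform grid."""
    dt = t[1] - t[0]
    return dt * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def inner_l2r(u, v, t):
    """<u, v> = integral of sum_i u^i(t) v^i(t) over [t0, T]."""
    return trapezoid(np.sum(u * v, axis=0), t)

def norm_l2r(u, t):
    return np.sqrt(inner_l2r(u, u, t))

# sanity checks: <u, u> = ||u||^2, and here ||u||^2 = 1/2 + 1/3 = 5/6
assert abs(inner_l2r(u, u, t) - norm_l2r(u, t) ** 2) < 1e-12
assert abs(norm_l2r(u, t) ** 2 - 5.0 / 6.0) < 1e-4
```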

Under suitable smoothness assumptions on the functions $ f ^ { i } ( x , u , t ) $ and $ F ( x) $, the increment of the functional (1) can be represented in the form

$$ \tag{4 } J ( u + h ) - J ( u) = - \int\limits _ {t _ {0} } ^ { T } \sum _ {i=1} ^ { r } \frac{\partial H ( u) }{\partial u ^ {i} } h ^ {i} ( t) d t + R = $$

$$ = \ \langle - \frac{\partial H ( u) }{\partial u } , h \rangle _ {L _ {2} ^ {r} } + R ,\ R = o ( \| h \| _ {L _ {2} ^ {r} } ) , $$

where

$$ \tag{5 } H ( x , \psi , u , t ) = \sum _ {i=1} ^ { n } \psi _ {i} f ^ { i } ( x , u , t ) - f ^ { 0 } ( x , u , t ) , $$

$$ \frac{\partial H }{\partial u } = \left ( \frac{\partial H }{\partial u ^ {1} } \dots \frac{\partial H }{\partial u ^ {r} } \right ) ,\ \left . \frac{\partial H ( u) }{\partial u } = \frac{\partial H }{\partial u } \right | _ {\begin{array} {c} u = u ( t) \\ x = x ( t ; u ) \\ \psi = \psi ( t ; u ) \end{array} } , $$

$ x = x ( t ; u ) $ is the solution of the problem (2) for $ u = u ( t) $, and $ \psi = \psi ( t ; u ) $ is the solution of the adjoint problem

$$ \tag{6 } \left . \dot \psi _ {i} = - \frac{\partial H }{\partial x ^ {i} } \right | _ {\begin{array} {c} u = u ( t) \\ x = x ( t ; u ) \end{array} } ,\ \ t _ {0} \leq t \leq T ,\ \ i = 1 \dots n , $$

$$ \tag{7 } \left . \psi _ {i} ( T) = - \frac{\partial F }{\partial x ^ {i} } \right | _ {x = x ( T ; u ) } . $$

Formula (4) implies that the functional (1) is differentiable in $ L _ {2} ^ {r} [ t _ {0} , T ] $ and that its gradient is the vector function

$$ \tag{8 } J ^ \prime ( u) = - \frac{\partial H ( u) }{\partial u } \in L _ {2} ^ {r} [ t _ {0} , T ] . $$

Hence, (1)–(3) can be solved by applying various methods that use the gradient of the functional. For $ V ( t) = E ^ {r} $ one may apply the gradient method

$$ u _ {k+1} ( t) = u _ {k} ( t) + \alpha _ {k} \frac{\partial H ( u _ {k} ) }{\partial u } ,\ \ k = 0 , 1 ,\dots . $$
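The adjoint-based gradient can be verified numerically on the simplest scalar case. The sketch below uses a toy problem chosen for this example, not the article's general problem: $ \dot{x} = u $, $ x ( 0) = x _ {0} $, $ J ( u) = \frac{1}{2} \int _ {0} ^ {1} u ^ {2} d t + \frac{1}{2} x ( 1) ^ {2} $; for this problem the first variation gives $ J ^ \prime ( u) ( t) = u ( t) + x ( 1) $, which the code checks against a finite-difference approximation of the discretized functional.

```python
import numpy as np

# Toy check of an adjoint-based gradient (an illustrative assumption, not
# the article's general problem): minimize
#   J(u) = 1/2 * int_0^1 u(t)^2 dt + 1/2 * x(1)^2,  xdot = u, x(0) = x0.
# The first variation gives J'(u)(t) = u(t) + x(1), a constant shift of u.
N = 200
dt = 1.0 / N
x0 = 1.0
u = np.sin(np.linspace(0.0, 1.0, N))   # an arbitrary control iterate

def J(u):
    xT = x0 + dt * np.sum(u)           # forward Euler endpoint of xdot = u
    return 0.5 * dt * np.sum(u ** 2) + 0.5 * xT ** 2

def grad_J(u):
    xT = x0 + dt * np.sum(u)
    return u + xT                      # gradient from the adjoint calculation

# check a few components against finite differences of the discretized J
g = grad_J(u)
eps = 1e-6
for j in (0, N // 2, N - 1):
    up = u.copy()
    up[j] += eps
    fd = (J(up) - J(u)) / eps          # approximates dt * g[j]
    assert abs(fd - dt * g[j]) < 1e-6
```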

If $ V ( t) = \{ {u = ( u ^ {1} \dots u ^ {r} ) } : {\alpha _ {i} ( t) \leq u ^ {i} \leq \beta _ {i} ( t), i = 1 \dots r } \} $, where $ \alpha _ {i} ( t) $ and $ \beta _ {i} ( t) $ are given functions in $ L _ {2} [ t _ {0} , T ] $, then it is possible to apply the method of gradient projection:

$$ u _ {k+1} ( t) = P \left ( u _ {k} ( t) + \alpha _ {k} \frac{\partial H ( u _ {k} ) }{\partial u } \right ) ,\ \ k = 0 , 1 \dots $$

where

$$ P ( v) = ( w ^ {1} ( t) \dots w ^ {r} ( t) ) ,\ \ t _ {0} \leq t \leq T , $$

$$ w ^ {i} ( t) = \left \{ \begin{array}{ll} \alpha _ {i} ( t) &\textrm{ for } v ^ {i} ( t) < \alpha _ {i} ( t) , \\ v ^ {i} ( t) &\textrm{ for } \alpha _ {i} ( t) \leq v ^ {i} ( t) \leq \beta _ {i} ( t) , \\ \beta _ {i} ( t) &\textrm{ for } v ^ {i} ( t) > \beta _ {i} ( t) , \\ \end{array} \right . \ \ i = 1 \dots r . $$
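The pointwise projection and the gradient-projection iteration can be sketched as follows. The box bounds, the target and the step size are illustrative assumptions; the toy functional $ J ( u) = \frac{1}{2} \| u - w \| ^ {2} $ has gradient $ u - w $, and its minimizer over the box is the pointwise clipped target, which the final assertion checks.

```python
import numpy as np

# Sketch of the pointwise projection P and the gradient-projection
# iteration under box constraints alpha(t) <= u(t) <= beta(t).
# The bounds, the target w and the toy functional J(u) = 1/2 ||u - w||^2
# (with gradient u - w) are illustrative assumptions for this example.
N = 400
t = np.linspace(0.0, 1.0, N)
alpha = -0.5 * np.ones(N)              # lower bound alpha(t)
beta = 0.5 * np.ones(N)                # upper bound beta(t)
w = np.sin(2 * np.pi * t)              # target, partly outside the box

def P(v):
    """Pointwise projection onto the box [alpha(t), beta(t)]."""
    return np.minimum(np.maximum(v, alpha), beta)

u = np.zeros(N)
for k in range(100):
    grad = u - w                       # J'(u_k) for the toy functional
    u = P(u - 0.5 * grad)              # u_{k+1} = P(u_k - alpha_k J'(u_k))

# the constrained minimizer of J is the clipped target
assert np.max(np.abs(u - np.clip(w, alpha, beta))) < 1e-8
```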

The parameter $ \alpha _ {k} > 0 $ can be chosen from the condition $ J ( u _ {k+1} ) < J ( u _ {k} ) $. The methods of the conditional gradient, dual gradients, etc. (see [4]–[6] and [11]) can be adapted for the problem (1)–(3) analogously. If this problem is considered under the additional restrictions

$$ \tag{9 } x ( t ; u ) \in G ( t) ,\ \ t _ {0} \leq t \leq T , $$

where $ G ( t) $ is a given set in $ E ^ {n} $, then these restrictions can be taken into account by using the method of penalty functions (cf. Penalty functions, method of). For example, if

$$ G ( t) = \{ {x \in E ^ {n} } : {g _ {i} ( x , t ) \leq 0 ,\ i = 1 \dots m ; \ g _ {i} ( x , t ) = 0 ,\ i = m + 1 \dots s } \} , $$

then as a penalty function one may take

$$ P ( u) = \int\limits _ {t _ {0} } ^ { T } \left ( \sum _ {i=1} ^ { m } ( \max \{ g _ {i} ( x ( t ; u ) , t ) ; 0 \} ) ^ {p} + \sum _ {i=m+1} ^ { s } | g _ {i} ( x ( t ; u ) , t ) | ^ {p} \right ) d t ,\ \ p \geq 1 , $$

and (1)–(3), (9) is replaced by the problem of minimizing the functional $ \Phi _ {k} ( u) = J ( u) + A _ {k} P ( u) $ under the conditions (2) and (3), where $ A _ {k} $ is the penalty coefficient and $ \lim\limits _ {k \rightarrow \infty } A _ {k} = + \infty $.
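The penalty idea can be seen on a one-dimensional analogue, chosen purely as an illustration (with $ p = 2 $): minimize $ ( x - 2 ) ^ {2} $ subject to $ x - 1 \leq 0 $ via $ \Phi _ {k} ( x) = ( x - 2 ) ^ {2} + A _ {k} ( \max \{ x - 1 ; 0 \} ) ^ {2} $, whose unconstrained minimizer is available in closed form.

```python
# One-dimensional penalty-method analogue (illustrative, with p = 2):
#   minimize (x - 2)^2 subject to x - 1 <= 0,
# via Phi_k(x) = (x - 2)^2 + A_k * max(x - 1, 0)^2 with A_k -> infinity.
def argmin_phi(A):
    # For x > 1:  Phi'(x) = 2(x - 2) + 2A(x - 1) = 0  =>  x = (2 + A)/(1 + A)
    return (2.0 + A) / (1.0 + A)

xs = [argmin_phi(A) for A in (1.0, 10.0, 100.0, 1e4, 1e6)]
# the unconstrained minimizers of Phi_k approach the constrained solution x* = 1
assert all(x2 < x1 for x1, x2 in zip(xs, xs[1:]))
assert abs(xs[-1] - 1.0) < 1e-5
```

As $ A _ {k} $ grows, the minimizers approach the constrained solution from outside the feasible set, which is typical of exterior penalty methods.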

Other methods for solving (1)–(3), (9) are based on Pontryagin's maximum principle and on dynamic programming (see Pontryagin maximum principle; Dynamic programming and Variational calculus, numerical methods of).

The problem of minimizing a quadratic functional subject to a system of linear ordinary differential equations or linear partial differential equations can be solved by applying the method of moments (see [3] and [8]). This method is described below when applied to the problem of minimizing the functional

$$ \tag{10 } J ( u) = | x ( T ; u ) - y | _ {E ^ {n} } ^ {2} , $$

where $ x = x ( t ; u ) $ is the solution of the problem

$$ \tag{11 } \dot{x} = A ( t) x + B ( t) u + f ( t) ,\ \ t _ {0} \leq t \leq T ,\ \ x ( t _ {0} ) = x _ {0} , $$

and the controls $ u = u ( t) \in L _ {2} ^ {r} [ t _ {0} , T ] $ are such that

$$ \tag{12 } \| u \| _ {L _ {2} ^ {r} } ^ {2} = \ \int\limits _ {t _ {0} } ^ { T } | u ( t) | _ {E ^ {r} } ^ {2} d t \leq R ^ {2} ; $$

here $ A ( t) $, $ B ( t) $ and $ f ( t) $ are given matrices of order $ n \times n $, $ n \times r $ and $ n \times 1 $, respectively, with piecewise-continuous entries on the interval $ t _ {0} \leq t \leq T $, $ 0 < R \leq \infty $, $ x _ {0} , y \in E ^ {n} $ are given points, $ | x | _ {E ^ {m} } ^ {2} = \langle x , x \rangle _ {E ^ {m} } $, and $ \langle x , y \rangle _ {E ^ {m} } = \sum _ {i=1} ^ {m} x ^ {i} y ^ {i} $ is the inner product in $ E ^ {m} $. From the rule of Lagrange multipliers it follows that the control $ u = u ( t) $ is optimal in the problem (10)–(12) if and only if there is a number $ \gamma \geq 0 $ (the Lagrange multiplier for the constraint (12)) such that

$$ \tag{13 } J ^ \prime ( u) + \gamma u = 0 , $$

$$ \tag{14 } \gamma ( \| u \| _ {L _ {2} ^ {r} } - R ) = \ 0 ,\ \| u \| _ {L _ {2} ^ {r} } \leq R ; $$

here $ \gamma = 0 $ for $ R = \infty $. Formulas (5)–(8) imply that the gradient $ J ^ \prime ( u) $ of the functional (10) in $ L _ {2} ^ {r} [ t _ {0} , T ] $ has the form

$$ J ^ \prime ( u) = B ^ {T} ( t) \psi ( t ; u ) ,\ \ t _ {0} \leq t \leq T , $$

where $ \psi = \psi ( t ; u ) $ is the solution of the problem

$$ \tag{15 } \dot \psi = - A ^ {T} ( t) \psi ,\ \ t _ {0} \leq t \leq T , $$

$$ \tag{16 } \psi ( T) = 2 ( x ( T ; u ) - y ) ; $$

here $ A ^ {T} $ and $ B ^ {T} $ are the transposes of $ A $ and $ B $, respectively. Then condition (13) becomes

$$ \tag{17 } B ^ {T} ( t) \psi ( t ; u ) + \gamma u ( t) = 0 ,\ \ t _ {0} \leq t \leq T . $$

Condition (16) is equivalent to

$$ \tag{18 } \int\limits _ {t _ {0} } ^ { T } \langle u ( t), B ^ {T} ( t) p _ {k} ( t) \rangle _ {E ^ {r} } d t - \frac{1}{2} \langle \psi ( T) , p _ {k} ( T) \rangle _ {E ^ {n} } = a _ {k} , $$

for $ k = 1 \dots n $, where

$$ a _ {k} = \langle y , p _ {k} ( T) \rangle _ {E ^ {n} } - \langle x _ {0} , p _ {k} ( t _ {0} ) \rangle _ {E ^ {n} } - \int\limits _ {t _ {0} } ^ { T } \langle f ( t) , p _ {k} ( t) \rangle _ {E ^ {n} } d t , $$

and $ p _ {k} ( t) $ is the solution of the system (15) satisfying $ p _ {k} ( T) = e _ {k} = ( 0 \dots 0 , 1 , 0 \dots 0) $ (the $ k $-th unit vector). Hence, the optimal control $ u = u ( t) $ in the problem (10)–(12) is determined by solving the system (14), (15), (17), (18) relative to the functions $ u ( t) $ and $ \psi ( t) $ and the number $ \gamma \geq 0 $. If $ R = \infty $, then $ \gamma = 0 $, $ \psi ( t) = 0 $ and (18) reduces to the problem of moments (see Moment problem): To find a function $ u = u( t) $, knowing

$$ \int\limits _ {t _ {0} } ^ { T } \langle u ( t) , \phi _ {k} ( t) \rangle d t = a _ {k} $$

with respect to the system $ \phi _ {k} ( t) = B ^ {T} ( t) p _ {k} ( t) $, $ k = 1 \dots n $.
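In discretized form, the minimal-norm solution of such a moment problem can be computed from the Gram matrix of the functions $ \phi _ {k} $, since it lies in their linear span. In the sketch below the functions $ \phi _ {k} $ and the moments $ a _ {k} $ are illustrative stand-ins, not computed from a control system.

```python
import numpy as np

# Sketch of the (discretized) moment problem: find u of minimal L_2 norm with
#   int_0^1 u(t) phi_k(t) dt = a_k,  k = 1, ..., n.
# The minimal-norm solution lies in the span of the phi_k:
#   u = sum_k c_k phi_k,  with Gram-matrix system G c = a.
# The functions phi_k and the moments a_k are illustrative stand-ins here.
t = np.linspace(0.0, 1.0, 2001)
phis = np.vstack([np.ones_like(t), t, t ** 2])   # phi_1, phi_2, phi_3
a = np.array([1.0, 0.25, 0.5])                   # prescribed moments a_k

def trapezoid(y, t):
    """Trapezoid rule on a uniform grid."""
    dt = t[1] - t[0]
    return dt * (np.sum(y) - 0.5 * (y[0] + y[-1]))

G = np.array([[trapezoid(pi * pj, t) for pj in phis] for pi in phis])
c = np.linalg.solve(G, a)                        # solve the Gram system
u = c @ phis                                     # minimal-norm solution

moments = np.array([trapezoid(u * p, t) for p in phis])
assert np.max(np.abs(moments - a)) < 1e-6        # the moments are matched
```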

System (14), (15), (17), (18) is a generalized moment problem for (10)–(12) with $ 0 < R < \infty $ (see [3] and [8]).

Any solution $ \psi ( t) $ of (15) is uniquely representable in the form

$$ \tag{19 } \psi ( t) = \sum _ {k=1} ^ { n } \psi _ {k} p _ {k} ( t) ,\ \ t _ {0} \leq t \leq T . $$

System (15), (17), (18) has a solution $ \psi ( t ; \gamma ) $, $ u ( t ; \gamma ) $ for any fixed $ \gamma \geq 0 $, and among all its solutions there is a unique one such that $ u ( t ; \gamma ) $ has the form

$$ \tag{20 } u ( t ; \gamma ) = \sum _ {k=1} ^ { n } u _ {k} B ^ {T} ( t) p _ {k} ( t) ,\ \ t _ {0} \leq t \leq T . $$

To find $ \psi ( t ; \gamma ) $ and $ u ( t ; \gamma ) $ one substitutes (19) and (20) in (17) and (18) and obtains a system of linear algebraic equations with respect to $ \psi _ {1} \dots \psi _ {n} , u _ {1} \dots u _ {n} $, from which $ \psi _ {1} \dots \psi _ {n} $ are determined uniquely and $ u _ {1} \dots u _ {n} $ non-uniquely if the system $ \{ {B ^ {T} ( t) p _ {k} ( t) } : {k = 1 \dots n } \} $ is linearly dependent. In the practical solution of the problem (10)–(12) it is advisable to set $ \gamma = 0 $ and $ \psi ( t) = 0 $ from the beginning and to use (18) to determine $ u ( t ; 0 ) $ of the form (20). Next, it must be checked that $ \| u ( t ; 0 ) \| _ {L _ {2} ^ {r} } \leq R $. If this inequality holds, then $ u ( t ; 0 ) $ is the optimal control of (10)–(12) with minimal norm among all optimal controls; the set of all optimal controls in this case consists of controls of the form

$$ u ( t) = u ( t ; 0 ) + v ( t) ,\ \ t _ {0} \leq t \leq T , $$

where $ v ( t) $ belongs to the orthogonal complement in $ L _ {2} ^ {r} [ t _ {0} ; T ] $ of the linear span of the system of functions

$$ \{ {B ^ {T} ( t) p _ {k} ( t) } : {k = 1 \dots n } \} , $$

$$ \| v ( t) \| _ {L _ {2} ^ {r} } ^ {2} \leq R ^ {2} - \| u ( t ; 0 ) \| _ {L _ {2} ^ {r} } ^ {2} . $$

If $ \| u ( t ; 0 ) \| _ {L _ {2} ^ {r} } > R $, then for $ \gamma > 0 $ the solutions $ \psi ( t ; \gamma ) $ and $ u ( t ; \gamma ) $ of the form (19) and (20), respectively, are found from (17) and (18), and $ \gamma $ is found from the equation

$$ \tag{21 } \| u ( t ; \gamma ) \| _ {L _ {2} ^ {r} } = R ,\ \gamma > 0 . $$

The function $ \| u ( t ; \gamma ) \| _ {L _ {2} ^ {r} } $ of the variable $ \gamma $ is continuous and strictly decreasing for $ \gamma > 0 $, and $ \lim\limits _ {\gamma \rightarrow + \infty } \| u ( t ; \gamma ) \| _ {L _ {2} ^ {r} } = 0 $; therefore, the desired $ \gamma = \gamma _ {*} $ is determined uniquely from (21). The control $ u ( t ; \gamma _ {*} ) $ is optimal for the problem (10)–(12); for $ \| u ( t ; 0 ) \| _ {L _ {2} ^ {r} } > R $ this problem has no other optimal controls.
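Since $ \| u ( t ; \gamma ) \| _ {L _ {2} ^ {r} } $ is continuous and strictly decreasing in $ \gamma $, equation (21) can be solved by bisection. The following toy sketch uses a hypothetical stand-in norm function $ \gamma \mapsto 2 / ( 1 + \gamma ) $, which shares the required monotonicity; it is not the norm of any particular problem.

```python
# Solve ||u(.;gamma)|| = R for gamma by bisection, exploiting that the
# norm is continuous, strictly decreasing and tends to 0 as gamma grows.

def norm_u(gamma):
    # assumed monotone stand-in for ||u(.;gamma)||, not a real norm
    return 2.0 / (1.0 + gamma)

R = 0.5
lo, hi = 0.0, 1.0
while norm_u(hi) > R:              # expand until the root is bracketed
    hi *= 2.0
for _ in range(60):                # bisection on the decreasing function
    mid = 0.5 * (lo + hi)
    if norm_u(mid) > R:
        lo = mid
    else:
        hi = mid
gamma_star = 0.5 * (lo + hi)
# for this stand-in the exact root is 2/R - 1 = 3
assert abs(norm_u(gamma_star) - R) < 1e-12
```

In an actual computation, each evaluation of `norm_u` would require solving (17), (18) for $ u ( t ; \gamma ) $ of the form (20) and integrating its square.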

The method of moments is also applicable to the solution of the time-optimal (high-speed) problem for the system (11) and other linear systems (see [3] and [8]).

The above methods are also widely used for the numerical solution of problems of optimal control of processes described by partial differential equations.

The numerical realizations of many methods for solving problems of optimal control are based on the use of some technique for the approximate solution of the arising initial-boundary value problems (see Boundary value problem, numerical methods for partial differential equations) and on approximate computations of integrals (see Integration, numerical). As a result, the original problem of optimal control is replaced by a family of approximate problems depending on certain parameters (for example, on the steps of the difference grid). For questions regarding the construction of approximate problems and the investigation of their convergence, see [5].

Large classes of extremal problems are ill posed (see Ill-posed problems) and their solution requires the use of a regularization method (see [5] and [3]).

References

[1] V.M. Alekseev, V.M. Tikhomirov, S.V. Fomin, "Optimal control" , Consultants Bureau (1987) (Translated from Russian)
[2] I.V. Beiko, B.N. Bublik, P.N. Zin'ko, "Methods and algorithms for the solution of optimization problems" , Kiev (1983) (In Russian)
[3] A.G. Butkovskii, "Methods of control by means of systems with distributed parameters" , Moscow (1975) (In Russian)
[4] F.P. Vasil'ev, "Numerical methods for solving extremal problems" , Moscow (1980) (In Russian)
[5] F.P. Vasil'ev, "Methods for solving extremal problems" , Moscow (1981) (In Russian)
[6] Yu.G. Evtushenko, "Numerical optimization techniques" , Optim. Software (1985) (Translated from Russian)
[7] A.I. Egorov, "Optimal control by means of thermal and diffusion processes" , Moscow (1978) (In Russian)
[8] N.N. Krasovskii, "Theory of control by motion" , Moscow (1968) (In Russian)
[9] J.-L. Lions, "Optimal control of systems governed by partial differential equations" , Springer (1971) (Translated from French)
[10] B.T. Polyak, "Introduction to optimization" , Moscow (1983) (In Russian)
[11] J. Céa, "Lectures on optimization: theory and algorithms" , Springer (1978)
[12] T.K. Sirazetdinov, "Optimization of systems with distributed parameters" (1977) (In Russian)
[13] A.N. Tikhonov, V.I. [V.I. Arsenin] Arsenine, "Solutions of ill-posed problems" , Winston (1977) (Translated from Russian)
[14] R.P. Fedorenko, "Approximate solution of problems of optimal control" , Moscow (1978) (In Russian)
[15] I. Ekeland, R. Téman, "Analyse convexe et problèmes variationnels" , Dunod (1974)

Comments

Constraints of the kind $ g _ {i} ( x, t) \leq 0 $ or $ g _ {i} ( x, t) = 0 $ are usually referred to as state constraints. Among the methods available for the numerical solution of optimal control problems, a distinction can be made between direct and indirect methods. With direct methods the optimal control problem is treated directly as a minimization problem: the method is started with an initial approximation of the solution, which is iteratively improved by minimizing the objective functional (cf. Objective function), possibly augmented with a "penalty" term, along the direction of search. The direction of search is obtained via linearization of the problem. With indirect methods the optimality conditions, which must hold for a solution of the optimal control problem, are used to derive a multi-point boundary value problem. Solutions of the optimal control problem are also solutions of this multi-point boundary value problem, and hence the numerical solution of the multi-point boundary value problem yields a candidate for the solution of the optimal control problem.

Most direct methods are of the gradient type, i.e. they are function space analogues of the well-known gradient method for finite-dimensional non-linear programming problems [a1]. These methods can be extended to Newton-like methods [a1], [a2] (cf. Newton method). As indicated in the main article above, state constraints can be treated via a penalty-function approach. Numerical results, however, indicate that this approach yields an inefficient and inaccurate method for the solution of state-constrained optimal control problems [a3]. Another way to treat state constraints is via a slack-variable transformation technique, using quadratic slack variables [a4].
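A gradient-type direct method can be sketched on a discretized model problem (a hypothetical example, not taken from the article): minimize $ J ( u) = \int _ {0} ^ {1} ( x ^ {2} + u ^ {2} ) d t $ subject to $ \dot{x} = u $, $ x ( 0) = 1 $, discretized by the Euler scheme. The gradient of the discrete functional is computed from the discrete adjoint equation, then a fixed-step gradient descent is applied.

```python
import numpy as np

# Discretized model problem:
#   minimize  J(u) = h * sum_i (x_i^2 + u_i^2),
#   subject to  x_{i+1} = x_i + h*u_i,  x_0 = 1  (Euler grid, N steps).

N = 200
h = 1.0 / N

def simulate(u):
    x = np.empty(N + 1)
    x[0] = 1.0
    for i in range(N):
        x[i + 1] = x[i] + h * u[i]
    return x

def cost(u):
    x = simulate(u)
    return h * np.sum(x[:-1] ** 2 + u ** 2)

def gradient(u):
    x = simulate(u)
    p = np.zeros(N + 1)              # discrete adjoint, p_N = 0
    for i in range(N - 1, -1, -1):
        p[i] = p[i + 1] + 2 * h * x[i]
    return 2 * h * u + h * p[1:]     # dJ/du_i = 2h u_i + h p_{i+1}

u = np.zeros(N)                      # initial approximation u = 0
J_prev = cost(u)
for _ in range(500):                 # fixed-step gradient descent
    u -= 1.0 * gradient(u)
J_new = cost(u)
assert J_new < J_prev                # the objective has decreased
```

For this linear-quadratic example the continuous optimal value is $ \tanh 1 \approx 0.76 $, which the discrete iterates approach up to the discretization error; in general the step length would be chosen by a line search rather than fixed.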

A well-known indirect method is the one based on the numerical solution of the multi-point boundary value problem using multiple shooting [a3], [a5] (cf. Shooting method). For optimal control problems with state constraints, the right-hand side of the differential equation of the multi-point boundary value problem will, in general, be discontinuous at junction points, i.e. points where an inequality constraint changes from active to inactive or vice versa. These discontinuities require special attention [a1].
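An indirect method can be illustrated by single shooting on the same hypothetical model problem (multiple shooting subdivides $ [ t _ {0} , T ] $ into subintervals, but the principle is the same): for minimizing $ \int _ {0} ^ {1} ( x ^ {2} + u ^ {2} ) d t $ with $ \dot{x} = u $, $ x ( 0) = 1 $, the optimality conditions give $ u = - p / 2 $ and the two-point boundary value problem $ \dot{x} = - p / 2 $, $ \dot{p} = - 2 x $, $ x ( 0) = 1 $, $ p ( 1) = 0 $; the unknown initial value $ p ( 0) = s $ is found by shooting.

```python
import numpy as np

def rhs(x, p):
    # Hamiltonian system of the model problem: x' = -p/2, p' = -2x
    return -p / 2.0, -2.0 * x

def terminal_p(s, N=2000):
    """Integrate with p(0) = s by classical RK4 and return p(1)."""
    h = 1.0 / N
    x, p = 1.0, s
    for _ in range(N):
        k1 = rhs(x, p)
        k2 = rhs(x + h / 2 * k1[0], p + h / 2 * k1[1])
        k3 = rhs(x + h / 2 * k2[0], p + h / 2 * k2[1])
        k4 = rhs(x + h * k3[0], p + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        p += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return p

# p(1) is increasing in s for this linear system, so bisect on the
# shooting equation terminal_p(s) = 0.
lo, hi = 0.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if terminal_p(mid) < 0.0:
        lo = mid
    else:
        hi = mid
s_star = 0.5 * (lo + hi)
# the analytic solution of the BVP gives p(0) = 2*tanh(1)
assert abs(s_star - 2.0 * np.tanh(1.0)) < 1e-6
```

In nonlinear or state-constrained problems the shooting equation is solved by Newton-type iterations rather than bisection, and the small region of convergence mentioned below is precisely the sensitivity of $ p ( T) $ to the guessed $ p ( t _ {0} ) $.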

In general, the properties of direct and indirect methods are somewhat complementary. Direct methods tend to have a relatively large region of convergence and tend to be relatively inaccurate, whereas indirect methods generally have a relatively small region of convergence and tend to be relatively accurate.

References

[a1] A.E. Bryson, Y.-C. Ho, "Applied optimal control" , Hemisphere (1975)
[a2] E.R. Edge, W.F. Powers, "Function-space quasi-Newton algorithms for optimal control problems with bounded controls and singular arcs" J. Optimization Theory and Appl. , 20 (1976) pp. 455–479
[a3] K.H. Well, "Uebungen zu den optimalen Steuerungen" , Syllabus of the course "Optimierungsverfahren" of the Carl Cranz Gesellschaft , Oberpfaffenhofen, FRG (1983)
[a4] D.H. Jacobson, M.M. Lele, "A transformation technique for optimal control problems with a state variable constraint" IEEE Trans. Automatic Control , 14 : 5 (1969)
[a5] H. Maurer, "An optimal control problem with bounded state variables and control appearing linearly" SIAM J. Control Optimization , 15 (1977) pp. 345–362
[a6] D.P. Bertsekas, "Constrained optimization and Lagrange multiplier methods" , Acad. Press (1982)
[a7] P.L. Falb, J.L. de Jong, "Some successive approximation methods in control and oscillation theory" , Acad. Press (1969)
How to Cite This Entry:
Extremal problems, numerical methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Extremal_problems,_numerical_methods&oldid=46893
This article was adapted from an original article by F.P. Vasil'ev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article