Maximum principle

discrete

The Pontryagin maximum principle for discrete-time control processes. For such a process the maximum principle need not be satisfied, even if the Pontryagin maximum principle is valid for its continuous analogue, obtained by replacing the finite-difference operator $x_{t+1} - x_t$ by the differential $dx/dt$. For example, consider the optimal control problem

$$ \tag{1} \max J(x_{T+1}), $$

$$ \tag{2} x_{t+1} = f_t(x_t, u_t), $$

$$ \tag{3} u_t \in U, \quad t = 0, \dots, T, $$

$$ \tag{4} x_0 = a. $$
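For concreteness, when $U$ is replaced by a finite grid, a problem of the form (1)–(4) can be solved by exhaustive enumeration of control sequences. The following Python sketch is not part of the original article; the dynamics $f_t$ and objective $J$ are hypothetical illustrative choices.

```python
# Minimal sketch of problem (1)-(4): scalar state, hypothetical dynamics and
# objective, U discretized to a finite grid, solved by brute-force enumeration.
import itertools

T = 2                                    # controls at t = 0, ..., T
a = 0.0                                  # initial state, condition (4)
U = [k / 10.0 for k in range(-10, 11)]   # discretization of U = [-1, 1], cf. (3)

def f(t, x, u):                          # dynamics (2); illustrative choice
    return x + u - 0.1 * x**2

def J(x):                                # terminal objective (1); illustrative
    return -(x - 1.0)**2

def rollout(us):
    """Apply (2) from the initial condition (4) and return x_{T+1}."""
    x = a
    for t, u in enumerate(us):
        x = f(t, x, u)
    return x

us_star = max(itertools.product(U, repeat=T + 1), key=lambda us: J(rollout(us)))
print(us_star, J(rollout(us_star)))      # best control sequence and its value
```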

Problem (1)–(4) can be treated as a standard problem on an extremum in the presence of constraints. The condition for optimality of a trajectory $\{x_t^*, u_t^*\}$ can then be obtained with the help of the Lagrange function

$$ L = J(x_{T+1}) + \sum_{t=0}^{T} \bigl( p_{t+1} f_t(x_t, u_t) - p_{t+1} x_{t+1} \bigr). $$

Here

$$ p_{t+1} f_t(x_t, u_t) = H_t, $$

by analogy with the continuous case, is called the Hamilton function. Let $J$, $f_t$, $t = 0, \dots, T$, be differentiable functions with respect to all variables and let $U$ be a bounded closed set. Then for a solution $\{x_t^*, u_t^*\}$ of (1)–(4) to be optimal it is necessary that there exist Lagrange multipliers $\{p_{t+1}^*\}$ such that $(x_t^*, u_t^*, p_{t+1}^*)$ is a stationary point of the Lagrange function, that is, at this point

$$ \frac{\partial L}{\partial x} = 0; \qquad \frac{\partial L}{\partial p} = 0; \qquad \delta_u L = \frac{\partial L}{\partial u}\, \delta u \leq 0 $$

for all admissible variations of the control $\delta u$. The first condition leads to the dynamical equations of the discrete process (2) and the initial condition (4). The second leads to the boundary condition and the conjugate system for the impulses $\{p_{t+1}\}$ (differentiating $L$ with respect to $x_{T+1}$ gives the boundary condition, while an interior state $x_t$ enters the sum through $p_{t+1} f_t(x_t, u_t)$ and $-p_t x_t$, which yields the recursion):

$$ p_{T+1} = \frac{\partial J}{\partial x_{T+1}}, $$

$$ p_t = p_{t+1} \frac{\partial f_t}{\partial x_t}, \quad t = T, \dots, 0. $$
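Numerically, the conjugate system is a backward recursion along a fixed trajectory (not necessarily an optimal one). The sketch below continues the hypothetical example above, reusing `T`, `a`, `f` and `J`; derivatives are taken by central differences, and the impulses $p_t$ are checked against the sensitivity of the objective to a perturbation of the state $x_t$.

```python
# Backward recursion for the impulses p_t along a trajectory, reusing
# T, a, f, J from the previous sketch; derivatives by central differences.
EPS = 1e-6

def states(us):
    xs = [a]                             # x_0, ..., x_{T+1}
    for t, u in enumerate(us):
        xs.append(f(t, xs[-1], u))
    return xs

def impulses(xs, us):
    p = [0.0] * (T + 2)                  # p[t] stands for p_t
    xT1 = xs[T + 1]
    p[T + 1] = (J(xT1 + EPS) - J(xT1 - EPS)) / (2 * EPS)   # boundary condition
    for t in range(T, -1, -1):           # p_t = p_{t+1} * (df_t/dx_t)
        dfdx = (f(t, xs[t] + EPS, us[t]) - f(t, xs[t] - EPS, us[t])) / (2 * EPS)
        p[t] = p[t + 1] * dfdx
    return p

def value_from(t, x, us):
    """Roll (2) forward from state x at time t and evaluate (1)."""
    for s in range(t, T + 1):
        x = f(s, x, us[s])
    return J(x)

us = (0.5, 0.3, 0.1)                     # any fixed control sequence
xs = states(us)
p = impulses(xs, us)
for t in range(T + 1):                   # p_t agrees with dJ(x_{T+1})/dx_t
    fd = (value_from(t, xs[t] + EPS, us) - value_from(t, xs[t] - EPS, us)) / (2 * EPS)
    print(t, p[t], fd)
```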

The third condition leads to a condition on the first variation of the Hamilton function:

$$ \tag{5} \delta_u H_t = \frac{\partial H_t}{\partial u_t}\, \delta u_t \leq 0. $$

However, (5) does not imply that the Hamilton function attains its maximum at the optimal control,

$$ H_t(x_t^*, u_t^*, p_{t+1}^*) = \max_{u \in U} H_t(x_t^*, u, p_{t+1}^*), $$

over all controls satisfying the constraint (3); it shows only that $u_t^*$ is a stationary point of the Hamilton function. If the first variation of the Hamilton function, $(\partial H_t / \partial u_t)\, \delta u_t$, is zero (this holds, in particular, when $u_t^*$ is an interior point or when there are admissible variations $\delta u_t$ of the control at $u_t^*$ orthogonal to $\partial H_t / \partial u_t$), then the nature of the stationary point is determined by the successive terms in the expansion:

$$ H_t(u_t^* + \epsilon\, \delta u_t) - H_t(u_t^*) = \epsilon \frac{\partial H_t}{\partial u_t}\, \delta u_t + \frac{\epsilon^2}{2} \frac{\partial^2 H_t}{\partial u_t^2}\, (\delta u_t)^2 + \dots. $$

Examples have been constructed in which $u_t^*$ is a local maximum, a local minimum, or even a saddle point of the Hamilton function. Therefore, in general, the maximum principle does not hold for discrete systems.
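One such situation is easy to exhibit numerically. The following construction is our own illustration, not one of the examples referred to above: take a single step ($T = 0$), a two-dimensional state $x = (y, z)$, dynamics $f_0(x_0, u) = (u, u^2)$, objective $J(y, z) = -y^2 + z - \varepsilon z^2$ with $\varepsilon > 0$, and $U = [-1, 1]$. Along the trajectory $J = -\varepsilon u^4$, so $u^* = 0$ is optimal; but $p_1 = \nabla J(0, 0) = (0, 1)$, hence $H_0(u) = p_1 \cdot f_0(x_0, u) = u^2$, which attains its minimum, not its maximum, at $u^*$, while the stationarity condition (5) still holds since $\partial H_0 / \partial u = 2u$ vanishes there.

```python
# Numerical check of a one-step example (T = 0) in which the optimal control
# is a minimum of the Hamilton function; construction is ours, for illustration.
import numpy as np

EPS_REG = 0.1                                    # the epsilon > 0 in J

def f0(u):                                       # dynamics: x_1 = (u, u^2)
    return np.array([u, u**2])

def J(x):                                        # objective on the 2-D state
    return -x[0]**2 + x[1] - EPS_REG * x[1]**2

grid = np.linspace(-1.0, 1.0, 2001)              # discretized U = [-1, 1]
vals = np.array([J(f0(u)) for u in grid])        # J along trajectories: -eps*u^4
u_star = grid[np.argmax(vals)]                   # optimal control: u* = 0

x_star = f0(u_star)
p1 = np.array([-2 * x_star[0], 1 - 2 * EPS_REG * x_star[1]])  # p_1 = grad J at x*
H = np.array([p1 @ f0(u) for u in grid])         # H_0(u) = p_1 . f_0(u) = u^2

print(u_star)                                    # 0.0
print(grid[np.argmin(H)], grid[np.argmax(H)])    # min at u*, max at the boundary
```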

For systems which are linear in the phase variables,

$$ x_{t+1} = A(u_t)\, x_t + \phi(u_t), $$

or in the controls,

$$ x_{t+1} = g(x_t) + B(x_t)\, u_t, $$

under the additional condition of linearity of the objective $J = c_{T+1} x_{T+1}$ in the first case, or convexity of $U$ in the second case, the maximum principle is satisfied (see [1]–[5]).

If the optimal control problem for a linear discrete system is treated as a linear programming problem (see [6], [7]), then its dual discrete-time dynamical problem can be obtained. The conjugate system for the impulses gives the dynamical equation of the dual problem. On an optimal trajectory not only the objectives but also the Hamilton functions of the two dual dynamical systems coincide.
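As a sketch of this reduction (again not from the article: the concrete system, bounds, and the use of SciPy's `linprog` are our assumptions), a scalar linear system with linear objective can be stacked into a single LP whose equality constraints encode the dynamics; the dual variables of those constraints then play the role of the impulses.

```python
# LP form of a linear discrete problem: maximize x_{T+1} subject to
# x_{t+1} = 0.9 x_t + u_t, x_0 = 1, u_t in [0, 1]; variables stacked as
# z = (x_0, ..., x_{T+1}, u_0, ..., u_T).
import numpy as np
from scipy.optimize import linprog

T = 2
n_x, n_u = T + 2, T + 1                  # number of state / control variables
c = np.zeros(n_x + n_u)
c[T + 1] = -1.0                          # linprog minimizes, so use -x_{T+1}

A_eq = np.zeros((T + 2, n_x + n_u))
b_eq = np.zeros(T + 2)
A_eq[0, 0], b_eq[0] = 1.0, 1.0           # x_0 = 1, the initial condition (4)
for t in range(T + 1):                   # x_{t+1} - 0.9 x_t - u_t = 0, i.e. (2)
    A_eq[t + 1, t + 1] = 1.0
    A_eq[t + 1, t] = -0.9
    A_eq[t + 1, n_x + t] = -1.0

bounds = [(None, None)] * n_x + [(0.0, 1.0)] * n_u   # u_t in U = [0, 1]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")

print(res.x[:n_x])                       # optimal trajectory x_0, ..., x_{T+1}
# With the HiGHS backend, res.eqlin.marginals holds the equality-constraint
# duals; the entries for the dynamics rows correspond, up to sign convention,
# to the impulses p_{t+1} of the conjugate system.
print(res.eqlin.marginals)
```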

References

[1] L.-Ts. Fan, Ch.-S. Wang, "The discrete maximum principle", Wiley (1964) MR0165981 Zbl 0143.42905 Zbl 0128.14503
[2] A.I. Propoi, "Elements of the theory of optimal processes", Moscow (1973) (In Russian)
[3] B.N. Pshenichnyi, "Necessary conditions for an extremum", M. Dekker (1971) (Translated from Russian) MR0276845 Zbl 0764.90079
[4] V.G. Boltyanskii, "Optimal control of discrete systems", Wiley (1978) (Translated from Russian) MR0528636 MR0514558
[5] R. Gabasov, F.M. Kirillova, "Extending L.S. Pontryagin's maximum principle to discrete systems" Automation and Remote Control, 27 (1966) pp. 1878–1882; Avtomatika i Telemekhanika, 11 (1966) pp. 46–51 MR0215632
[6] Yu.P. Ivanilov, "Conjugate (dual) linear dynamic optimization problems and procedures for their computation" Prikl. Mat. i Programmirov., 4 (1971) pp. 31–40 (In Russian) MR0286533
[7] Yu.P. Ivanilov, A.I. Propoi, "On linear dynamic programming problems" Soviet Math. Dokl., 12 : 3 (1971) pp. 926–930; Dokl. Akad. Nauk SSSR, 198 : 5 (1971) pp. 1011–1014 Zbl 0233.90015

Comments

For the maximum principle for analytic functions see Maximum-modulus principle.

References

[a1] A.E. Bryson, Y.-C. Ho, "Applied optimal control", Ginn (1969) MR0446628
This article was adapted from an original article by Yu.P. Ivanilov (originator), which appeared in the Encyclopedia of Mathematics (ISBN 1402006098).