
Time-optimal control problem



One of the problems in the mathematical theory of optimal control (cf. Optimal control, mathematical theory of), consisting in the determination of the minimum time

$$ \tag{1} J( u) = t _ {1} $$

in which a controlled object, the movement of which is described by a system of ordinary differential equations

$$ \dot{x} = f( x, u),\ \ u \in U,\ \ f : \mathbf R ^ {n} \times \mathbf R ^ {p} \rightarrow \mathbf R ^ {n} , $$

can be transferred from a given initial position $ x( 0) = x _ {0} $ to a given final position $ x( t _ {1} ) = x _ {1} $. Here, $ x = x( t) $ is the $ n $- dimensional vector of phase coordinates, while $ u = u( t) $ is the $ p $- dimensional vector of controlling parameters (controls) which, for any $ t $, belong to a given closed admissible domain of controls $ U $.

The required minimum time $ t _ {1} $ is a functional (1) of the chosen control $ u( t) $. As the class of admissible controls, in which the time-optimal control is to be found, it is sufficient for the majority of applications to take the piecewise-continuous controls $ u( t) $, i.e. functions which are continuous for all values of $ t $ under consideration, except for a finite number of moments of time at which they may have discontinuities of the first kind. Strictly speaking, however, the more general class of Lebesgue-measurable functions $ u( t) $, $ 0 \leq t \leq t _ {1} $, should be considered.

The time-optimal control problem can be considered as a particular instance of the Bolza problem or the Mayer problem in variational calculus, and is obtained from these problems by the special form of the functional to be optimized. The time-optimal control $ u( t) $ must satisfy the Pontryagin maximum principle, which is a necessary condition that generalizes the necessary conditions of Euler, Clebsch and Weierstrass, used in classical variational calculus.
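For the time-optimal problem these necessary conditions take the following standard form (here $ \psi ( t) = ( \psi _ {1} ( t) \dots \psi _ {n} ( t)) $ is a non-zero solution of the adjoint system):

$$ H( \psi , x, u) = \sum _ {i= 1 } ^ { n } \psi _ {i} f ^ {i} ( x, u),\ \ \dot \psi _ {i} = - \partial H / \partial x ^ {i} ,\ \ i = 1 \dots n, $$

and for almost all $ t \in [ 0, t _ {1} ] $ the optimal control maximizes the Hamiltonian,

$$ H( \psi ( t), x( t), u( t)) = \max _ {v \in U } H( \psi ( t), x( t), v); $$

moreover, for the autonomous system considered here this maximum is constant in $ t $ and non-negative.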

For linear time-optimal control problems, certain conclusions can be drawn from the necessary conditions regarding the qualitative structure of the optimal control. Problems which satisfy the following three conditions are called linear time-optimal control problems ([1], [2]):

1) the equations describing the motion of the object are linear in $ x $ and $ u $:

$$ \dot{x} = Ax + Bu, $$

where $ A $ and $ B $ are constant $ ( n \times n) $- and $ ( n \times p) $- matrices, respectively;

2) the final position $ x _ {1} $ coincides with the coordinate origin, which is an equilibrium position of the object if $ u = 0 $;

3) the domain of controls $ U $ is a $ p $- dimensional convex polyhedron, such that the coordinate origin of the $ u $- space belongs to $ U $ but is not a vertex of it.

Let the condition of general position be fulfilled, consisting of the linear independence of the vectors

$$ Bw, ABw \dots A ^ {n-1} Bw, $$

where $ w $ is an arbitrary $ p $- dimensional vector parallel to an edge of the polyhedron $ U $. Then a control $ u( t) $, $ 0 \leq t \leq t _ {1} $, transferring the object from a given initial position $ x _ {0} $ to an equilibrium position (the coordinate origin in the $ x $- space) is a time-optimal control if and only if the Pontryagin maximum principle holds for it. Furthermore, the optimal control $ u( t) $ in the linear time-optimal control problem is piecewise constant, and the vertices of the polyhedron $ U $ are its only values.
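The condition of general position can be checked numerically: for each edge direction $ w $ of $ U $ one forms the $ n \times n $ matrix with columns $ Bw, ABw \dots A ^ {n-1} Bw $ and verifies that it has full rank. The following minimal sketch (Python with numpy; the matrices and the edge direction are illustrative choices, not taken from the article) performs this computation.

import numpy as np

def general_position_holds(A, B, w, tol=1e-10):
    """Check linear independence of Bw, ABw, ..., A^(n-1) B w."""
    n = A.shape[0]
    v = B @ w                        # Bw, an n-vector
    cols = [v]
    for _ in range(n - 1):
        v = A @ v                    # next vector A^k B w
        cols.append(v)
    K = np.column_stack(cols)        # n x n matrix [Bw, ABw, ..., A^(n-1) B w]
    return np.linalg.matrix_rank(K, tol=tol) == n

# Illustrative data: double integrator y'' = u with scalar control.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
w = np.array([1.0])                  # direction of the single edge of U = [-1, 1]
print(general_position_holds(A, B, w))   # True: Bw and ABw are linearly independent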

In general, the number of jumps of $ u( t) $, although finite, can be arbitrarily large. In the following important case, however, the number of jumps admits an upper bound.

If the polyhedron $ U $ is the $ p $- dimensional parallelepiped

$$ a ^ {s} \leq u ^ {s} \leq b ^ {s} ,\ \ s = 1 \dots p, $$

and all the eigenvalues of the matrix $ A $ are real, then every one of the components $ u ^ {s} ( t) $, $ s = 1 \dots p $, of the optimal control $ u( t) $ is a piecewise-constant function, taking only the values $ a ^ {s} $ and $ b ^ {s} $ and having at most $ n- 1 $ jumps, i.e. at most $ n $ intervals of constancy.
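A standard illustration is the double integrator $ \ddot{y} = u $, $ | u | \leq 1 $, with phase coordinates $ x ^ {1} = y $, $ x ^ {2} = \dot{y} $: here $ n = 2 $, both eigenvalues of $ A $ are zero and hence real, so the optimal control steering a point to the origin takes only the values $ \pm 1 $ and has at most one jump. For example, for the initial state $ y( 0) = y _ {0} > 0 $, $ \dot{y} ( 0) = 0 $, the optimal control equals $ - 1 $ on the first half of the motion and $ + 1 $ on the second, with a single switch, and the minimum time is $ t _ {1} = 2 \sqrt{y _ {0} } $.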

The problem of time-optimal control can also be studied for non-autonomous systems, i.e. for systems whose right-hand side $ f $ depends on the time $ t $.

In those cases where this can be done, it is useful to consider the time-optimal control problem not only in the programming formulation described above, but also in a positional formulation, as a synthesis problem (see Optimal synthesis control). The solution of the synthesis problem gives a qualitative picture of the structure of the time-optimal control transferring the system from an arbitrary point of a neighbourhood of the initial position $ x _ {0} $ to the given final position $ x _ {1} $.
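For the double integrator example above, the synthesis problem has a classical explicit solution: the time-optimal feedback is $ u = - \mathop{\rm sign} ( y + \dot{y} | \dot{y} | / 2 ) $, with $ u = - \mathop{\rm sign} \dot{y} $ on the switching curve $ y + \dot{y} | \dot{y} | / 2 = 0 $. The following sketch (Python; the step size, tolerances and initial state are illustrative choices) simulates this feedback and shows that the resulting control is bang-bang with a single switch.

import math

def u_feedback(pos, vel):
    """Time-optimal synthesis for the double integrator, |u| <= 1."""
    s = pos + 0.5 * vel * abs(vel)        # switching function
    if abs(s) < 1e-9:                     # on the switching curve
        return -math.copysign(1.0, vel) if vel != 0 else 0.0
    return -math.copysign(1.0, s)

def simulate(pos, vel, dt=1e-4, t_max=20.0):
    """Integrate y'' = u(y, y') by explicit Euler until the origin is reached."""
    t, switches, u_prev = 0.0, 0, None
    while t < t_max and (abs(pos) > 1e-3 or abs(vel) > 1e-3):
        u = u_feedback(pos, vel)
        if u_prev is not None and u != u_prev:
            switches += 1
        u_prev = u
        pos += vel * dt
        vel += u * dt
        t += dt
    return t, switches

t1, switches = simulate(1.0, 0.0)
print(round(t1, 3), switches)    # about 2.0 (= 2*sqrt(1)) and exactly one switch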

References

[1] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, "The mathematical theory of optimal processes" , Wiley (1962) (Translated from Russian)
[2] V.G. Boltyanskii, "Mathematical methods of optimal control" , Holt, Rinehart & Winston (1971) (Translated from Russian)

Comments

The concept of a reachable set is a useful aid for visualizing properties of the optimal control. The reachable set is a function of the time, $ R( t) $, and consists of all points that can be reached at time $ t $, starting from $ x _ {0} $, and using admissible controls only. For linear time-optimal control problems this set is compact and convex for any $ t $. The minimum time $ t _ {1} $ obviously satisfies $ t _ {1} = \min \{ {t } : {x _ {1} \in R( t) } \} $. For more information on the number of jumps (switches) see [a2].
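As a simple numerical illustration of this characterization of $ t _ {1} $ (a sketch only; the system, grids and tolerances are illustrative choices), for the double integrator one can exploit the bang-bang structure: every relevant end point of $ R( t) $ is obtained by integrating $ u = \sigma $ on $ [ 0, \tau ] $ and $ u = - \sigma $ on $ [ \tau , t] $. Scanning over $ \sigma $, $ \tau $ and $ t $ gives a crude estimate of $ t _ {1} = \min \{ t : x _ {1} \in R( t) \} $.

import itertools
import numpy as np

def endpoint(pos0, vel0, sigma, tau, t):
    """End state of the double integrator from (pos0, vel0) under
    u = sigma on [0, tau] and u = -sigma on [tau, t]."""
    vel1 = vel0 + sigma * tau                          # first arc
    pos1 = pos0 + vel0 * tau + 0.5 * sigma * tau ** 2
    s = t - tau                                        # second arc
    vel2 = vel1 - sigma * s
    pos2 = pos1 + vel1 * s - 0.5 * sigma * s ** 2
    return pos2, vel2

pos0, vel0 = 1.0, 0.0            # illustrative initial state; the target is the origin
best_t = None
for t in np.arange(0.0, 4.0, 1e-3):
    for sigma, tau in itertools.product((-1.0, 1.0), np.linspace(0.0, t, 401)):
        pe, ve = endpoint(pos0, vel0, sigma, tau, t)
        if abs(pe) < 1e-2 and abs(ve) < 1e-2:
            best_t = t
            break
    else:
        continue
    break
print(best_t)   # close to the exact minimum time 2*sqrt(1) = 2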

References

[a1] J.P. LaSalle, "Functional analysis and time optimal control" , Acad. Press (1969)
[a2] G.J. Olsder, "Time-optimal control of multivariable systems near the origin" J. Optim. Theory & Appl., 15 (1975) pp. 497–517
This article was adapted from an original article by I.B. Vapnyarskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.