Optimal control


A solution of a non-classical variational problem of optimal control (see Optimal control, mathematical theory of). In a typical case, an optimal control solves the problem of finding an extremum of a given functional along the trajectories of an ordinary differential equation that depends on parameters (controls, inputs), subject to whatever supplementary constraints the formulation of the problem prescribes. Depending on the class of controls under consideration, an optimal control may take the form of a function of time (in a problem of optimal programming control) or of a function of time and the current state (position) of the system (in a problem of optimal synthesis control).
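In one standard formulation (given here for illustration; the notation is not fixed by this article), the problem is to minimize a functional

$$J(u) = \int_{t_0}^{t_1} f_0(t,x(t),u(t))\,dt$$

over controls $u(\cdot)$ subject to

$$\dot{x} = f(t,x,u),\qquad x(t_0)=x_0,\qquad u(t)\in U,$$

where $U$ is a prescribed set of admissible control values. An optimal programming control is then an optimal function $u=u(t)$ for a fixed initial state, while an optimal synthesis control is a function $u=u(t,x)$ that is optimal from every position. For instance, for $\dot{x}=u$ with cost $\int_0^{\infty}(x^2+u^2)\,dt$, the optimal synthesis is $u(x)=-x$, and the programming control it generates from the initial state $x_0$ is $u(t)=-x_0e^{-t}$.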

In more complicated or more specialized problems, an optimal control can take the form of a generalized control: a function of time with values in a set of measures, a functional of a segment of a trajectory or of a set in phase space, a boundary condition for a partial differential equation, a many-valued mapping, a sequence of extreme elements of a non-stationary problem of mathematical programming, etc.
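For example, in the case of measure-valued controls the ordinary control $u(t)$ is replaced by a family of probability measures $\mu_t$ on the set $U$ (a relaxed, or generalized, control; the notation here is illustrative), and the dynamics and the cost functional are averaged accordingly:

$$\dot{x} = \int_U f(t,x,u)\,\mu_t(du),\qquad J(\mu) = \int_{t_0}^{t_1}\!\int_U f_0(t,x(t),u)\,\mu_t(du)\,dt.$$

Under suitable compactness and convexity assumptions, an optimal control exists in this relaxed class even when no ordinary optimal control does.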


Comments

For alternative terminology and references see Optimal control, mathematical theory of.

How to Cite This Entry:
Optimal control. A.B. Kurzhanskii (originator), Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Optimal_control&oldid=14436
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098