Bellman equation

From Encyclopedia of Mathematics

1) A partial differential equation of a special type, used to solve a problem of optimal control. If the solution of the Cauchy problem for the Bellman equation can be found, the optimal solution of the original problem is readily obtained.
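As an illustration, the following is a sketch of one standard form of this equation for a deterministic, finite-horizon problem; the dynamics $\dot x = f(x,u)$, running cost $g$, terminal cost $\Phi$ and value function $V$ are textbook notation assumed here, not taken from this article.

$$-\frac{\partial V}{\partial t}(t,x)=\min_{u\in U}\Big\{g(x,u)+\nabla_x V(t,x)\cdot f(x,u)\Big\},\qquad V(T,x)=\Phi(x),$$

where $V(t,x)$ is the optimal cost-to-go from state $x$ at time $t$. The Cauchy problem mentioned above is the terminal-value problem for this first-order partial differential equation, and a minimizing $u$ in the right-hand side gives an optimal control in feedback form.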

2) A recurrence relation for the solution of a discrete problem of optimal control. The method of obtaining the optimal solution with the aid of the Bellman equation is known as dynamic programming.
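As a hedged illustration of this recurrence solved by backward induction, the sketch below treats a small finite-horizon, finite-state problem; the names states, actions, step, cost, terminal_cost and backward_induction are illustrative assumptions, not part of the article.

# A minimal sketch of the Bellman recurrence (backward induction) for a
# finite-horizon, finite-state control problem; all names are illustrative.
def backward_induction(states, actions, step, cost, terminal_cost, horizon):
    # V[t][x] is the optimal cost-to-go from state x at time t.
    V = {horizon: {x: terminal_cost(x) for x in states}}
    policy = {}
    for t in range(horizon - 1, -1, -1):
        V[t], policy[t] = {}, {}
        for x in states:
            # Bellman equation: try every action, keep the cheapest continuation.
            best_a, best_v = min(
                ((a, cost(x, a) + V[t + 1][step(x, a)]) for a in actions),
                key=lambda pair: pair[1],
            )
            V[t][x], policy[t][x] = best_v, best_a
    return V, policy

# Toy usage: states 0..4, actions move left/stay/right, unit cost per move,
# terminal cost equal to the final state.
states = range(5)
actions = (-1, 0, 1)
V, policy = backward_induction(
    states, actions,
    step=lambda x, a: min(max(x + a, 0), 4),   # clamp to the state space
    cost=lambda x, a: abs(a),
    terminal_cost=lambda x: float(x),
    horizon=4,
)
print(V[0][4])   # optimal cost-to-go from state 4 at time 0

Here V[t][x] satisfies the Bellman recurrence V[t][x] = min over a of { cost(x,a) + V[t+1][step(x,a)] }, and policy[t][x] records a minimizing action.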

References

[1] R. Bellman, "Dynamic programming", Princeton Univ. Press (1957)


Comments

The Bellman equation for continuous-time optimal control problems is also often called the dynamic programming equation. Cf. the article Optimality, sufficient conditions for, for examples and more details. There is also a variant for stochastic optimal control problems.
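For orientation, a commonly used form of the stochastic variant is sketched below under the assumption that the state follows a controlled diffusion $dX_t=f(X_t,u_t)\,dt+\sigma(X_t,u_t)\,dW_t$; this notation is assumed here, not taken from the article.

$$-\frac{\partial V}{\partial t}(t,x)=\min_{u\in U}\Big\{g(x,u)+\nabla_x V(t,x)\cdot f(x,u)+\tfrac12\operatorname{tr}\big(\sigma(x,u)\sigma(x,u)^{\top}\nabla_x^2 V(t,x)\big)\Big\},\qquad V(T,x)=\Phi(x).$$

The second-order term is what distinguishes the stochastic dynamic programming equation from its deterministic counterpart.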

References

[a1] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic optimal control", Springer (1975)
How to Cite This Entry:
Bellman equation. V.G. Karmanov (originator), Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bellman_equation&oldid=11588
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098