Hamilton-Jacobi theory

From Encyclopedia of Mathematics

A branch of classical variational calculus and analytical mechanics in which the task of finding extremals (or the task of integrating a Hamiltonian system of equations) is reduced to the integration of a first-order partial differential equation — the so-called Hamilton–Jacobi equation. The fundamentals of the Hamilton–Jacobi theory were developed by W. Hamilton in the 1820s for problems in wave optics and geometrical optics. In 1834 Hamilton extended his ideas to problems in dynamics, and C.G.J. Jacobi (1837) applied the method to the general problems of classical variational calculus.

The starting points of the Hamilton–Jacobi theory were established in the 17th century by P. Fermat and Chr. Huygens, who worked on problems of geometrical optics (cf. Fermat principle; Huygens principle). Below, the footsteps of Hamilton are followed, and the problem of the propagation of light through an inhomogeneous (but, for the sake of simplicity, isotropic) medium is considered, where $v(x)$ is the local velocity of light at a point $x$. According to Fermat's principle, light propagates from one point to another in an inhomogeneous medium in the shortest possible time. Let $x_0$ be the starting point, and let $T(x)$ be the shortest possible time for the light to traverse the distance from $x_0$ to $x$. The function $T(x)$ is known as the eikonal or the optical length of the path. It is assumed that during a short time $dt$ the light travels from the point $x$ to the point $x + dx$. According to the Huygens principle, light travels, apart from small magnitudes of a higher order, along the normal $n$ to the level surface of the function $T$. Thus, the equation

$$\frac{dx}{dt} = v(x)\, n, \qquad n = \frac{\operatorname{grad} T}{\|\operatorname{grad} T\|},$$

is satisfied, and the Hamilton–Jacobi equation for problems in geometrical optics follows:

$$\|\operatorname{grad} T\|^2 = \frac{1}{v^2(x)}.$$

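In one dimension the eikonal equation reduces to a quadrature, $T(x) = \int_0^x ds / v(s)$, which is easy to sketch numerically. The following minimal example (the speed field, grid and function name are illustrative assumptions, not taken from the article) integrates the slowness $1/v$ and checks the constant-speed case.

```python
import numpy as np

# A minimal numerical sketch of the one-dimensional eikonal equation
# |T'(x)| = 1 / v(x): for light moving in one direction the travel time
# is T(x) = integral from 0 to x of ds / v(s).

def travel_time(v, xs):
    """Cumulative travel time T over the grid xs for a speed function v."""
    slowness = 1.0 / v(xs)              # 1/v, the right-hand side of |T'| = 1/v
    dx = np.diff(xs)
    # trapezoidal cumulative integral of the slowness, with T(xs[0]) = 0
    T = np.concatenate(([0.0],
                        np.cumsum(0.5 * (slowness[1:] + slowness[:-1]) * dx)))
    return T

xs = np.linspace(0.0, 1.0, 101)
# Constant speed v = 2: the eikonal equation predicts T(1) = 1/2.
T_const = travel_time(lambda x: 2.0 * np.ones_like(x), xs)
print(T_const[-1])
```

The eikonal $T$ is nondecreasing along the ray, since the slowness $1/v$ is positive.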
In analytical mechanics the role of Fermat's principle is played by the variational Hamilton–Ostrogradski principle, while the role of the eikonal is played by the action functional, i.e. by the integral

$$S(t, x) = \int_{t_0}^{t} L(\tau, x(\tau), \dot x(\tau))\, d\tau \tag{1}$$

along a trajectory $x(\tau)$ connecting a given point $(t_0, x_0)$ with the point $(t, x)$, where $L$ is the Lagrange function of the mechanical system.

It was suggested by Jacobi that a function resembling the action functional (1) should be used in solving all problems of classical variational calculus. The extremals of the problem issuing from the point $(t_0, x_0)$ intersect the level surfaces of the principal function $S$ transversally (cf. Transversality condition); the form of the differential of the action functional

$$dS = p\, dx - H\, dt$$

is deduced from this condition. Here $p = \partial L / \partial \dot x$, and $H = p \dot x - L$ is the Hamilton function (see also Legendre transform).

The last-mentioned relation yields the following equation for the function $S(t, x)$:

$$\frac{\partial S}{\partial t} + H\!\left(t, x, \frac{\partial S}{\partial x}\right) = 0. \tag{2}$$

This is the Hamilton–Jacobi equation.

The most important result of the Hamilton–Jacobi theory is Jacobi's theorem, which states that a complete integral of equation (2), i.e. a solution $S(t, x, \alpha)$ of this equation depending on $n$ parameters $\alpha = (\alpha_1, \dots, \alpha_n)$ (provided that $\det \| \partial^2 S / \partial x \, \partial \alpha \| \neq 0$), makes it possible to obtain the general integral of the Euler equation for the functional (1) or, which is the same thing, of the Hamiltonian system connected with this functional, by the formulas $\partial S / \partial \alpha = \beta$, $\partial S / \partial x = p$. The application of Jacobi's theorem to the integration of Hamiltonian systems is usually based on the method of separation of variables in special coordinates.
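As a simple illustration of how Jacobi's theorem works (a standard textbook example, not taken from the article itself), take a free particle of mass $m$ on the line, with Hamilton function $H = p^2/2m$:

```latex
% Hamilton--Jacobi equation for H(t,x,p) = p^2/(2m):
\[ \frac{\partial S}{\partial t} + \frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^{2} = 0 . \]
% A complete integral with a single parameter \alpha
% (the non-degeneracy condition holds: \partial^2 S / \partial x \, \partial\alpha = 1 \neq 0):
\[ S(t, x, \alpha) = \alpha x - \frac{\alpha^{2}}{2m}\, t . \]
% Jacobi's theorem, \partial S/\partial\alpha = \beta and p = \partial S/\partial x, gives
\[ x - \frac{\alpha}{m}\, t = \beta , \qquad p = \alpha , \]
% i.e. uniform motion x(t) = \beta + (\alpha/m) t with constant momentum \alpha.
```

Substituting $S$ back into the equation confirms the cancellation: $\partial S/\partial t = -\alpha^2/2m$ and $(\partial S/\partial x)^2/2m = \alpha^2/2m$.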

Despite the fact that the integration of partial differential equations is usually more difficult than the integration of ordinary differential equations, the Hamilton–Jacobi theory proved to be a powerful tool in the study of problems of optics, mechanics and geometry. The essence of Huygens' principle was used by R. Bellman in solving problems of optimal control.

See also Hilbert invariant integral.


References

[1] "Variational principles of mechanics", Moscow (1959) (In Russian)
[2] L.A. Pars, "A treatise on analytical dynamics", Heinemann, London (1965)
[3] V.I. Arnol'd, "Mathematical methods of classical mechanics", Springer (1978) (Translated from Russian)
[4] N.I. Akhiezer, "The calculus of variations", Blaisdell (1962) (Translated from Russian)


Comments

In optimal control the Hamilton–Jacobi equation takes, for instance, the form

$$\frac{\partial V}{\partial t} + \max_{u \in U} \left[ \left\langle \frac{\partial V}{\partial x}, f(t, x, u) \right\rangle + g(t, x, u) \right] = 0,$$

where $V$ is the value function, $f$ describes the controlled dynamics, $g$ is the running payoff and $U$ is the set of admissible control values.
Cf., for instance, Optimal synthesis control. In this setting the equation is often referred to as the Bellman equation (especially in the engineering literature) or the Hamilton–Jacobi–Bellman equation. There is also a version for optimal stochastic control, cf. Controlled stochastic process. Because classical (everywhere differentiable) solutions of the Hamilton–Jacobi equation often do not exist, it becomes necessary to consider various kinds of generalized solutions, such as viscosity solutions.
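The dynamic-programming principle behind the Hamilton–Jacobi–Bellman equation can be sketched in discrete form. The following minimal example (the grid, the cost field and the function name are illustrative assumptions, not from the article) runs value iteration for a deterministic minimum-cost problem on a one-dimensional grid.

```python
import numpy as np

# A minimal sketch of the discrete Bellman equation, the dynamic-programming
# counterpart of the Hamilton-Jacobi-Bellman equation: value iteration for a
# deterministic minimum-cost problem on a 1-D grid, with cell 0 as the target.

def value_iteration(costs, max_iter=1000):
    """V(x) = min over moves (left/right) of step cost costs[x] + V(next cell)."""
    n = len(costs)
    V = np.full(n, np.inf)
    V[0] = 0.0                        # boundary condition: cell 0 is the target
    for _ in range(max_iter):
        W = V.copy()
        for x in range(1, n):
            best = V[x - 1] + costs[x]                 # move one cell toward 0
            if x + 1 < n:
                best = min(best, V[x + 1] + costs[x])  # or one cell away
            W[x] = best
        if np.array_equal(W, V):      # fixed point of the Bellman operator
            break
        V = W
    return V

costs = np.array([0.0, 1.0, 1.0, 5.0, 1.0])
V = value_iteration(costs)
print(V)                              # the value function (cost-to-go)
```

Suitably refined monotone schemes of this kind are the standard route to computing viscosity solutions of the continuous Hamilton–Jacobi–Bellman equation.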


References

[a1] H. Goldstein, "Classical mechanics", Addison-Wesley (1950)
[a2] P.L. Lions, "Generalized solutions of Hamilton–Jacobi equations", Pitman (1982)
[a3] W.H. Fleming, R.W. Rishel, "Deterministic and stochastic optimal control", Springer (1975)
[a4] P.L. Lions, "On the Hamilton–Jacobi–Bellman equations", Acta Appl. Math., 1 (1983) pp. 17–41
[a5] S.H. Benton, Jr., "The Hamilton–Jacobi equation: a global approach", Academic Press (1977)
This article was adapted from an original article by V.M. Tikhomirov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.