Monte-Carlo method (method of statistical trials)
A numerical method based on simulation by random variables and the construction of statistical estimators for the unknown quantities. It is usually supposed that the Monte-Carlo method originated in 1949, when, in connection with work on the construction of atomic reactors, J. von Neumann and S. Ulam suggested using the apparatus of probability theory in the computer solution of applied problems. The Monte-Carlo method is named after the town of Monte-Carlo, famous for its casinos.
- 1 Simulation by random variables with given distributions.
- 2 Monte-Carlo algorithms for estimating multiple integrals.
- 3 Monte-Carlo algorithms for solving integral equations of the second kind.
- 4 Modifications of Monte-Carlo methods in radiative transport theory.
Simulation by random variables with given distributions.
As a rule such a simulation is achieved by a transformation of one or more independent values of a random number $\alpha$, uniformly distributed in the interval $(0,1)$. The sequence of "sample" values of $\alpha$ is usually obtained on a computer by number-theoretic algorithms, of which the most widely used is the so-called method of residues, for example, in the form

$$u_{k+1} \equiv M u_k \pmod{2^n}, \qquad \alpha_{k+1} = u_{k+1} \cdot 2^{-n}.$$

Here $n$ is the order of the mantissa of the computer and $M$ is a suitably chosen multiplier.
Numbers of this type are called pseudo-random numbers; they are verified by statistical tests and by the solution of typical problems. The length of the period of the above version of the method of residues is $2^{n-2}$. Physical generators, tables of random numbers and quasi-random numbers are also used in the Monte-Carlo method. There are also Monte-Carlo methods that use only a small number of "playing" parameters.
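The method of residues described above can be sketched as follows; the multiplier $5^{13}$ and the modulus $2^{31}$ are illustrative choices, not values prescribed by the article.

```python
def residue_generator(seed, multiplier=5 ** 13, n_bits=31):
    """Method of residues (multiplicative congruential generator):
    u_{k+1} = M * u_k (mod 2**n), alpha_{k+1} = u_{k+1} / 2**n."""
    modulus = 2 ** n_bits
    u = seed
    while True:
        u = (multiplier * u) % modulus
        yield u / modulus

gen = residue_generator(seed=1)
samples = [next(gen) for _ in range(1000)]
# An odd seed and odd multiplier keep every u odd, so all values
# lie strictly inside (0, 1).
```

With an odd seed the period of this generator is $2^{n-2}$, in agreement with the statement in the text.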
The standard method for simulating a discrete random variable $\xi$ with distribution $\mathsf{P}\{\xi = x_k\} = p_k$, $k = 1, 2, \dots,$ is as follows: put $\xi = x_k$ if the chosen value of $\alpha$ satisfies

$$\sum_{i=1}^{k-1} p_i \le \alpha < \sum_{i=1}^{k} p_i.$$
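A minimal sketch of this standard method, scanning the cumulative sums of the $p_k$:

```python
def sample_discrete(alpha, values, probs):
    """Return the value x_k for the first index k with
    p_1 + ... + p_{k-1} <= alpha < p_1 + ... + p_k."""
    cum = 0.0
    for x, p in zip(values, probs):
        cum += p
        if alpha < cum:
            return x
    return values[-1]  # guard against floating-point rounding at alpha near 1

# With distribution P{xi = 0} = 0.2, P{xi = 1} = 0.3, P{xi = 2} = 0.5,
# the value alpha = 0.25 falls in [0.2, 0.5), so the sample value is 1.
```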
The standard method for simulating a continuous random variable (sometimes called the method of inverse functions) is to use the easily verified representation $\xi = F^{-1}(\alpha)$, where $F$ is the distribution function with given density $f$. Sometimes a randomization of the simulation is useful (in other words, the method of superposition), based on the expression

$$f(x) = \int f_1(x \mid y)\, g(y)\, dy;$$

here one first chooses a number $\eta = y$ with distribution density $g$, and then obtains a sample value of $\xi$ from the distribution with density $f_1(x \mid y)$. In other methods of randomization, certain parameters of a deterministic method for solving the problem are regarded as random variables.
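A minimal sketch of the method of inverse functions, with the exponential law $F(x) = 1 - e^{-\lambda x}$ as an illustrative choice of distribution:

```python
import math
import random

def sample_exponential(alpha, lam=1.0):
    """Method of inverse functions: solve F(x) = alpha for
    F(x) = 1 - exp(-lam * x), giving x = -ln(1 - alpha) / lam."""
    return -math.log(1.0 - alpha) / lam

random.seed(0)
xs = [sample_exponential(random.random()) for _ in range(20000)]
# The empirical mean approximates E xi = 1/lam = 1.
```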
Another, more general, method for simulating a continuous random variable is the method of exclusion (method of selection), at the basis of which lies the following result: if a point $(\eta_1, \eta_2)$ is uniformly distributed in the domain $G = \{(x, y)\colon 0 \le y \le f(x)\}$, then its first coordinate $\eta_1$ has density $f$. In the method of exclusion, one chooses a point $(\eta_1, \eta_2)$ uniformly in a domain $D \supseteq G$ and puts $\xi = \eta_1$ if $\eta_2 \le f(\eta_1)$; otherwise the selection of $(\eta_1, \eta_2)$ is repeated, etc. For example, if $f(x) \le C$ and $0 \le x \le 1$, one can take $\eta_1 = \alpha_1$, $\eta_2 = C\alpha_2$. The average number of selections in the method of exclusion is proportional to the ratio of the area of $D$ to the area of $G$.
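A sketch of the method of exclusion for the illustrative density $f(x) = 2x$ on $[0, 1]$, with the bounding rectangle $D = [0, 1] \times [0, 2]$:

```python
import random

def sample_by_exclusion(density, x_lo, x_hi, f_max):
    """Method of exclusion: choose (x, y) uniformly in the bounding
    rectangle; accept x when y <= f(x), otherwise repeat."""
    while True:
        x = x_lo + (x_hi - x_lo) * random.random()
        y = f_max * random.random()
        if y <= density(x):
            return x

random.seed(1)
xs = [sample_by_exclusion(lambda x: 2.0 * x, 0.0, 1.0, 2.0)
      for _ in range(20000)]
# For the density 2x on [0, 1] the mean is 2/3.
```

Here the rectangle has area $C = 2$ while the area under the density is $1$, so on average two selections are made per accepted sample, as the proportionality statement predicts.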
For many random variables, special representations of the form $\xi = \varphi(\alpha_1, \dots, \alpha_n)$ have been obtained. For example, the random variables

$$\xi_1 = \sqrt{-2\ln\alpha_1}\,\cos 2\pi\alpha_2, \qquad \xi_2 = \sqrt{-2\ln\alpha_1}\,\sin 2\pi\alpha_2$$

have standard normal distributions and are independent; the random variable $\xi = -\ln(\alpha_1 \cdots \alpha_n)$ has the gamma-distribution with integer parameter $n$; the random variable $\xi = \max(\alpha_1, \dots, \alpha_n)$ is distributed with density $n x^{n-1}$, $0 \le x \le 1$; the random variable equal to the $m$-th smallest of $\alpha_1, \dots, \alpha_n$ has the beta-distribution with parameters $m$ and $n - m + 1$.
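The normal representation above (the Box-Muller transform) can be sketched as follows:

```python
import math
import random

def box_muller(a1, a2):
    """Transform two independent uniforms from (0, 1) into two
    independent standard normal values."""
    r = math.sqrt(-2.0 * math.log(a1))
    return r * math.cos(2.0 * math.pi * a2), r * math.sin(2.0 * math.pi * a2)

random.seed(2)
zs = []
for _ in range(10000):
    # 1 - random() lies in (0, 1], which keeps the logarithm finite.
    z1, z2 = box_muller(1.0 - random.random(), random.random())
    zs.append(z1)
    zs.append(z2)
mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / (len(zs) - 1)
# Empirical mean near 0 and variance near 1.
```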
The standard algorithm for simulating a continuous random vector $(\xi_1, \dots, \xi_n)$ is to successively choose the values of its components from conditional distributions corresponding to the representation

$$f(x_1, \dots, x_n) = f_1(x_1)\, f_2(x_2 \mid x_1) \cdots f_n(x_n \mid x_1, \dots, x_{n-1}).$$
The method of exclusion extends to the multi-dimensional case without change; it is only necessary, in its formulation, to regard the corresponding quantities as vectors. A multi-dimensional normal vector can be simulated by applying a special linear transformation to a vector of independent standard normal random variables. Special methods have also been developed for the approximate simulation of stationary Gaussian processes.
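For two dimensions the special linear transformation is particularly simple; the following sketch (with the illustrative correlation coefficient $\rho = 0.8$) maps two independent standard normals, obtained by the Box-Muller formulas, into a correlated pair:

```python
import math
import random

def correlated_normals(rho):
    """Linear (Cholesky-type) transform of two independent standard
    normals into a pair with correlation coefficient rho."""
    a1, a2 = 1.0 - random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(a1))
    z1 = r * math.cos(2.0 * math.pi * a2)
    z2 = r * math.sin(2.0 * math.pi * a2)
    return z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2

random.seed(3)
pairs = [correlated_normals(0.8) for _ in range(20000)]
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
vx = sum((x - mx) ** 2 for x, _ in pairs) / len(pairs)
vy = sum((y - my) ** 2 for _, y in pairs) / len(pairs)
corr = cov / math.sqrt(vx * vy)  # empirical correlation, close to 0.8
```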
If, in a Monte-Carlo method calculation, the random variables defined by a real phenomenon are simulated, then the calculation is a direct simulation (imitation) of this phenomenon. Computer simulations have been worked out for: the processes of transport, scattering and reproduction of particles (neutrons, gamma-quanta, photons, electrons, and others); the evolution of ensembles of molecules, for the solution of various problems in classical and quantum statistical physics; queueing and industrial processes; and various random processes in technology, biology, etc. Simulation algorithms are usually carefully optimized; for example, complicated functions are tabulated, standard procedures are modified, etc. Nonetheless, a direct simulation often cannot provide the necessary accuracy in the estimates of the required variables, and many methods have been developed for increasing the effectiveness of a simulation.
Monte-Carlo algorithms for estimating multiple integrals.
Suppose it is required to estimate the integral $I = \int_X g(x)\,dx$ with respect to the Lebesgue measure in an $n$-dimensional Euclidean space, and let $p(x)$ be a probability density such that $I$ can be written as

$$I = \int_X \frac{g(x)}{p(x)}\, p(x)\, dx = \mathsf{E}\,\frac{g(\xi)}{p(\xi)},$$

where $p(x) \neq 0$ whenever $g(x) \neq 0$ and $\xi$ has density $p$. By computer simulation of $\xi$ it is possible to obtain sample values $\xi_1, \dots, \xi_N$. By the law of large numbers,

$$I \approx \frac{1}{N}\sum_{k=1}^{N} \frac{g(\xi_k)}{p(\xi_k)}.$$
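A hedged sketch of this basic estimator, averaging $g(\xi_k)/p(\xi_k)$ together with its estimated mean-square error; the integrand $x^2$ over $[0,1]$ and the uniform density are illustrative choices:

```python
import math
import random

def mc_integral(g, sample_p, density_p, n):
    """Estimate I = integral of g as the average of g(xi)/p(xi) over
    samples xi drawn from the density p, together with the estimated
    mean-square error of that average."""
    vals = []
    for _ in range(n):
        x = sample_p()
        vals.append(g(x) / density_p(x))
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

random.seed(4)
# Illustration: the integral of x^2 over [0, 1] equals 1/3; take p(x) = 1.
est, err = mc_integral(lambda x: x * x, random.random, lambda x: 1.0, 20000)
```

The reported error allows an approximate confidence interval for $I$ to be constructed, as described below.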
Simultaneously it is possible to estimate the mean-square error of this estimator and thereby to construct an approximate confidence interval for $I$. By the choice of the density $p$ it is possible to obtain estimates with smaller variance; in particular, the variance is minimal when $p$ is proportional to $|g|$, and if $g \ge 0$ this choice gives zero variance. The corresponding algorithm is called essential sampling (choice by importance). Another common modification, the method of selection of the principal part, applies when a function close to $g$ is known whose integral can be computed exactly; only the integral of the difference is then estimated by the Monte-Carlo method. It is sometimes useful to combine Monte-Carlo methods and classical quadratures (cf. Quadrature) in so-called random quadrature formulas, the basic idea of which is that the nodes and coefficients in a quadrature sum (for example, an interpolation sum) are chosen randomly from a distribution providing an unbiased estimator of the integral $I$. Particular cases of these formulas are: the so-called method of stratified sampling, in which the nodes are chosen one in each part of a fixed partition of the domain of integration and the coefficients are proportional to the corresponding volumes; and the so-called method of symmetric sampling, which, in the case of integration over $[0, 1]$, is defined by the expression

$$I \approx \frac{1}{N}\sum_{k=1}^{N} \frac{g(\xi_k) + g(1 - \xi_k)}{2}.$$
Here the order of the speed of convergence of the Monte-Carlo method is increased and, in certain cases, becomes best possible in the class of problems being considered.
In general, the domain of integration is partitioned into parallelepipeds. In each parallelepiped the value of the integral is estimated as the average of the values at a random point and at the point symmetric to it relative to the centre of the parallelepiped.
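A one-dimensional sketch of the method of symmetric sampling, averaging $g$ at a random point and at its reflection through the centre $1/2$; the integrand $x^3$ is illustrative:

```python
import random

def symmetric_sampling(g, n):
    """Method of symmetric sampling on [0, 1]: average the values of g
    at a random point and at the point reflected through 1/2."""
    total = 0.0
    for _ in range(n):
        x = random.random()
        total += 0.5 * (g(x) + g(1.0 - x))
    return total / n

random.seed(5)
est = symmetric_sampling(lambda x: x ** 3, 20000)
# The exact value of the integral of x^3 over [0, 1] is 1/4.
```

For this integrand the symmetrized summand has a much smaller variance than the plain value $g(\xi_k)$, illustrating the increased order of accuracy mentioned above.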
A number of modifications of Monte-Carlo methods are based on the (perhaps formal) representation of the required value as a double integral

$$I = \iint h(u, v)\, p(u, v)\, du\, dv = \mathsf{E}\,\eta, \qquad \eta = h(u, v),$$

where the vector $(u, v)$ is distributed with density $p(u, v)$. It is known that $\mathsf{E}\eta = \mathsf{E}\,\mathsf{E}(\eta \mid u)$ and that

$$\mathsf{D}\eta = \mathsf{D}\,\mathsf{E}(\eta \mid u) + \mathsf{E}\,\mathsf{D}(\eta \mid u), \tag{1}$$

where $\mathsf{E}(\eta \mid u)$ is the conditional mathematical expectation and $\mathsf{D}(\eta \mid u)$ is the conditional variance of $\eta$ given a fixed value of $u$. Formula (1) is widely used in Monte-Carlo methods. In particular, it shows that $\mathsf{D}\,\mathsf{E}(\eta \mid u) \le \mathsf{D}\eta$, that is, analytic averaging over any variable increases the accuracy of the Monte-Carlo method. However, in this connection, the amount of computation per sample may be significantly increased. The computer time necessary to achieve a given accuracy with an estimator $\zeta$ is proportional to $t\,\mathsf{D}\zeta$, where $t$ is the average time it takes to obtain one value of $\zeta$. The method of splitting is optimal with respect to this criterion. Its simplest version uses the "unbiased" estimator

$$\zeta_m = \frac{1}{m}\sum_{i=1}^{m} h(u, v_i),$$
where $v_1, \dots, v_m$ are conditionally independent and distributed as $v$ for a fixed value of $u$. Using (1) it is possible to obtain an optimal value of $m$:

$$m_0 \approx \sqrt{\frac{t_u}{t_v}\cdot\frac{\mathsf{E}\,\mathsf{D}(\eta \mid u)}{\mathsf{D}\,\mathsf{E}(\eta \mid u)}},$$

where $t_u$, $t_v$ are the average computing times needed to obtain a sample value of $u$ and of $v$, respectively.
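The variance reduction from analytic averaging can be illustrated numerically. In this sketch $h(u, v) = uv$ with independent uniform $u$, $v$ is an illustrative choice; both the crude estimator $h(u, v)$ and the conditionally averaged estimator $\mathsf{E}(\eta \mid u) = u/2$ are unbiased for $\mathsf{E}\eta = 1/4$, and by the variance decomposition (1) the second can never have larger variance:

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(6)
crude, averaged = [], []
for _ in range(20000):
    u, v = random.random(), random.random()
    crude.append(u * v)       # crude estimator eta = h(u, v) with h = u*v
    averaged.append(0.5 * u)  # E(eta | u) = u/2: v is averaged analytically
v_crude, v_avg = variance(crude), variance(averaged)
# Theoretical variances: 7/144 for the crude estimator, 1/48 for the
# conditionally averaged one.
```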
If the integrand depends on a parameter, it is expedient to use the method of dependent trials, that is, to estimate the integral for the various values of the parameter at one and the same random points. An important property of the Monte-Carlo method is the comparatively weak dependence of the mean-square error on the dimension of the problem; moreover, the order of convergence relative to the number of points $N$ is always one and the same: $O(N^{-1/2})$. This allows one to estimate (after a preliminary transformation of the problem) integrals of very high, and even infinite, multiplicity. For example, methods have been worked out for the estimation of Wiener integrals.
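A sketch of the method of dependent trials for the illustrative parametric family $g(x; c) = x^c$, whose exact integral over $[0, 1]$ is $1/(c+1)$:

```python
import random

def dependent_trials(g, params, points):
    """Method of dependent trials: estimate the integral of g(x; c) over
    [0, 1] for several parameter values c at one and the same points."""
    return {c: sum(g(x, c) for x in points) / len(points) for c in params}

random.seed(7)
points = [random.random() for _ in range(20000)]
ests = dependent_trials(lambda x, c: x ** c, [1, 2, 3], points)
# ests[c] approximates 1/(c + 1) for c = 1, 2, 3.
```

Because the same points serve every value of $c$, the errors for neighbouring parameter values are strongly correlated, so the estimated dependence on the parameter comes out smooth.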
Monte-Carlo algorithms for solving integral equations of the second kind.
Suppose it is required to estimate the linear functional $(\varphi, h) = \int \varphi(x)\, h(x)\,dx$, where $\varphi = K\varphi + f$ and the integral operator $K$ with kernel $k(x', x)$ satisfies a condition providing convergence of the Neumann series: $\|K^{n_0}\| < 1$ for some $n_0$. A Markov chain $x_0, x_1, \dots$ is defined by an initial density $\pi(x)$ and a transition density $p(x', x)$; the probability of termination of the chain at the point $x'$ is equal to

$$p(x') = 1 - \int p(x', x)\,dx.$$
Let $N$ be the random index of the last state. Further, a functional of the trajectories of the chain whose mathematical expectation equals $(\varphi, h)$ is constructed. Most often the so-called collision estimator

$$\xi = \sum_{n=0}^{N} Q_n h(x_n)$$

is used, where

$$Q_0 = \frac{f(x_0)}{\pi(x_0)}, \qquad Q_n = Q_{n-1}\,\frac{k(x_{n-1}, x_n)}{p(x_{n-1}, x_n)}.$$

If $\pi(x) \neq 0$ whenever $f(x) \neq 0$ and $p(x', x) \neq 0$ whenever $k(x', x) \neq 0$, then under certain additional conditions $\mathsf{E}\xi = (\varphi, h)$. If a suitable Markov chain is simulated on a computer, statistical estimates of linear functionals of the solution of an integral equation of the second kind can be obtained. This gives the possibility of locally estimating the solution on the basis of the representation $\varphi(x) = (K\varphi)(x) + f(x)$. In a number of cases, alongside Monte-Carlo methods, number-theoretic methods are applied in order to solve these problems. A Monte-Carlo method for estimating the first eigenvalue of an integral operator has been realized by an iteration method on the basis of the relation

$$\lambda_1 = \lim_{n \to \infty} \frac{(K^{n+1} f, h)}{(K^n f, h)}.$$
All these results can be almost automatically extended to systems of linear algebraic equations of the form $x = Ax + f$.
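A sketch of the collision estimator for such a system; the $2 \times 2$ matrix, the uniform transition probabilities and the absorption probability are illustrative assumptions, and the Neumann series for $A$ is required to converge:

```python
import random

def mc_linear_solve(A, f, i0, n_chains, p_stop=0.3):
    """Estimate component i0 of the solution of x = A x + f by averaging
    the weighted ("collision") estimator over absorbing Markov chains
    with uniform transition probabilities."""
    m = len(f)
    total = 0.0
    for _ in range(n_chains):
        i, weight, score = i0, 1.0, 0.0
        while True:
            score += weight * f[i]          # every collision contributes
            if random.random() < p_stop:    # chain is absorbed
                break
            j = random.randrange(m)         # uniform transition density
            # transition probability is (1 - p_stop)/m, so the weight is
            # multiplied by A[i][j] divided by that probability
            weight *= A[i][j] * m / (1.0 - p_stop)
            i = j
        total += score
    return total / n_chains

random.seed(9)
A = [[0.2, 0.1], [0.1, 0.2]]
f = [1.0, 1.0]
est = mc_linear_solve(A, f, 0, 50000)
# Exact solution of x = A x + f for this data is x = (10/7, 10/7).
```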
Modifications of Monte-Carlo methods in radiative transport theory.
The density of the average number of particle collisions in a phase space with coordinates $r$ and velocities $v$ satisfies an integral equation of the second kind. In the single-velocity case its kernel is expressed in terms of the scattering coefficient (section) $\sigma_s$, the attenuation coefficient $\sigma$, the scattering indicatrix, and the optical length of the path from $r'$ to $r$. For the construction of estimators with small variances one uses, for example, asymptotic solutions of the adjoint transport equation; the simplest algorithm of this type is the so-called exponential transformation. A modification of the local estimate of the particle flow has also been developed. Using a simulation of a Markov chain (for example, of the physical transport process in a certain medium) it is possible to obtain simultaneously dependent estimators of functionals for different values of the parameters; by differentiating the "weight" it is sometimes possible to obtain unbiased estimators of the corresponding derivatives. This provides an opportunity to use Monte-Carlo methods in solving certain inverse problems. For the solution of problems in transport theory, "bifurcation" of the trajectory and analytic averaging are effective. The simulation of trajectories of particles in compound media can sometimes be essentially simplified by the method of a maximal section.
Monte-Carlo algorithms for solving difference equations.
These are constructed on the basis of the corresponding integral relations. For example, the standard five-point difference approximation for the Laplace equation has the form of the formula of complete mathematical expectation for a symmetric random walk over a grid with absorption at the boundary. A continuous analogue of this formula, for a function $u$ harmonic in the given domain, is the mean-value relation

$$u(x_0) = \frac{1}{|S(x_0, R)|}\int_{S(x_0, R)} u(x)\,dS, \tag{2}$$
where the integral is taken over the surface $S(x_0, R)$ of a sphere with centre at $x_0$ lying entirely within the given domain. Formula (2) and other similar relations provide an opportunity to use the isotropic "random walk on spheres" when solving elliptic and parabolic equations. Monte-Carlo methods are effective, for example, for estimating the solution of multi-dimensional boundary value problems at a point.
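A planar sketch of the "random walk on spheres" (circles, in two dimensions) for the Dirichlet problem in the unit disc; the boundary data $g(x, y) = x$, whose harmonic extension is $u(x, y) = x$, and the termination threshold are illustrative assumptions:

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, eps=1e-3):
    """Isotropic random walk on circles in the unit disc: jump to a
    uniform point on the largest circle centred at the current point and
    contained in the disc, until the eps-neighbourhood of the boundary
    is reached; then score the boundary value at the nearest point."""
    while True:
        r = 1.0 - math.hypot(x, y)      # distance to the boundary circle
        if r < eps:
            d = math.hypot(x, y)
            return boundary_value(x / d, y / d)
        theta = 2.0 * math.pi * random.random()
        x += r * math.cos(theta)
        y += r * math.sin(theta)

random.seed(8)
n = 5000
est = sum(walk_on_spheres(0.3, 0.2, lambda bx, by: bx) for _ in range(n)) / n
# The harmonic solution is u(x, y) = x, so the estimate at (0.3, 0.2)
# should be close to 0.3.
```

Each jump uses the mean-value relation (2), so the walk gives an unbiased estimate of the solution at the starting point, up to the eps-truncation near the boundary.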
A simulation of Markov branching processes allows one to construct estimators of the solution of certain non-linear equations, for example, the Boltzmann equation in the theory of rarefied gases.
References
[1] J. von Neumann, Nat. Bureau Standard Appl. Math. Series, 12 (1951) pp. 36–38
[2] N.P. Buslenko, et al., "The method of statistical trials (the Monte-Carlo method)", Moscow (1962) (In Russian)
[3] S.M. Ermakov, "Die Monte-Carlo Methode und verwandte Fragen", Deutsch. Verlag Wissenschaft. (1975) (Translated from Russian)
[4] G.A. Mikhailov, "Some questions in the theory of Monte-Carlo methods", Novosibirsk (1971) (In Russian)
[5] I.M. Sobol', "Numerical Monte-Carlo methods", Moscow (1973) (In Russian)
[6] Yu.G. Pollyak, "Probabilistic simulation on computers", Moscow (1971) (In Russian)
[7] N.S. Bakhvalov, "On optimal convergence estimates for quadrature processes and integration methods of Monte-Carlo type on function classes", Numerical Methods for Solving Differential and Integral Equations and Quadrature Formulas, Moscow (1964) pp. 5–63 (In Russian)
[8] N.S. Bakhvalov, "An estimate of the remainder term in quadrature formulas", Comp. Math. Math. Phys., 1 : 1 (1961) pp. 68–82; Zh. Vychisl. Mat. i Mat. Fiz., 1 : 1 (1961) pp. 64–77
[9] N.S. Bakhvalov, "Approximate computation of multiple integrals", Vestnik Moskov. Gos. Univ. Ser. Mat. Mekh. Astronom. Fiz. Khim., 4 (1959) pp. 3–18 (In Russian)
[10] J.M. Hammersley, D.C. Handscomb, "Monte-Carlo methods", Methuen (1964)
[11] Monte-Carlo methods and problems of radiative transport, Moscow (1967)
[12] G.I. Marchuk, et al., "The Monte-Carlo method in atmospheric optics", Novosibirsk (1976) (In Russian)
[13] J. Spanier, E. Gelbard, "Monte-Carlo principles and neutron transport problems", Addison-Wesley (1969)
[14] V.V. Chavchanidze, Izv. Akad. Nauk SSSR Ser. Fiz., 19 : 6 (1955) pp. 629–638
[15] The penetration of radiation through non-uniformity in protection, Moscow (1968) (In Russian)
[16] A.D. Frank-Kamenetskii, Atomnaya Energiya, 16 : 2 (1964) pp. 119–122
[17] M.H. Kalos, Nuclear Sci. and Eng., 33 (1968) pp. 284–290
[18] Monte-Carlo methods and their application. Reports III All-Union Conf. Monte-Carlo Methods, Novosibirsk (1971)
[19] I.M. Gelfand, A.S. Frolov, N.N. Chentsov, "The computation of continuous integrals by the Monte-Carlo method", Izv. Vuz. Ser. Mat., 5 (1958) pp. 32–45 (In Russian)
[20] A.S. Frolov, N.N. Chentsov, "On the calculation of definite integrals dependent on a parameter by the Monte-Carlo method", USSR Comp. Math. Math. Phys., 2 : 4 (1962) pp. 802–807; Zh. Vychisl. Mat. i Mat. Fiz., 2 : 4 (1962) pp. 714–717
[21] N.M. Korobov, "Number-theoretic methods in applied analysis", Moscow (1963) (In Russian)
[22] V.S. Vladimirov, "Monte-Carlo methods as applied to the calculation of the lowest eigenvalue and the associated eigenfunction of a linear integral equation", Theor. Probab. Appl., 1 : 1 (1956) pp. 101–116; Teor. Veroyatnost. i Primenen., 1 : 1 (1956) pp. 113–130
[23] J.H. Curtiss, "Monte-Carlo methods for the iteration of linear operators", J. Math. Phys., 32 : 4 (1954) pp. 209–232
[24] M.E. Muller, "Some continuous Monte-Carlo methods for the Dirichlet problem", Ann. Math. Stat., 27 : 3 (1956) pp. 569–589
For an up-to-date account of the use of random processes in solving (numerically) the classical equations of mathematical physics cf. [a2].
[a1] P. Bratley, B.L. Fox, L.E. Schrage, "A guide to simulation", Springer (1987)
[a2] S.M. Ermakov, V.V. Nekrutkin, A.S. Sipin, "Random processes for the classical equations of mathematical physics", Kluwer (1989) (Translated from Russian)
Monte-Carlo method. G.A. Mikhailov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Monte-Carlo_method&oldid=15336