Mathematical analysis

The part of mathematics in which functions (cf. Function) and their generalizations are studied by the method of limits (cf. Limit). The concept of limit is closely connected with that of an infinitesimal quantity; therefore it could be said that mathematical analysis studies functions and their generalizations by infinitesimal methods.

The name "mathematical analysis" is a short version of the old name of this part of mathematics, "infinitesimal analysis"; the latter more fully describes the content, but even it is an abbreviation (the name "analysis by means of infinitesimals" would characterize the subject more precisely). In classical mathematical analysis the objects of study (analysis) were first and foremost functions. "First and foremost" because the development of mathematical analysis has led to the possibility of studying, by its methods, forms more complicated than functions: functionals, operators, etc.

Everywhere in nature and technology one meets motions and processes which are characterized by functions; the laws of natural phenomena also are usually described by functions. Hence the objective importance of mathematical analysis as a means of studying functions.

Mathematical analysis, in the broad sense of the term, includes a very large part of mathematics. It includes differential calculus; integral calculus; the theory of functions of a real variable (cf. Functions of a real variable, theory of); the theory of functions of a complex variable (cf. Functions of a complex variable, theory of); approximation theory; the theory of ordinary differential equations (cf. Differential equation, ordinary); the theory of partial differential equations (cf. Differential equation, partial); the theory of integral equations (cf. Integral equation); differential geometry; variational calculus; functional analysis; harmonic analysis; and certain other mathematical disciplines. Modern number theory and probability theory use and develop methods of mathematical analysis.

Nevertheless, the term "mathematical analysis" is often used as a name for the foundations of mathematical analysis, which unifies the theory of real numbers (cf. Real number), the theory of limits, the theory of series, differential and integral calculus, and their immediate applications such as the theory of maxima and minima, the theory of implicit functions (cf. Implicit function), Fourier series, and Fourier integrals (cf. Fourier integral).

Functions.

Mathematical analysis began with the definition of a function given by N.I. Lobachevskii and P.G.L. Dirichlet. If to each number $ x $ in some set $ F $ of numbers there is associated, by some rule, a number $ y $, then this defines a function

$$ y = f ( x) $$

of one variable $ x $. A function of $ n $ variables,

$$ f ( x) = f ( x _ {1} , \dots, x _ {n} ), $$

is defined similarly, where $ x = ( x _ {1} , \dots, x _ {n} ) $ is a point of an $ n $-dimensional space; one also considers functions

$$ f ( x) = f ( x _ {1} , x _ {2} ,\dots ) $$

of points $ x = ( x _ {1} , x _ {2} ,\dots) $ of some infinite-dimensional space. These, however, are usually called functionals.
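For example, the rule assigning the value $ 1 $ to every rational number $ x $ and the value $ 0 $ to every irrational $ x $ defines a function on the whole real line (the so-called Dirichlet function), even though it is given by no analytic expression; it is precisely such generality that the Lobachevskii–Dirichlet definition admits.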

Elementary functions.

In mathematical analysis the elementary functions are of fundamental importance. In practice one operates chiefly with the elementary functions, and more complicated functions are approximated by them. The elementary functions can be considered not only for real but also for complex $ x $; the conception of these functions then becomes, in some sense, complete. In this connection an important branch of mathematics has arisen, called the theory of functions of a complex variable, or the theory of analytic functions (cf. Analytic function).

Real numbers.

The concept of a function is essentially founded on the concept of a real (rational or irrational) number. The latter was finally formulated only at the end of the 19th century. In particular, this theory established a logically irreproachable connection between numbers and points of a geometrical line, which gave a formal foundation for the ideas of R. Descartes (mid 17th century), who introduced into mathematics rectangular coordinate systems and the representation of functions by graphs.

Limits.

In mathematical analysis a means of studying functions is the limit. One distinguishes between the limit of a sequence and the limit of a function. These concepts were finally formulated only in the 19th century; however, the idea of a limit had already been studied by the ancient Greeks. It suffices to say that Archimedes (3rd century B.C.) was able to calculate the area of a segment of a parabola by a process which one would now call a limit transition (see Exhaustion, method of).
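In modern terms, Archimedes' quadrature of the parabola amounts to summing the geometric series

$$ 1 + \frac{1}{4} + \frac{1}{4 ^ {2} } + \dots = \frac{4}{3} : $$

the parabolic segment is exhausted by inscribed triangles whose total areas form the partial sums of this series, and the limit of these sums is $ 4 / 3 $ times the area of the first triangle.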

Continuous functions.

An important class of functions studied in mathematical analysis is formed by the continuous functions (cf. Continuous function). One of the possible definitions of this notion is: A function $ y = f ( x) $ of a variable $ x $ on an open interval $ ( a , b ) $ is called continuous at the point $ x $ if

$$ \lim\limits _ {\Delta x \rightarrow 0 } \ \Delta y = \ \lim\limits _ {\Delta x \rightarrow 0 } \ [ f ( x + \Delta x ) - f ( x) ] = 0 . $$

A function is continuous on the open interval $ ( a , b ) $ if it is continuous at each of its points; its graph is then a curve which is continuous in the everyday sense of the word.
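For example, for $ f ( x) = x ^ {2} $ one has

$$ \Delta y = ( x + \Delta x ) ^ {2} - x ^ {2} = 2 x \Delta x + ( \Delta x ) ^ {2} \rightarrow 0 \textrm{ as } \Delta x \rightarrow 0 , $$

so $ x ^ {2} $ is continuous at every point. On the other hand, the function equal to $ 0 $ for $ x < 0 $ and to $ 1 $ for $ x \geq 0 $ is not continuous at $ x = 0 $: for negative $ \Delta x $ the increment $ \Delta y $ equals $ - 1 $, however small $ | \Delta x | $ is.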

Derivative and differential.

Among the continuous functions those having a derivative must be distinguished. The derivative of a function

$$ y = f ( x) ,\ \ a < x < b , $$

at a point $ x $ is its rate of change at that point, that is, the limit

$$ \tag{1 } \lim\limits _ {\Delta x \rightarrow 0 } \ \frac{\Delta y }{\Delta x } = \ \lim\limits _ {\Delta x \rightarrow 0 } \ \frac{f ( x + \Delta x ) - f ( x) }{\Delta x } = \ f ^ { \prime } ( x) . $$

If $ y $ is the coordinate at the time $ x $ of a point moving along the coordinate axis, then $ f ^ { \prime } ( x) $ is its instantaneous velocity at the time $ x $.
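For example, for $ f ( x) = x ^ {2} $,

$$ \lim\limits _ {\Delta x \rightarrow 0 } \ \frac{( x + \Delta x ) ^ {2} - x ^ {2} }{\Delta x } = \ \lim\limits _ {\Delta x \rightarrow 0 } \ ( 2 x + \Delta x ) = 2 x , $$

so $ f ^ { \prime } ( x) = 2 x $.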

From the sign of $ f ^ { \prime } $ one can judge the nature of variation of $ f $: if $ f ^ { \prime } > 0 $ ($ f ^ { \prime } < 0 $) on an interval $ ( c , d ) $, then $ f $ is increasing (decreasing) on this interval. If a function attains a local extremum (a maximum or a minimum) at $ x $ and has a derivative at this point, then the latter is equal to zero, $ f ^ { \prime } ( x) = 0 $.
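Thus, for $ f ( x) = x ^ {2} $ with $ f ^ { \prime } ( x) = 2 x $, the function decreases on $ ( - \infty , 0 ) $, increases on $ ( 0 , \infty ) $ and attains a minimum at $ x = 0 $, where indeed $ f ^ { \prime } ( 0) = 0 $.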

The equality (1) can be replaced by the equivalent equality

$$ \frac{\Delta y }{\Delta x } = \ f ^ { \prime } ( x) + \epsilon ( \Delta x ) ,\ \ \epsilon ( \Delta x ) \rightarrow 0 \textrm{ as } \Delta x \rightarrow 0 , $$

or

$$ \Delta y = f ^ { \prime } ( x) \Delta x + \Delta x \epsilon ( \Delta x ) , $$

where $ \epsilon ( \Delta x ) $ is an infinitesimal as $ \Delta x \rightarrow 0 $; that is, if $ f $ has a derivative at $ x $, then its increment at this point decomposes into two terms. The first

$$ \tag{2 } d y = f ^ { \prime } ( x) \Delta x $$

is a linear function of $ \Delta x $ (it is proportional to $ \Delta x $), while the second term tends to zero more rapidly than $ \Delta x $.

The quantity (2) is called the differential of the function corresponding to the increment $ \Delta x $. For small $ \Delta x $ it is possible to regard $ \Delta y $ as approximately equal to $ d y $:

$$ \Delta y \approx d y . $$
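For example, for $ f ( x) = \sqrt x $ at $ x = 4 $ with $ \Delta x = 0.1 $, one has $ f ^ { \prime } ( x) = 1 / ( 2 \sqrt x ) $ and

$$ \sqrt {4.1} \approx \sqrt 4 + \frac{0.1 }{2 \sqrt 4 } = 2.025 , $$

while the true value is $ 2.02485 \dots $.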

These arguments about differentials are characteristic of mathematical analysis. They have been extended to functions of several variables and to functionals.

For example, if a function

$$ z = f ( x _ {1} , \dots, x _ {n} ) = f ( x) $$

of $ n $ variables has continuous partial derivatives (cf. Partial derivative) at a point $ x = ( x _ {1} , \dots, x _ {n} ) $, then its increment $ \Delta z $ corresponding to increments $ \Delta x _ {1} , \dots, \Delta x _ {n} $ of the independent variables can be written in the form

$$ \tag{3 } \Delta z = \sum_{k=1}^{n} \frac{\partial f }{\partial x _ {k} } \Delta x _ {k} + \sqrt { \sum_{k=1}^ { n } \Delta x _ {k} ^ {2} } \epsilon ( \Delta x ) , $$

where $ \epsilon ( \Delta x ) \rightarrow 0 $ as $ \Delta x = ( \Delta x _ {1} , \dots, \Delta x _ {n} ) \rightarrow 0 $, that is, if all $ \Delta x _ {k} \rightarrow 0 $. Here the first term on the right-hand side in (3) is the differential $ d z $ of $ f $. It depends linearly on $ \Delta x $, and the second term tends to zero more rapidly than $ | \Delta x | = \sqrt {\sum_{k=1}^{n} \Delta x _ {k} ^ {2} } $ as $ \Delta x \rightarrow 0 $.
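For example, for $ z = x _ {1} x _ {2} $ one finds

$$ \Delta z = x _ {2} \Delta x _ {1} + x _ {1} \Delta x _ {2} + \Delta x _ {1} \Delta x _ {2} , $$

where the first two terms form the differential $ d z $, and the last term tends to zero more rapidly than $ | \Delta x | $.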

Suppose one is given a functional (see Variational calculus)

$$ J ( x) = \ \int\limits _ { t _ {0} } ^ { t _ {1} } L ( t , x , x ^ \prime ) d t , $$

extended over the class $ \mathfrak M $ of functions $ x $ having continuous derivatives on the closed interval $ [ t _ {0} , t _ {1} ] $ and satisfying the boundary conditions $ x ( t _ {0} ) = x _ {0} $, $ x ( t _ {1} ) = x _ {1} $, where $ x _ {0} $ and $ x _ {1} $ are given numbers. Let, further, $ \mathfrak M _ {0} $ be the class of functions $ h $ having continuous derivatives on $ [ t _ {0} , t _ {1} ] $ and such that $ h ( t _ {0} ) = h ( t _ {1} ) = 0 $. Obviously, if $ x \in \mathfrak M $ and $ h \in \mathfrak M _ {0} $, then $ x + h \in \mathfrak M $.

In variational calculus it has been proved that under certain conditions on $ L $ the increment of $ J ( x) $ can be written in the form

$$ \tag{4 } J ( x + h ) - J ( x) = \ \int\limits _ { t _ {0} } ^ { t _ {1} } \left ( \frac{\partial L }{\partial x } - \frac{d }{d t } \left ( \frac{\partial L }{\partial x ^ \prime } \right ) \right ) h ( t) d t + o ( \| h \| ) $$

as $ \| h \| \rightarrow 0 $, where

$$ \| h \| = \ \max _ {t _ {0} \leq t \leq t _ {1} } \ | h ( t) | + \max _ {t _ {0} \leq t \leq t _ {1} } \ | h ^ \prime ( t) | , $$

and, thus, the second term on the right-hand side of (4) tends to zero more rapidly than $ \| h \| $, whereas the first term depends linearly on $ h \in \mathfrak M _ {0} $. The first term in (4) is called the variation of the functional and is denoted by $ \delta J ( x , h ) $.
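For example, for $ L = ( x ^ \prime ) ^ {2} $, that is, for the functional

$$ J ( x) = \ \int\limits _ { t _ {0} } ^ { t _ {1} } ( x ^ \prime ) ^ {2} d t , $$

one has $ \partial L / \partial x = 0 $ and $ \partial L / \partial x ^ \prime = 2 x ^ \prime $, so that for twice continuously-differentiable $ x $,

$$ \delta J ( x , h ) = \ - 2 \int\limits _ { t _ {0} } ^ { t _ {1} } x ^ {\prime\prime} ( t) h ( t) d t ; $$

the variation vanishes for all $ h \in \mathfrak M _ {0} $ exactly when $ x ^ {\prime\prime} \equiv 0 $, that is, on straight lines, which are the extremals of this functional.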

Integrals.

Side by side with the derivative, the integral is of fundamental significance in mathematical analysis. One distinguishes indefinite and definite integrals.

The indefinite integral is closely connected with primitive functions. A function $ F $ is called a primitive function of a function $ f $ on the interval $ ( a , b ) $ if, on this interval, $ F ^ { \prime } = f $.
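For example, $ F ( x) = x ^ {3} / 3 $ is a primitive function of $ f ( x) = x ^ {2} $ on any interval, as is $ x ^ {3} / 3 + C $ for an arbitrary constant $ C $; any two primitives of the same function on an interval differ by a constant.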

The definite (Riemann) integral of a function $ f $ on an interval $ [ a , b ] $ is the limit

$$ \lim\limits \ \sum_{j=0}^ { N-1} f ( \xi _ {j} ) ( x _ {j+1} - x _ {j} ) = \ \int\limits _ { a } ^ { b } f ( x) d x $$

as $ \max ( x _ {j+1} - x _ {j} ) \rightarrow 0 $; here $ a = x _ {0} < x _ {1} < \dots < x _ {N} = b $ and $ x _ {j} \leq \xi _ {j} \leq x _ {j+1} $ are arbitrary.

If $ f $ is positive and continuous on $ [ a , b ] $, its integral over this interval is equal to the area of the figure bounded by the curve $ y = f ( x) $, the $ x $-axis and the lines $ x = a $ and $ x = b $.
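As a numerical illustration, the following minimal Python sketch forms the Riemann sums above; the function $ x ^ {2} $, the interval $ [ 0 , 1 ] $, the equal subintervals and the choice of left endpoints for the $ \xi _ {j} $ are illustrative assumptions, not part of the definition.

```python
def riemann_sum(f, a, b, n):
    # Riemann sum with n equal subintervals, taking xi_j as the
    # left endpoint of each subinterval [x_j, x_{j+1}].
    dx = (b - a) / n
    return sum(f(a + j * dx) * dx for j in range(n))

# As max(x_{j+1} - x_j) = (b - a)/n tends to 0, the sums approach
# the integral of x^2 over [0, 1], which equals 1/3.
for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
```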

The class of Riemann-integrable functions contains all continuous functions on $ [ a , b ] $ and some discontinuous ones; but all of them are necessarily bounded. For unbounded functions that grow slowly enough, and also for certain functions given on unbounded intervals, the so-called improper integral has been introduced; its definition requires a double limit transition.
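For example, the unbounded function $ 1 / \sqrt x $ has an improper integral on $ [ 0 , 1 ] $:

$$ \int\limits _ { 0 } ^ { 1 } \frac{d x }{\sqrt x } = \ \lim\limits _ {\epsilon \rightarrow 0+ } \ \int\limits _ \epsilon ^ { 1 } \frac{d x }{\sqrt x } = \ \lim\limits _ {\epsilon \rightarrow 0+ } \ ( 2 - 2 \sqrt \epsilon ) = 2 , $$

the second limit transition being hidden in the Riemann integrals over $ [ \epsilon , 1 ] $ themselves.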

The concept of a Riemann integral of a function of one variable can be extended to functions of several variables (see Multiple integral).

On the other hand, the needs of mathematical analysis have led to a generalization of the integral in quite another direction, in the form of the Lebesgue integral or, more generally, the Lebesgue–Stieltjes integral. Essential to the definition of these integrals is the introduction of a measure for certain sets, called measurable, and, on this foundation, of the notion of a measurable function. For measurable functions the Lebesgue–Stieltjes integral has been introduced. In this connection a broad range of different measures has been considered, together with the associated classes of measurable sets and functions. This makes it possible to adapt one integral or another to a given concrete problem.

Newton–Leibniz formula.

There is a connection between derivatives and integrals, expressed by the Newton–Leibniz formula (theorem):

$$ \int\limits _ { a } ^ { b } f ( x) d x = F ( b) - F ( a) . $$

Here $ f $ is a continuous function on $ [ a , b ] $ and $ F $ is its primitive function.
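For example, $ F ( x) = x ^ {3} / 3 $ is a primitive of $ f ( x) = x ^ {2} $, so

$$ \int\limits _ { 0 } ^ { 1 } x ^ {2} d x = \ \frac{1 ^ {3} }{3} - \frac{0 ^ {3} }{3} = \frac{1}{3} , $$

in agreement with the limit of the Riemann sums computed above.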

Taylor's formulas and series.

Along with derivatives and integrals, the most important ideas (research tools) in mathematical analysis are the Taylor formula and Taylor series. If a function $ f ( x) $, $ a < x < b $, has continuous derivatives up to and including order $ n $ in a neighbourhood of a point $ x _ {0} $, then it can be approximated in this neighbourhood by the polynomial

$$ P _ {n} ( x) = \ f ( x _ {0} ) + \frac{f ^ { \prime } ( x _ {0} ) }{1 ! } ( x - x _ {0} ) + \dots + \frac{f ^ { ( n) } ( x _ {0} ) }{n ! } ( x - x _ {0} ) ^ {n} , $$

called its Taylor polynomial (of degree $ n $), in powers of $ x - x _ {0} $:

$$ f ( x) \approx P _ {n} ( x) $$

(Taylor's formula); here the error of approximation,

$$ R _ {n} ( x) = f ( x) - P _ {n} ( x), $$

tends to zero faster than $ ( x - x _ {0} ) ^ {n} $ as $ x \rightarrow x _ {0} $:

$$ R _ {n} ( x) = o ( ( x - x _ {0} ) ^ {n} ) \ \textrm{ as } x \rightarrow x _ {0} . $$

Thus, in a neighbourhood of $ x _ {0} $, $ f $ can be approximated to any degree of accuracy by very simple functions (polynomials), which for their calculation require only the arithmetic operations of addition, subtraction and multiplication.
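As an illustration, the following minimal Python sketch computes the Taylor polynomials of $ e ^ {x} $ about $ x _ {0} = 0 $ and the error $ R _ {n} ( x) $; the choice of $ e ^ {x} $, the point $ x = 0.5 $ and the range of degrees are assumptions made purely for illustration.

```python
import math

def taylor_exp(x, n):
    # Taylor polynomial of degree n for exp about x0 = 0:
    # P_n(x) = sum_{k=0}^{n} x^k / k!
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in range(1, 7):
    p = taylor_exp(x, n)
    # R_n(x) = f(x) - P_n(x) shrinks rapidly as n grows (exp is analytic)
    print(n, p, math.exp(x) - p)
```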

Of special importance are the so-called analytic functions on a fixed neighbourhood of $ x _ {0} $: they have derivatives of all orders, $ R _ {n} ( x) \rightarrow 0 $ as $ n \rightarrow \infty $ on the neighbourhood, and they may be represented there by the infinite Taylor series

$$ f ( x) = f ( x _ {0} ) + \frac{f ^ { \prime } ( x _ {0} ) }{1 ! } ( x - x _ {0} ) + \dots . $$

Taylor expansions are also possible, under certain conditions, for functions of several variables, functionals and operators.

Historical information.

Up to the 17th century mathematical analysis was a collection of solutions to disconnected particular problems; for example, in the integral calculus, the problems of calculating the areas of figures, the volumes of bodies with curved boundaries, the work done by a variable force, etc. Each problem, or special group of problems, was solved by its own method, sometimes complicated and tedious and sometimes even brilliant (regarding the prehistory of mathematical analysis see Infinitesimal calculus). Mathematical analysis as a unified and systematic whole was put together in the works of I. Newton, G. Leibniz, L. Euler, J.L. Lagrange, and other scholars of the 17th and 18th centuries, and its foundation, the theory of limits, was laid by A.L. Cauchy at the beginning of the 19th century. A deep analysis of the original ideas of mathematical analysis was connected with the development in the 19th and 20th centuries of set theory, measure theory and the theory of functions of a real variable, and has led to a variety of generalizations.


Comments

In 1961, A. Robinson provided truly infinitesimal methods in analysis with a logical foundation, and so vindicated the founders of the calculus, Leibniz in particular, against the now usual $ \epsilon $-$ \delta $ analysis. This new old way of looking at analysis has been spreading over the last twenty years and may become of prime importance in a few years. See [a4] and Non-standard analysis.

References

[a1] E.A. Bishop, "Foundations of constructive analysis", McGraw-Hill (1967)
[a2] G.E. Shilov, "Mathematical analysis", 1–2, M.I.T. (1974) (Translated from Russian)
[a3] R. Courant, H. Robbins, "What is mathematics?", Oxford Univ. Press (1980)
[a4] N. Cutland (ed.), "Nonstandard analysis and its applications", Cambridge Univ. Press (1988)
[a5] G.H. Hardy, "A course of pure mathematics", Cambridge Univ. Press (1975)
[a6] E.C. Titchmarsh, "The theory of functions", Oxford Univ. Press (1979)
[a7] W. Rudin, "Principles of mathematical analysis", McGraw-Hill (1976) pp. 75–78
[a8] K.R. Stromberg, "Introduction to classical real analysis", Wadsworth (1981)
This article was adapted from an original article by S.M. Nikol'skii (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.