Algorithm, complexity of description of an
 
program-size complexity

A measure which characterizes the length of the description of an algorithm. The complexity of description of an algorithm is defined differently depending on the underlying formalization of the notion of an algorithm. At present (1987) there is as yet no generally accepted definition of the concept; the most frequently occurring cases are reviewed below.

The complexity of description of a normal algorithm is usually understood to mean the length of its encoding, i.e. the total length of its substitution formulas written in one line (with a special separating character interposed between the individual formulas). The complexity of description of a Turing machine usually means the number of its internal states and external symbols; a Turing machine may also be characterized by the number of its commands. In the case of recursive functions (cf. Recursive function) given by recursion schemes, the number of letters in these schemes usually serves as the measure of their complexity.
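To make these measures concrete, here is a minimal sketch (in Python; the pair representation of substitution formulas, the dictionary representation of a Turing machine program and the separator character are assumptions made only for this illustration) computing the encoding length of a normal algorithm, the state-plus-symbol count of a Turing machine, and the alternative command count.

```python
# A minimal sketch of the description-complexity measures discussed above.
# The concrete representations (pairs of strings for substitution formulas,
# a dict mapping (state, symbol) -> (state, symbol, move) for a Turing
# machine program) are assumptions made only for this illustration.

def normal_algorithm_complexity(formulas, separator="#"):
    """Length of all substitution formulas written in one line,
    with a separating character between the individual formulas."""
    encoded = separator.join(f"{left}->{right}" for left, right in formulas)
    return len(encoded)

def turing_machine_complexity(program):
    """Number of internal states plus number of external symbols
    occurring in the program."""
    states, symbols = set(), set()
    for (state, symbol), (new_state, new_symbol, _move) in program.items():
        states.update({state, new_state})
        symbols.update({symbol, new_symbol})
    return len(states) + len(symbols)

def turing_machine_commands(program):
    """Alternative measure: the number of commands (transitions)."""
    return len(program)

# Example: a two-formula normal algorithm and a two-command machine.
formulas = [("ab", "b"), ("b", "")]
program = {("q0", "1"): ("q0", "0", "R"),
           ("q0", "_"): ("q1", "_", "N")}
print(normal_algorithm_complexity(formulas))  # 9  ("ab->b#b->")
print(turing_machine_complexity(program))     # 5  (2 states + 3 symbols)
print(turing_machine_commands(program))       # 2
```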

An axiomatic definition of the complexity of description of an algorithm has been proposed in [2]. Below, the application of this definition to Turing machines is discussed. Let $(M_i)$, $i = 0, 1, \ldots,$ be the natural numbering of Turing machines, characterized by the fact that the machine itself (i.e. its program) can be effectively reconstructed from its number, and the number of the machine can be effectively reconstructed from the machine (i.e. from its program). A general recursive function $s$ is a measure of the complexity of the machines ($s(i)$ being the complexity of the machine $M_i$) if and only if: a) for any $y$ there exist at most finitely many machines of complexity $y$; and b) there exists an effective procedure which determines, for any $y$, all the machines of complexity $y$.
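As a toy illustration of condition b), one can effectively list all machines of a given complexity when the measure is taken to be the number of commands and the state set and external alphabet are fixed in advance (this restriction to fixed finite alphabets is an assumption made so that the listing stays finite; it is not how the conditions are stated above). A minimal sketch:

```python
# Toy illustration of condition b): effectively listing every machine of a
# given complexity y, where the complexity is taken to be the number of
# commands and machines are restricted to fixed finite state and symbol
# alphabets (an assumption made for the illustration only).
from itertools import combinations, product

STATES = ("q0", "q1")
SYMBOLS = ("0", "1")
MOVES = ("L", "R", "N")

def machines_of_complexity(y):
    """Yield every deterministic program with exactly y commands over the
    fixed alphabets above, represented as a frozenset of commands."""
    left_sides = list(product(STATES, SYMBOLS))           # (state, symbol)
    right_sides = list(product(STATES, SYMBOLS, MOVES))   # (state', symbol', move)
    for chosen_left in combinations(left_sides, y):       # distinct left-hand sides
        for chosen_right in product(right_sides, repeat=y):
            yield frozenset(zip(chosen_left, chosen_right))

# Condition a) in this restricted setting: only finitely many machines of a
# given complexity; for y = 1 there are 4 * 12 = 48 of them.
print(sum(1 for _ in machines_of_complexity(1)))  # 48
```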

Let $s$ be an arbitrary measure of the complexity of Turing machines. If $U$ is an arbitrary, effectively (i.e. algorithmically) enumerable infinite subclass of Turing machines, then there exist a machine $T$ belonging to $U$ and a machine $T'$ (which may or may not belong to $U$) such that $T'$ and $T$ compute the same function and the complexity of $T'$ is much lower than that of $T$. Hence it follows, in particular, that there exist primitive recursive functions whose lowest complexity in primitive recursive form (i.e. through schemes of primitive recursion) is much higher than their lowest complexity in general recursive form (i.e. through schemes of general recursion). Let the complexity of normal algorithms and of Turing machines mean, respectively, the length of the encoding and the number of internal states. Then any function of the algebra of logic in $N$ variables (cf. Boolean function) can be realized [3]: a) by a normal algorithm in an $m$-letter alphabet with complexity $\sim 2^N / \log_2 m$; and b) by a Turing machine with an $m$-letter external alphabet with complexity $\sim 2^N / N(m-1)$.

The study of the complexity of algorithms solving finite restrictions of algorithmically unsolvable problems (the so-called bounded algorithmic problems) began in the 1960s. A.A. Markov considered the following problem: to construct, for any function of the algebra of logic in $N$ variables, an encoding of a normal algorithm in the alphabet $\Phi = \{0, 1, a, b, c\}$ which realizes the function and has least complexity among all such algorithms. It has been shown [1], [4] that the complexity of a normal algorithm solving this problem is of order $2^N$. Another problem which has been studied is the complexity of algorithms which decide, for the first $n$ natural numbers, membership in a recursively enumerable set (the complexity of $n$-segments of recursively enumerable sets). It was shown that for normal algorithms the order of this complexity is at most $\log_2 n$, and that this estimate cannot be improved in the general case. There also exist sets, defined by simple diagonalization, whose $n$-segments have complexity of order $n$. It was also shown that if the running time of the algorithms is general-recursively bounded, then the complexity of $n$-segments of recursively enumerable sets may grow exponentially, and the exponent may attain the value $n$.
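A rough numerical reading of the realization bounds a) and b) and of the order of Markov's bounded problem (the values $N = 10$ and $m = 2$ are arbitrary example choices, not values from the article):

```python
# Back-of-the-envelope evaluation of the asymptotic bounds quoted above;
# N and m are arbitrary example values chosen for this illustration.
from math import log2

N, m = 10, 2
normal_algorithm_length = 2**N / log2(m)        # ~ 2^N / log2(m)  = 1024.0
turing_machine_states = 2**N / (N * (m - 1))    # ~ 2^N / (N(m-1)) = 102.4
markov_problem_order = 2**N                     # order 2^N for Markov's bounded problem
print(normal_algorithm_length, turing_machine_states, markov_problem_order)
```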

The complexity of description of an algorithm is used in formalizations of the problem of the minimum complexity of an algorithm which constructs a given finite object. This minimum complexity is often called simply the complexity of the finite object (relative to the particular specification of the complexity of description of an algorithm).

A definition of the complexity of finite objects was first proposed by A.N. Kolmogorov (cf. Algorithmic information theory). It was found that the following asymptotically exact relations exist between the complexity $K(x)$ according to Kolmogorov, the complexity $M_m(x)$ of the same objects expressed through the length of the encoding of a normal algorithm in an $m$-letter alphabet, and the complexity $T_m(x)$ expressed by the number of internal states of a Turing machine with an external $m$-letter alphabet:

$$ M_m(x) \sim \frac{K(x)}{\log_2 m}, \qquad T_m(x) \sim \frac{K(x)}{(m-1)\log_2 K(x)}. $$
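For instance, in the binary case $m = 2$ (a direct substitution into the relations above, stated only as an illustration) these become:

$$ M_2(x) \sim K(x), \qquad T_2(x) \sim \frac{K(x)}{\log_2 K(x)}. $$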

References

[1] A.A. Markov, "Normal algorithms connected with the computation of Boolean functions", Izv. Akad. Nauk SSSR Ser. Mat., 31 (1967), pp. 161–208 (in Russian)
[2] M. Blum, "A machine independent theory of the complexity of recursive functions", J. ACM, 14 (1967), pp. 322–336
[3] V.A. Kuz'min, "Realization of functions of the algebra of logic by automata, normal algorithms and Turing machines", Problemy Kibernet., 13 (1965), pp. 75–96 (in Russian)
[4] N.V. Petri, "Algorithms connected with predicates and Boolean functions", Soviet Math. Dokl., 10:2 (1969), pp. 294–297; Dokl. Akad. Nauk SSSR, 185:1 (1969), pp. 37–39
[5] A.K. Zvonkin, L.A. Levin, "The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms", Russian Math. Surveys, 25:6 (1970), pp. 82–124; Uspekhi Mat. Nauk, 25:6 (1970), pp. 85–127

How to Cite This Entry:
Algorithm, complexity of description of an. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Algorithm,_complexity_of_description_of_an&oldid=14048
This article was adapted from an original article by Ya.M. Barzdin' (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.