Computational complexity classes
Computational complexity measures the amount of computational resources, such as time and space, that are needed to compute a function.
In the 1930s many models of computation were invented, including Church's $\lambda$-calculus (cf. $\lambda$-calculus), Gödel's recursive functions, Markov algorithms (cf. also Algorithm) and Turing machines (cf. also Turing machine). All of these independent efforts to define precisely the intuitive notion of "mechanical procedure" were proved equivalent. This led to the universally accepted Church thesis, which states that the intuitive concept of what can be "automatically computed" is appropriately captured by the Turing machine (and all its variants); cf. also Church thesis; Computable function.
Computational complexity studies the inherent difficulty of computational problems. Attention is confined to decision problems, i.e., sets of binary strings $S$, where $S \subseteq \{0,1\}^*$. In a decision problem the input binary string $w$ is accepted if $w \in S$ and rejected if $w \notin S$ (cf. also Decision problem).
For any function $t(n)$ from the positive integers to itself, the complexity class $\mathrm{DTIME}[t(n)]$ is the set of decision problems that are computable by a deterministic, multi-tape Turing machine in $O(t(n))$ steps for inputs of length $n$.
Polynomial time ($\mathrm{P}$) is the set of decision problems accepted by Turing machines using at most some polynomial number of steps: $\mathrm{P} = \bigcup_{k=1}^{\infty} \mathrm{DTIME}[n^k]$.
Intuitively, a decision problem is "feasible" if it can be computed with an "affordable" amount of time and hardware, on all "reasonably sized" instances. $\mathrm{P}$ is a mathematically elegant and useful approximation to the set of feasible problems. Most natural problems that are in $\mathrm{P}$ have small exponents and multiplicative constants in their running time and thus are also feasible. This includes sorting, inverting matrices, pattern matching, linear programming, network flow, graph connectivity, shortest paths, minimal spanning trees, strongly connected components, testing planarity, and convex hull.
The complexity class $\mathrm{NTIME}[t(n)]$ is the set of decision problems that are accepted by a non-deterministic multi-tape Turing machine in $O(t(n))$ steps for inputs of length $n$. A non-deterministic Turing machine may make one of several possible moves at each step. One says that it "accepts an input" if at least one of its possible computations on that input is accepting. The time is the maximum number of steps that any single computation may take on this input, not the exponentially greater number of steps required to simulate all possible computations on that input.
Non-deterministic polynomial time ($\mathrm{NP}$) is the set of decision problems accepted by a non-deterministic Turing machine in some polynomial time: $\mathrm{NP} = \bigcup_{k=1}^{\infty} \mathrm{NTIME}[n^k]$. For example, let $\mathrm{SAT}$ be the set of Boolean formulas that have some assignment of their Boolean variables making them true (cf. also Boolean algebra). The formula $\varphi = (x \vee y) \wedge (\bar{y} \vee z)$ is in $\mathrm{SAT}$ because the assignment that makes $x$ and $z$ true and $y$ false satisfies $\varphi$. Given a formula $\varphi$ with $n$ Boolean variables, a non-deterministic machine can non-deterministically write a binary string of length $n$ and then check whether the assignment it has written down satisfies the formula, accepting if it does. Thus $\mathrm{SAT} \in \mathrm{NP}$. The decision versions of many important optimization problems, including travelling salesperson, graph colouring, clique, knapsack, bin packing and processor scheduling, are in $\mathrm{NP}$.
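The guess-and-check structure of an $\mathrm{NP}$ machine can be sketched deterministically: a verifier checks one certificate (an assignment) in polynomial time, and brute force replaces the non-deterministic guess by trying all $2^n$ certificates. The CNF encoding below is an illustrative assumption, not from the article.

```python
from itertools import product

def satisfies(assignment, cnf):
    """Polynomial-time check of one certificate: does the assignment
    make every clause (a list of (variable, polarity) pairs) true?"""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in cnf)

def sat_brute_force(cnf, variables):
    """Deterministically try every certificate a non-deterministic machine
    would guess in one step; exponential time, unlike the NP machine."""
    for bits in product([False, True], repeat=len(variables)):
        if satisfies(dict(zip(variables, bits)), cnf):
            return True
    return False

# (x or y) and (not y or z): satisfied by x = z = True, y = False.
cnf = [[("x", True), ("y", True)], [("y", False), ("z", True)]]
print(sat_brute_force(cnf, ["x", "y", "z"]))  # True
```

The asymmetry between the cheap verifier and the expensive search is exactly the gap between $\mathrm{P}$ and $\mathrm{NP}$.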
The complexity class $\mathrm{DSPACE}[s(n)]$ is the set of decision problems that are accepted by a deterministic Turing machine that uses at most $O(s(n))$ tape cells for inputs of length $n$. It is assumed that the input tape is read-only and space is charged only for the other tapes. Thus it makes sense to talk about space less than $n$.
Similarly one defines the non-deterministic space classes $\mathrm{NSPACE}[s(n)]$. On inputs of length $n$, an $\mathrm{NSPACE}[s(n)]$ Turing machine uses at most $O(s(n))$ tape cells on each of its computations. One says it "accepts an input" if at least one of its computations on that input is accepting. The main space complexity classes are polynomial space, $\mathrm{PSPACE} = \bigcup_{k=1}^{\infty} \mathrm{DSPACE}[n^k]$; logspace, $\mathrm{L} = \mathrm{DSPACE}[\log n]$; and non-deterministic logspace, $\mathrm{NL} = \mathrm{NSPACE}[\log n]$.
For each of the above resources (deterministic and non-deterministic time and space) there is a hierarchy theorem saying that more of the given resource enables one to compute more decision problems. These theorems are proved by diagonalization arguments: use the greater amount of the resource to simulate all machines using the smaller amount, and do something different from the simulated machine in each case.
See also [a11], [a10], [a4], [a20]. In the statement of these theorems the notion of a space- or time-constructible function is used. Recall that a function $f$ from the positive integers to itself is space constructible (respectively, time constructible) if and only if there is a deterministic Turing machine running in space $O(f(n))$ (respectively, time $O(f(n))$) that on every input of length $n$ computes the number $f(n)$ in binary. Every reasonable function is constructible.
The following are hierarchy theorems:
1) For all space-constructible $s'(n)$, if
$$s(n) = o(s'(n)),$$
then $\mathrm{DSPACE}[s(n)]$ is strictly contained in $\mathrm{DSPACE}[s'(n)]$, and $\mathrm{NSPACE}[s(n)]$ is strictly contained in $\mathrm{NSPACE}[s'(n)]$.
2) For all time-constructible $t'(n)$, if
$$t(n+1) = o(t'(n)),$$
then $\mathrm{NTIME}[t(n)]$ is strictly contained in $\mathrm{NTIME}[t'(n)]$.
3) For all time-constructible $t'(n)$, if
$$t(n)\log t(n) = o(t'(n)),$$
then $\mathrm{DTIME}[t(n)]$ is strictly contained in $\mathrm{DTIME}[t'(n)]$.
The slightly stronger requirement in the last of the above theorems has to do with the extra factor of $\log t(n)$ time that is required for a two-tape Turing machine to simulate a machine with more than two tapes. If one restricts attention to multi-tape Turing machines with a fixed number of tapes, then a strict hierarchy theorem for deterministic time results [a8].
Much less is known when comparing different resources. One can simulate $\mathrm{NTIME}[t(n)]$ in $\mathrm{DTIME}[2^{O(t(n))}]$ by simulating all the non-deterministic computations in turn. One can simulate $\mathrm{NSPACE}[s(n)]$ in $\mathrm{DTIME}[2^{O(s(n))}]$, for $s(n) \geq \log n$, because a computation using space $s(n)$ can be in one of at most $2^{O(s(n))}$ possible configurations.
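The configuration-graph simulation can be sketched as follows. The machine is assumed to be given abstractly by a successor function `moves` (a hypothetical encoding); a breadth-first search over the at most exponentially many configurations decides acceptance deterministically.

```python
from collections import deque

def accepts(start, accepting, moves):
    """Deterministic simulation of a non-deterministic machine: explore the
    configuration graph by BFS. With space s(n) there are 2^{O(s(n))}
    configurations, so this runs in deterministic time 2^{O(s(n))}."""
    seen, frontier = {start}, deque([start])
    while frontier:
        c = frontier.popleft()
        if c in accepting:
            return True
        for nxt in moves(c):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Toy instance: configurations 0..7, two non-deterministic moves per step.
print(accepts(0, {5}, lambda c: {(c + 1) % 8, (2 * c) % 8}))  # True
```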
Savitch's theorem provides a non-trivial relationship between deterministic and non-deterministic space [a19]: For $s(n) \geq \log n$,
$$\mathrm{NSPACE}[s(n)] \subseteq \mathrm{DSPACE}[(s(n))^2].$$
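The idea behind Savitch's theorem can be sketched by the recursive reachability predicate at its core: $\mathrm{PATH}(u, v, k)$ asks whether configuration $v$ is reachable from $u$ in at most $2^k$ steps, by guessing a midpoint. The recursion depth is $k = O(s(n))$ and each frame stores one midpoint of size $O(s(n))$, giving space $O(s(n)^2)$. The inputs `configs` and `edge` are illustrative assumptions.

```python
def path(u, v, k, configs, edge):
    """Is v reachable from u in at most 2**k steps of the configuration
    graph? Recursion on the midpoint; depth k, one midpoint per frame."""
    if k == 0:
        return u == v or edge(u, v)
    # Try every possible midpoint m of a path of length at most 2**k.
    return any(path(u, m, k - 1, configs, edge) and
               path(m, v, k - 1, configs, edge) for m in configs)

# Toy line graph 0 -> 1 -> ... -> 7: node 5 is reachable from 0
# within 2**3 = 8 steps but not within 2**2 = 4 steps.
print(path(0, 5, 3, range(8), lambda a, b: b == a + 1))  # True
```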
For each decision problem $S$, its complement $\bar{S} = \{0,1\}^* \setminus S$ is also a decision problem. For each complexity class $\mathcal{C}$, define its complementary class by $\mathrm{co}\text{-}\mathcal{C} = \{ \bar{S} : S \in \mathcal{C} \}$.
For deterministic classes $\mathcal{C}$, such as $\mathrm{P}$, $\mathrm{PSPACE}$ and $\mathrm{DTIME}[t(n)]$, it is obvious that $\mathcal{C} = \mathrm{co}\text{-}\mathcal{C}$. However, this is much less clear for non-deterministic classes. Since the definitions of $\mathrm{NP}$ and $\mathrm{NL}$ were made, it was widely believed that $\mathrm{NP} \neq \mathrm{co}\text{-}\mathrm{NP}$ and $\mathrm{NL} \neq \mathrm{co}\text{-}\mathrm{NL}$. However, in 1987 the latter was shown to be false [a25], [a18]. In fact, the following Immerman–Szelepcsényi theorem holds: For $s(n) \geq \log n$, $\mathrm{NSPACE}[s(n)] = \mathrm{co}\text{-}\mathrm{NSPACE}[s(n)]$.
Even though complexity classes are all defined as sets of decision problems, one can define functions computable in a complexity class as follows. For a complexity class $\mathcal{C}$, define $\mathrm{F}(\mathcal{C})$ to be the set of all polynomially-bounded functions $f : \{0,1\}^* \to \{0,1\}^*$ such that each bit of $f$ (thought of as a decision problem) is in $\mathcal{C}$ and $|f(w)| \leq |w|^{O(1)}$.
One compares the complexity of decision problems by reducing one to the other as follows. Let $S, T \subseteq \{0,1\}^*$. $S$ is reducible to $T$ ($S \leq T$) if and only if there exists a function $f \in \mathrm{F}(\mathrm{L})$ such that for all $w$, $w \in S$ if and only if $f(w) \in T$. The question of whether $w$ is a member of $S$ is thus reduced to the question of whether $f(w)$ is a member of $T$. If $T \in \mathrm{P}$, then $S \in \mathrm{P}$. Conversely, if $S \notin \mathrm{P}$, then $T \notin \mathrm{P}$.
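As a toy illustration (not from the article), take $S$ to be the set of multiples of $6$ and $T$ the set of multiples of $3$. The map $f$ below satisfies $w \in S$ if and only if $f(w) \in T$, so any decision procedure for $T$ immediately yields one for $S$:

```python
def f(w: int) -> int:
    """Reduction map: halve even inputs; send odd inputs to a fixed
    non-member of T (w odd implies w is not a multiple of 6)."""
    return w // 2 if w % 2 == 0 else 1

def in_T(x: int) -> bool:
    """The 'easy' problem T: multiples of 3."""
    return x % 3 == 0

def in_S(w: int) -> bool:
    """Decide S (multiples of 6) by reducing to T."""
    return in_T(f(w))

print([w for w in range(13) if in_S(w)])  # [0, 6, 12]
```

Real reductions map between structured problems (formulas, graphs) and must themselves be cheap to compute, e.g. in $\mathrm{F}(\mathrm{L})$, but the logical shape is the same.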
The complexity classes above, i.e., $\mathrm{L}$, $\mathrm{NL}$, $\mathrm{P}$, $\mathrm{NP}$, $\mathrm{co}\text{-}\mathrm{NP}$, and $\mathrm{PSPACE}$, are all closed under reductions in the following sense: If $\mathcal{C}$ is one of the above classes, $S \leq T$, and $T \in \mathcal{C}$, then $S \in \mathcal{C}$.
A problem $T$ is complete for complexity class $\mathcal{C}$ if and only if $T \in \mathcal{C}$ and for all $S \in \mathcal{C}$, $S \leq T$. If $\mathcal{C}$ is closed under reductions, then any complete problem, say $T$, characterizes the complexity class, because for all $S$, $S \leq T$ implies $S \in \mathcal{C}$.
A non-deterministic machine can be thought of as a parallel machine with weak communication. At each step, each processor $p$ may create copies of itself and set them to work on slightly different problems. If any of these offspring ever reports acceptance to $p$, then $p$ in turn reports acceptance to its parent. Each processor thus reports the "or" of its children.
A. Chandra, D.C. Kozen and L. Stockmeyer generalized non-deterministic Turing machines to alternating Turing machines, in which there are "or" states, reporting the "or" of their children, and there are "and" states, reporting the "and" of their children. They proved the following alternation theorem [a3]: For $s(n) \geq \log n$ and $t(n) \geq n$,
$$\mathrm{ASPACE}[s(n)] = \mathrm{DTIME}\big[2^{O(s(n))}\big], \qquad \mathrm{ATIME}\big[t(n)^{O(1)}\big] = \mathrm{DSPACE}\big[t(n)^{O(1)}\big].$$
In particular, $\mathrm{ASPACE}[\log n] = \mathrm{P}$ and $\mathrm{ATIME}[n^{O(1)}] = \mathrm{PSPACE}$.
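The correspondence between alternation and space can be illustrated by evaluating a quantified Boolean formula, the standard $\mathrm{PSPACE}$-complete problem: existential quantifiers compute the "or" of their two children and universal quantifiers the "and", exactly as alternating machines do. The recursion depth equals the number of variables, so this runs in polynomial space. The encoding (prefix as a list of quantifier/variable pairs, matrix as a predicate) is an illustrative assumption.

```python
def eval_qbf(prefix, matrix, env=None):
    """Evaluate a quantified Boolean formula. 'E' states take the "or" of
    their children, 'A' states the "and", mirroring an alternating machine."""
    env = env or {}
    if not prefix:
        return matrix(env)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, {**env, v: b}) for b in (False, True))
    return any(branches) if q == 'E' else all(branches)

# Forall x Exists y: x != y  -- true (pick y to be the other value).
print(eval_qbf([('A', 'x'), ('E', 'y')], lambda e: e['x'] != e['y']))  # True
# Exists y Forall x: x != y  -- false (no single y differs from both values).
print(eval_qbf([('E', 'y'), ('A', 'x')], lambda e: e['x'] != e['y']))  # False
```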
Let the class $\mathrm{ASPACE{\text{-}}TIME}[s(n), t(n)]$ (respectively, $\mathrm{ATIME{\text{-}}ALT}[t(n), a(n)]$) be the set of decision problems accepted by alternating machines simultaneously using space $s(n)$ and time $t(n)$ (respectively, time $t(n)$ and making at most $a(n)$ alternations between existential and universal states, and starting with existential). Thus $\mathrm{NP} = \mathrm{ATIME{\text{-}}ALT}[n^{O(1)}, 1]$. The polynomial-time hierarchy ($\mathrm{PH}$) is the set of decision problems accepted in polynomial time by alternating Turing machines making a bounded number of alternations between existential and universal states: $\mathrm{PH} = \mathrm{ATIME{\text{-}}ALT}[n^{O(1)}, O(1)]$.
$\mathrm{NC}$ is the set of decision problems recognizable by a parallel random access machine (a PRAM) using polynomially much hardware and parallel time $(\log n)^{O(1)}$. $\mathrm{NC}$ is often studied using uniform classes of acyclic circuits of polynomial size and depth $(\log n)^{O(1)}$. $\mathrm{NC}$ is characterized via alternating complexity in the following way:
$$\mathrm{NC} = \mathrm{ASPACE{\text{-}}TIME}\big[\log n, (\log n)^{O(1)}\big].$$
A decision problem is complex if and only if it has a complex description. This leads to a characterization of complexity via logic. The input object, e.g., a string or a graph, is thought of as a (finite) logical structure. R. Fagin characterized $\mathrm{NP}$ as the set of existential second-order expressible properties ($\exists\mathrm{SO}$), and N. Immerman and M.Y. Vardi characterized $\mathrm{P}$ as the set of properties expressible in first-order logic plus a least-fixed-point operator ($\mathrm{FO}(\mathrm{LFP})$). In fact, all natural complexity classes have natural descriptive characterizations [a7], [a24], [a22], [a5], [a23].
A probabilistic Turing machine is one that may flip a series of fair coins as part of its computation. A decision problem $S$ is in bounded-error probabilistic polynomial time ($\mathrm{BPP}$) if and only if there is a probabilistic polynomial-time Turing machine $M$ such that for all inputs $w$, if $w \in S$, then $\Pr[M \text{ accepts } w] \geq 2/3$, and if $w \notin S$, then $\Pr[M \text{ accepts } w] \leq 1/3$.
It follows by running $M$ polynomially many times and taking the majority answer that the probability of error can be made at most $2^{-n^k}$ for any fixed $k$ and inputs of length $n$. It is believed that $\mathrm{BPP} = \mathrm{P}$. An important problem in $\mathrm{BPP}$ that is not known to be in $\mathrm{P}$ is primality [a21].
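The amplification argument can be sketched as follows; `noisy_is_even` is a hypothetical stand-in for a $\mathrm{BPP}$ machine with error probability $1/4$, and the Chernoff bound makes the majority vote's error drop exponentially in the number of trials.

```python
import random

def amplify(M, w, trials=201):
    """Run the probabilistic decider M independently and take the majority
    vote; error probability shrinks exponentially in `trials`."""
    votes = sum(M(w) for _ in range(trials))
    return votes * 2 > trials

def noisy_is_even(w: int) -> bool:
    """Hypothetical BPP machine: decides 'w is even' but errs w.p. 1/4."""
    answer = (w % 2 == 0)
    return answer if random.random() < 0.75 else not answer

random.seed(0)
print(amplify(noisy_is_even, 42))  # True, with overwhelming probability
```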
Recent work (1998) on randomness and cryptography has led to a new and surprising characterization of $\mathrm{NP}$ via interactive proofs [a1]. This in turn has led to tight bounds on how closely many optimization problems can be approximated in polynomial time, assuming that $\mathrm{P} \neq \mathrm{NP}$ [a13].
It is known that $\mathrm{NC} \subseteq \mathrm{DSPACE}[(\log n)^{O(1)}]$, so by the space hierarchy theorem $\mathrm{NC}$ (and thus of course $\mathrm{L}$ and $\mathrm{NL}$) is strictly contained in $\mathrm{PSPACE}$. Strong lower bounds have been proved on some circuit complexity classes below $\mathrm{NC}^1$ [a2], [a6], [a12], [a17]; but even now (2001), thirty years after the introduction of the classes $\mathrm{P}$ and $\mathrm{NP}$, no other inequality concerning the following containments, including the question of whether $\mathrm{P}$ is not equal to $\mathrm{NP}$, is known:
$$\mathrm{L} \subseteq \mathrm{NL} \subseteq \mathrm{NC} \subseteq \mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PH} \subseteq \mathrm{PSPACE}.$$
For further reading, an excellent textbook on computational complexity is [a16].
[a1] S. Arora, C. Lund, R. Motwani, M. Sudan, M. Szegedy, "Proof verification and the hardness of approximation problems", J. Assoc. Comput. Mach. 45 : 3 (1998) pp. 501–555
[a2] M. Ajtai, "$\Sigma^1_1$-formulae on finite structures", Ann. Pure Appl. Logic 24 (1983) pp. 1–48
[a3] A. Chandra, L. Stockmeyer, "Alternation", Proc. 17th IEEE Symp. Found. Computer Sci. (1976) pp. 151–158
[a4] S.A. Cook, "A hierarchy for nondeterministic time complexity", J. Comput. Syst. Sci. 7 : 4 (1973) pp. 343–353
[a5] H.-D. Ebbinghaus, J. Flum, "Finite model theory", Springer (1995)
[a6] M. Furst, J.B. Saxe, M. Sipser, "Parity, circuits, and the polynomial-time hierarchy", Math. Systems Th. 17 (1984) pp. 13–27
[a7] R. Fagin, "Generalized first-order spectra and polynomial-time recognizable sets", in R. Karp (ed.), Complexity of Computation (SIAM–AMS Proc.) 7, Amer. Math. Soc. (1974) pp. 27–41
[a8] M. Fürer, "The tight deterministic time hierarchy", 14th ACM STOC Symp. (1982) pp. 8–16
[a9] M.R. Garey, D.S. Johnson, "Computers and intractability", Freeman (1979)
[a10] J. Hartmanis, P.M. Lewis, R.E. Stearns, "Hierarchies of memory limited computations", Sixth Ann. IEEE Symp. Switching Circuit Theory and Logical Design (1965) pp. 179–190
[a11] J. Hartmanis, R. Stearns, "On the computational complexity of algorithms", Trans. Amer. Math. Soc. 117 : 5 (1965) pp. 285–306
[a12] J. Håstad, "Almost optimal lower bounds for small depth circuits", 18th ACM STOC Symp. (1986) pp. 6–20
[a13] D. Hochbaum (ed.), "Approximation algorithms for NP-hard problems", PWS (1997)
[a14] N. Jones, E. Lien, W. Laaser, "New problems complete for nondeterministic log space", Math. Systems Th. 10 (1976) pp. 1–17
[a15] R. Greenlaw, H.J. Hoover, L. Ruzzo, "Limits to parallel computation: P-completeness theory", Oxford Univ. Press (1995)
[a16] C.H. Papadimitriou, "Computational complexity", Addison-Wesley (1994)
[a17] A.A. Razborov, "Lower bounds on the size of bounded depth networks over a complete basis with logical addition", Math. Notes 41 (1987) pp. 333–338 (Mat. Zametki 41 (1987) pp. 598–607)
[a18] R. Szelepcsényi, "The method of forced enumeration for nondeterministic automata", Acta Inform. 26 (1988) pp. 279–284
[a19] W. Savitch, "Relationships between nondeterministic and deterministic tape complexities", J. Comput. Syst. Sci. 4 (1970) pp. 177–192
[a20] J.I. Seiferas, M.J. Fischer, A.R. Meyer, "Refinements of nondeterministic time and space hierarchies", Proc. Fourteenth Ann. IEEE Symp. Switching and Automata Theory (1973) pp. 130–137
[a21] R. Solovay, V. Strassen, "A fast Monte-Carlo test for primality", SIAM J. Comput. 6 (1977) pp. 84–86
[a22] M.Y. Vardi, "The complexity of relational query languages", 14th ACM STOC Symp. (1982) pp. 137–146
[a23] N. Immerman, "Descriptive complexity", Graduate Texts in Computer Sci., Springer (1999)
[a24] N. Immerman, "Relational queries computable in polynomial time", 14th ACM STOC Symp. (1982) pp. 147–152 (revised version: Inform. & Control 68 (1986) pp. 86–104)
[a25] N. Immerman, "Nondeterministic space is closed under complementation", SIAM J. Comput. 17 : 5 (1988) pp. 935–938
PSPACE. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=PSPACE&oldid=41939