Games, theory of
The theory of mathematical models for making optimal decisions under conditions of conflict.

==Formal definition of a game.==

A conflict is a phenomenon in which it is known who is taking part in the phenomenon and how, what the outcomes of it are, and also who is interested in the outcomes and in what way. Thus, for a formal description of a conflict it is necessary to indicate: 1) the set $ \mathfrak K _ {a} $ of elements participating in it (called the coalitions of action); 2) the family of sets $ \mathfrak x _ {K} $ of strategies of each of the coalitions of action; 3) the set of situations $ \mathfrak x \subset \prod _ {K \in \mathfrak K _ {a} } \mathfrak x _ {K} $; 4) the set $ \mathfrak K _ {i} $ of participants' interests (called the coalitions of interests); and 5) the family of binary relations $ \succ _ {K} $ on $ \mathfrak x \times \mathfrak x $ ( $ K \in \mathfrak K _ {i} $ ) expressing the preferences of coalitions of interests between the situations. The system

$$ \langle \mathfrak K _ {a} ,\ \{ \mathfrak x _ {K} \} _ {K \in \mathfrak K _ {a} } ,\ \mathfrak x ,\ \mathfrak K _ {i} ,\ \{ \succ _ {K} \} _ {K \in \mathfrak K _ {i} } \rangle $$

is called a game.
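
For illustration (an example added here, not part of the original text), the game of matching pennies fits this scheme with two players who are at the same time the coalitions of action and of interests:

$$ \mathfrak K _ {a} = \mathfrak K _ {i} = \{ 1, 2 \} ,\ \ \mathfrak x _ {1} = \mathfrak x _ {2} = \{ \textrm{ H } , \textrm{ T } \} ,\ \ \mathfrak x = \mathfrak x _ {1} \times \mathfrak x _ {2} , $$

with $ x \succ _ {1} y $ whenever the two coins match in the situation $ x $ but not in $ y $, and $ x \succ _ {2} y $ whenever they differ in $ x $ but not in $ y $.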

The content of the theory of games consists in establishing the connections between the elements of each game and its "optimal" outcomes: first of all, making precise the very concept of optimality, proving the existence of optimal outcomes and determining them in practice. The development of the theory of games leads to problems of studying the connections between different games, expressed in various calculi of games, and to the consideration of classes, spaces and categories of games.

==Classification of games.==

Games in which there are no coalitions of interests, or for which there is only one such coalition, are the objects of study of the purely descriptive mathematical theory or of traditional theories of optimization. Games in the proper sense of the word have at least two coalitions of interests.

Usually in game theory, both the coalitions of action and the coalitions of interests are atomic (discrete) and are simply subsets of some set $ I $, the elements of which are called players. Formerly the set of players in a game was assumed to be finite, but later (in the 1970's), games with infinite and even non-atomic sets of players began to be studied.

For games with a single coalition of action, the set of all situations may be taken to be the set of strategies of this unique coalition of action, and no further mention is made of strategies. Such games are therefore called non-strategic games. All remaining games, those with two or more coalitions of action, are called strategic games.

A wide class of non-strategic games can be obtained as follows. Let $ I $ be a set of players, and set $ \mathfrak K _ {i} = 2 ^ {I} $. To every player $ i \in I $ is associated a one-dimensional Euclidean line $ E ^ {i} $, to be taken as a scale of measure of his utility, and to every coalition of interests $ K \in \mathfrak K _ {i} $ is associated the product $ E ^ {K} = \prod _ {i \in K } E ^ {i} $. Finally, sets $ v ( K) \subset E ^ {K} $ are introduced for every $ K \in \mathfrak K _ {i} $, together with a set $ H \subset E ^ {I} $ (the elements of which are called imputations), and by definition it is assumed that $ x \succ _ {K} y $ for imputations $ x , y \in E ^ {I} $ ("the imputation x dominates the imputation y with respect to the coalition K") if and only if $ x , y \in ( v ( K) \times E ^ {I \setminus K } ) \cap H $ and $ x _ {i} > y _ {i} $ for all $ i \in K $. Such games are called games without side payments. They describe the situation where every coalition of interests $ K $ can forcibly guarantee for its members the components of an arbitrary vector in $ v ( K) $ as pay-offs (cf. also Gain function), and the usual considerations of utility dictate the non-rationality of any choice of imputations outside $ H $. Usually the set $ v ( K) $ is subjected to some natural conditions: 1) $ v ( K) $ is a non-empty closed convex set; 2) if $ x _ {K} \in v ( K) $ and if $ x _ {K} \geq y _ {K} $ (inequality of vectors is to be understood componentwise), then $ y _ {K} \in v ( K) $ ("whoever is capable of more is capable of less"); 3) if $ K \cap L = \emptyset $, then $ v ( K \cup L) \supset v ( K) \times v ( L) $ (coalitions $ K $ and $ L $ can attain at least as much together as they can separately); 4) $ H $ consists of all vectors $ x \in E ^ {I} $ for which there exists a vector $ y \in v ( I) $ such that $ y \geq x $ (thus $ v ( I) $ consists of all pay-off vectors $ x $ such that $ I $ can give its members not less than $ x $; $ H $ consists of all vectors $ x $ such that $ I $ can give its members exactly $ x $). In particular, if

$$ V ( K) = \left \{ x : \sum _ {i \in K } x _ {i} \leq v ( K) \right \} , $$

where $ v ( K) $ is some real number, and if

$$ H = \left \{ x : \sum _ {i \in I } x _ {i} = v ( I) ,\ x _ {i} \geq v ( i) \ ( i \in I) \right \} , $$

then a classical cooperative game is obtained. In this case, the definition of $ V ( K) $ means the possibility of a coalition $ K $ attaining with confidence a total pay-off $ v ( K) $ and distributing it arbitrarily among its members. The definition of $ H $ means that the total sum distributed in the game is $ v ( I) $ (distribution of a smaller sum is disadvantageous; more is simply impossible!), and every player $ i $ will agree only with a part $ x _ {i} $ not less than the amount $ v ( i) $ that he is certain to receive independently.
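
As a small worked instance (added here for illustration), take three players with $ v ( i) = 0 $ for each single player, $ v ( K) = 1 $ for every coalition of two players and $ v ( I) = 1 $. Then the set of imputations is

$$ H = \left \{ x : x _ {1} + x _ {2} + x _ {3} = 1 ,\ x _ {i} \geq 0 \ ( i \in I) \right \} , $$

the simplex of all ways of dividing the total pay-off $ 1 $ among the three players.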

The main class of strategic games is that of non-cooperative games (cf. Non-cooperative game), in which the set $ I $ of players coincides with the set of coalitions of action $ \mathfrak K _ {a} $ and with the set of coalitions of interests $ \mathfrak K _ {i} $. Every player $ i \in I $ has at his disposal a set of strategies $ \mathfrak x _ {i} $, the set of all situations ($ n $-tuples) can be taken to be the Cartesian product $ \mathfrak x = \prod _ {i \in I } \mathfrak x _ {i} $, and the preference relation $ \succ _ {i} $ is described in terms of the pay-off function $ H _ {i} : \mathfrak x \rightarrow E ^ {i} $, where now $ x ^ \prime \succ _ {i} x ^ {\prime\prime} $ if and only if $ H _ {i} ( x ^ \prime ) > H _ {i} ( x ^ {\prime\prime} ) $. Thus, a non-cooperative game can be described as a triple $ \Gamma = \langle I , \{ \mathfrak x _ {i} \} _ {i \in I } , \{ H _ {i} \} _ {i \in I } \rangle $. If all the sets of strategies $ \mathfrak x _ {i} $ are finite, then the non-cooperative game $ \Gamma $ is said to be finite. Finite non-cooperative games with two players $ ( I = \{ \textrm{ I } , \textrm{ II } \} ) $ are called bimatrix games (cf. Bimatrix game).
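
A minimal computational sketch (an addition, not from the original article; the pay-off matrices below are illustrative): a bimatrix game is conveniently stored as two matrices, and a pure situation can be tested for equilibrium by checking that neither player gains from a unilateral deviation.

    import numpy as np

    def is_pure_equilibrium(A, B, i, j):
        # (i, j) is an equilibrium if player I cannot gain by changing his row
        # and player II cannot gain by changing his column.
        return A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max()

    # A prisoner's-dilemma-type game: the only equilibrium is (1, 1).
    A = np.array([[3, 0], [4, 1]])   # pay-offs of player I
    B = np.array([[3, 4], [0, 1]])   # pay-offs of player II
    print([(i, j) for i in range(2) for j in range(2) if is_pure_equilibrium(A, B, i, j)])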

If $ I = \{ \textrm{ I } , \textrm{ II } \} $ and $ H _ {\textrm{ I } } ( x) = - H _ {\textrm{ II } } ( x) $ for all $ x \in \mathfrak x $, then $ \Gamma $ is called a two-person zero-sum game. Every two-person zero-sum game can be given by a triple $ \langle \mathfrak x , \mathfrak h , H\rangle $, where $ \mathfrak x $ and $ \mathfrak h $ are the sets of strategies of the players $ \textrm{ I } $ and $ \textrm{ II } $ respectively, and $ H $ is the pay-off function of player $ \textrm{ I } $. Finite two-person zero-sum games are called matrix games (cf. Matrix game).
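
For example (added here), in the matrix game with pay-off matrix

$$ H = \begin{pmatrix} 3 & 1 \\ 4 & 2 \end{pmatrix} $$

the entry $ 2 $ is the minimum of its row and the maximum of its column; the second row and the second column therefore form a saddle point in pure strategies, and the value of the game is $ 2 $.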

If $ \mathfrak x = \mathfrak h = [ 0, 1] $ in a two-person zero-sum game, then every situation in such a game can be described as a point in the square $ [ 0, 1] \times [ 0, 1] $; such games are called games on the unit square (cf. Game on the unit square).

Let $ I $ be a set of players; $ \mathfrak x _ {i} $ ( $ i \in I $ ) the sets of their strategies; $ X $ a set the elements of which are called positions; $ T $ a set the elements of which have the meaning of moments in time; let $ f : \mathfrak x \rightarrow 2 ^ {T \times X } $, that is, $ f $ associates with every situation of the game a function on $ T $ with values in $ X $; every $ f $-image of a situation is called a play (the set of all plays is denoted by $ \mathfrak P $); and, finally, let $ h _ {i} : \mathfrak P \rightarrow E ^ {i} $. The system

$$ \Gamma = \langle I , \{ \mathfrak x _ {i} \} _ {i \in I } ,\ T , X , f , \mathfrak P ,\ \{ h _ {i} \} _ {i \in I } \rangle $$

is called a general positional game. Ultimately, the pay-off of every player in such a game is completely determined by the situation that arises, that is, by the choice of strategies of all the players. Therefore, such games are non-cooperative games.

Suppose that in a general positional game $ \Gamma $ the element $ X $ is a finite-dimensional Euclidean space, $ T $ is a set of real numbers, and suppose that $ \phi : \mathfrak x \times X \times T \rightarrow X $ is given. Suppose that situations $ x \in \mathfrak x $ are such that the system of differential equations

$$ \frac{d \mathbf x }{dt } = \ \phi ( x, \mathbf x , t),\ \ \mathbf x \in X,\ \ t \in T $$

(regarded as a vector equation), has a unique solution for the given initial conditions. Then every situation determines a play, which in this situation is called a trajectory. Games defined in this way are called differential games.
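
A rough numerical sketch of how a situation generates a trajectory (an addition; the dynamics, the feedback strategies and the step size below are purely illustrative, and the situation is interpreted as a pair of feedback controls):

    def trajectory(phi, strategies, x0, t0, t1, steps=1000):
        # Euler approximation of the play generated by fixed feedback strategies;
        # phi(u, x, t) plays the role of the right-hand side of the equation above,
        # with the situation entering through the players' current controls u.
        dt = (t1 - t0) / steps
        x, t, path = x0, t0, [x0]
        for _ in range(steps):
            u = tuple(s(x, t) for s in strategies)   # each player observes (x, t)
            x = x + dt * phi(u, x, t)
            t += dt
            path.append(x)
        return path

    # One-dimensional example: player I pushes the state up, player II pulls it down.
    phi = lambda u, x, t: u[0] - u[1]
    play = trajectory(phi, [lambda x, t: 1.0, lambda x, t: 0.5], x0=0.0, t0=0.0, t1=1.0)
    print(play[-1])   # approximately 0.5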

The class of general positional games includes positional games and dynamic games (including stochastic games, recursive games and games of survival; cf. Positional game; Dynamic game; Recursive game; Game of survival; Stochastic game).

==Main results in the theory of games.==

The fundamental problem in the theory of games is connected with optimality principles, which must, first, reflect in sufficient measure meaningful notions of optimality and, secondly, be realizable for sufficiently wide and naturally occurring classes of games. These two requirements on optimality principles contradict one another to a certain extent, and for many interesting natural classes of games the theory of games has not yet developed corresponding optimality principles. At the same time, classes of games are known that are served, with equal success, by several different optimality principles. Thus, the construction and analysis of optimality principles form an essential part of the theory of games. Every optimality principle is realized (not necessarily uniquely) as a set of optimal situations. This set (which may be empty) is called a solution (cf. also Solution in game theory).

The logical foundations of every solution are certain features of an ordinary extremum, formulated so as to reflect the simultaneous extremalization of several functions. Thus, apart from the striving for the greatest values of the pay-offs (in the arithmetical sense), solutions of games can express certain characteristics of stability and symmetry, as well as various combinations of the two.

Thus, non-dominatedness (cf. Domination) may be regarded as one of the features of optimality in non-strategic games, that is, the agreement to adopt as optimal only those situations $ x $ for which $ y \succ _ {K} x $ never holds for any situation $ y $ and any coalition of interests $ K $ (the set of situations that are optimal in this sense is called the core of the game, see Core in the theory of games). Another optimality principle consists of a certain combination of internal and external stability; this means that a set of situations $ R $ is said to be a solution if $ x \succ _ {K} y $ cannot happen for $ x , y \in R $ and $ K \in \mathfrak K _ {i} $, and if for every $ x \notin R $ there is a $ z \in R $ and a $ K \in \mathfrak K _ {i} $ such that $ z \succ _ {K} x $ (such an $ R $ is called a von Neumann–Morgenstern solution of the game). One can formalize the notion of a threat by some coalitions against others, as well as answering counter-threats, and one can declare as stable every situation where every threat can be countered by a counter-threat. The set of situations stable in this sense is called the kernel of the game (see Core in the theory of games). It is also of interest to understand optimality as a distinctive "fairness", given by some axiom system. Such an axiom scheme has been formulated for cooperative games, one that reduces to a unique imputation (situation) called the Shapley vector.
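
In the classical cooperative (side-payment) case the core is usually described by the coalitional-rationality conditions $ \sum _ {i \in K } x _ {i} \geq v ( K) $ for every coalition $ K $. A short sketch of this test (added here; the characteristic function below is illustrative):

    from itertools import combinations

    def in_core(x, v, players):
        # x must distribute v(I) and no coalition K may be able to secure
        # more than its members receive under x.
        if abs(sum(x[i] for i in players) - v[frozenset(players)]) > 1e-9:
            return False
        for r in range(1, len(players)):
            for K in combinations(players, r):
                if sum(x[i] for i in K) < v[frozenset(K)] - 1e-9:
                    return False
        return True

    # Three players: singletons get 0, pairs can secure 0.6, the grand coalition 1.
    players = (1, 2, 3)
    v = {frozenset(K): w for K, w in [((1,), 0), ((2,), 0), ((3,), 0),
                                      ((1, 2), 0.6), ((1, 3), 0.6), ((2, 3), 0.6),
                                      ((1, 2, 3), 1.0)]}
    print(in_core({1: 1/3, 2: 1/3, 3: 1/3}, v, players))   # True
    print(in_core({1: 0.7, 2: 0.3, 3: 0.0}, v, players))   # False: coalition {2, 3} objects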

In strategic games the main optimality principle is the idea of equilibrium. Here a situation is declared to be optimal if no player can increase his pay-off by unilaterally deviating from it. One and the same optimality principle, formulated for different classes of games, can again have different informal expressions. Thus, the principle of equilibrium just mentioned becomes the minimax principle for two-person zero-sum games.
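
In formulas (added for clarity): in a two-person zero-sum game $ \langle \mathfrak x , \mathfrak h , H \rangle $ a situation $ ( x ^ {*} , y ^ {*} ) $ is an equilibrium precisely when it is a saddle point of $ H $,

$$ H ( x , y ^ {*} ) \leq H ( x ^ {*} , y ^ {*} ) \leq H ( x ^ {*} , y ) \ \ \textrm{ for all } x \in \mathfrak x ,\ y \in \mathfrak h , $$

and, when the extrema involved are attained, such a saddle point exists if and only if $ \max _ {x} \min _ {y} H ( x , y ) = \min _ {y} \max _ {x} H ( x , y ) $; the common value is the value of the game.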

The formulas and algorithms that enable one to find solutions of games can also be regarded as optimality principles (with a restricted domain of applicability). For example, in a matrix game with a diagonal pay-off matrix, an optimality principle could be the choice by the players of strategies with probabilities inversely proportional to the corresponding diagonal entries of the matrix. The realizability of an optimality principle for some class of games consists of the existence, for all games in the class, of corresponding (non-empty) sets of optima. Their absence among the originally given situations can be overcome by extending the sets of strategies (and, therefore, the sets of situations constructible from them). For non-cooperative games, mixed (randomized) strategies are fruitfully employed.
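
A short sketch of that rule (an addition; the diagonal entries below are illustrative). For a diagonal pay-off matrix with positive entries $ a _ {1} \dots a _ {n} $, the optimal mixed strategy of each player puts probability proportional to $ 1 / a _ {i} $ on strategy $ i $, and the value of the game is $ 1 / \sum _ {i} 1 / a _ {i} $:

    import numpy as np

    def diagonal_game_solution(diag):
        # Optimal mixed strategy and value for the matrix game diag(a_1, ..., a_n),
        # all a_i > 0: probabilities proportional to 1/a_i, value 1 / sum(1/a_i).
        inv = 1.0 / np.asarray(diag, dtype=float)
        return inv / inv.sum(), 1.0 / inv.sum()

    a = [1.0, 2.0, 4.0]
    p, value = diagonal_game_solution(a)
    print(p, value)           # [4/7, 2/7, 1/7], value 4/7
    print(p @ np.diag(a))     # pay-off 4/7 against every pure strategy of the opponent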

The majority of proofs of existence theorems for solutions in the theory of games have a non-constructive character (many of them are based on fixed-point theorems) and do not contain algorithms for finding solutions. Therefore, particular analytic and numerical methods for the location of solutions are important. Moreover, the problem of finding practical solutions of games frequently turns out to be very complicated, and it has proved possible to find solutions only for very restricted classes of games.

Several directions can be distinguished in the development of the calculi of games. One has investigated the possibilities of finding and describing (at least partially) a solution of a game on the basis of solutions of another game that has, in some sense or other, a simpler structure. Elementary examples of such reductions are the natural rejection of dominated strategies in non-cooperative games, their symmetrization, as well as the approximation of infinite games by finite ones. The construction of a cooperative model of a non-cooperative game proposed by J. von Neumann can also be viewed as a reduction process. Converse results are also known, indicating that the solution of games of one class cannot, in principle, be reduced to the solution of games of a more restricted class. For example, one can cite examples of non-cooperative three-person games for which it is impossible to compute the solution using rational operations on the data, while the solution of bimatrix games can always be brought about by the use of rational operations.

Fixing some elements in games of a certain class (for example, the set of players and the sets of their strategies in the class of all non-cooperative games) and distinguishing the games by the remaining elements (in the given example, by the pay-off functions of the players) leads to the consideration of spaces of games, studied by means of functional analysis and topology. Introduction of a probability measure into such spaces of games leads to stochastic games.

==Connections of the theory of games with other areas of mathematics.==

The theory of games is closely connected with other areas of mathematics. As a general theory of sets with several binary relations, it is close to algebra. A very diverse mathematical apparatus is used in the theory of games, and a large number of traditional mathematical problems have game-theoretical generalizations. Optimal behaviour in strategic games usually turns out to be randomized, which means that probability-theoretic concepts occur naturally in the theory of games and are used in it; at one time it was even customary to regard the theory of games as part of probability theory.

Mathematical models of operations research (being models of making optimal decisions) are naturally divided according to three levels: determinate, stochastic and indeterminate, according to the degree of informedness of the subjects making decisions. Making a decision under conditions of indeterminacy can be interpreted as a conflict of the subject making a decision against "nature", and thus as a game. In essence, all multi-criterion problems (cf. Multi-criterion problem) belong to the theory of games.

Therefore, the theory of games in its methodological foundations and practical orientations can be regarded as part of operations research.

The theory of games is one of the parts of the mathematical apparatus of cybernetics. In dynamic games, strategies are expressed as functions of the "information states" of the players, so that in the process of a game the players can gain or lose information. This establishes a connection between the theory of games and information theory.

==Applications of the theory of games.==

In addition to various connections within mathematics, the theory of games has numerous applications outside it. They principally concern those branches of knowledge and forms of practical activity that have to do directly with conflict: military affairs, the market economy (in particular, the theory of games is applicable to questions of the struggle of firms for markets, to the phenomena of oligopoly, to the planning of advertising campaigns, to the fixing of prices in competitive markets, to playing the market, etc.), law, etc. The theory of games can also be used to investigate various phenomena in the conditions of a planned economy (for example, the problems of centralization and decentralization of the control of production, the overcoming of departmental conflicts, optimal planning according to many indicators, planning under conditions of uncertainty generated, for example, by technical progress, etc.).

==Historical sketch.==

The birth of the theory of games as a mathematical discipline can be traced to a letter of B. Pascal to P. Fermat dated 29 July 1654, which can also be regarded as the beginning of the mathematical theory of probability. Subsequently, isolated ideas that can be considered to be game-theoretical were stated by Waldegrave in 1712 (the discovery of optimal mixed strategies for the game of "Le Her"), D. Bernoulli in 1732 (an analysis of the St. Petersburg game), P. Laplace in 1814 (a consideration of optimality principles), and J. Bertrand in 1888 (a game-theoretical approach to baccarat). Several essentially game-theoretical assertions have been considered in an equivalent form in other branches of mathematics: in the theory of best approximations (P.L. Chebyshev), the geometry of convex polytopes (H. Minkowski), and in the theory of linear inequalities (E. Stiemke). In 1911, E. Zermelo described a game-theoretical approach to chess; in 1912, E. Borel began a systematic study of matrix games, having proved the existence in certain cases of optimal mixed strategies. In 1928 the paper by von Neumann, "Zur Theorie der Gesellschaftsspiele", containing the fundamental ideas of the contemporary theory of games, appeared. These ideas were developed in detail by von Neumann and O. Morgenstern [1]. Since then the theory of games has entered into a number of branches of contemporary mathematics.

====References====

[1] J. von Neumann, O. Morgenstern, "Theory of games and economic behavior" , Princeton Univ. Press (1947)
[2] R.D. Luce, H. Raiffa, "Games and decisions. Introduction and critical survey" , Wiley (1957)
[3] S. Karlin, "Mathematical methods and theory in games, programming and economics" , Addison-Wesley (1959)
[4] T.E.S. Raghavan, "Some topics in two-person games" , Amer. Elsevier (1971)
[5] N.N. Vorob'ev, "Entwicklung der Spieltheorie" , Deutsch. Verlag Wissenschaft. (1975)
[6] The theory of games. Annotated index of publications up to 1968, Leningrad (1976) (In Russian)
[7] N.N. Vorob'ev, "Game theory. Lectures for economists and system scientists" , Springer (1977) (Translated from Russian)
[8] J.P. Aubin, "Mathematical methods of game and economic theory" , North-Holland (1979) (Translated from French)
[9] The theory of games. Annotated index of publications 1969–1974, Leningrad (1980) (In Russian)
[10] J. Rosenmüller, "The theory of games and markets" , North-Holland (1981) (Translated from German)
[11] G. Owen, "Game theory" , Acad. Press (1982)
[12] M. Shubik, "Game theory in the social sciences" , M.I.T. (1982)
[13] M. Shubik, "A game-theoretic approach to political economy" , M.I.T. (1984)
[14] N.N. Vorob'ev, "Foundations of game theory. Non-cooperative games" , Leningrad (1984) (In Russian)

====Comments====

In the West, usually no distinction is made between coalitions of action and coalitions of interests; also the concept of situation is not introduced.

Proofs based on fixed-point theorems are no longer regarded as non-constructive. The Lemke–Howson algorithm [a7] for computing equilibrium points of bimatrix games has led to a variety of computational procedures; see also [a8].
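
By way of orientation (an addition; this is the elementary indifference-condition calculation for $ 2 \times 2 $ games, not the Lemke–Howson pivoting method itself), a completely mixed equilibrium of a $ 2 \times 2 $ bimatrix game, when it exists, is found by letting each player mix so that his opponent is indifferent between his two pure strategies:

    import numpy as np

    def mixed_2x2(A, B):
        # Probabilities of the first row (player I) and first column (player II)
        # that make the opponent indifferent between his two pure strategies.
        p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
        q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
        return (float(p), float(1 - p)), (float(q), float(1 - q))

    # Matching pennies: the unique equilibrium mixes both strategies with probability 1/2.
    A = np.array([[1, -1], [-1, 1]])
    B = -A
    print(mixed_2x2(A, B))   # ((0.5, 0.5), (0.5, 0.5))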

Originally, most research in game theory was concentrated on the so-called normal or strategic form. In this form all possible sequences of decisions of every player are set out against each other. For a two-player game this results in a matrix structure. In such a formulation dynamic aspects are completely suppressed.

Opposite to the treatment in normal form is the treatment of games "in extensive form", see [a1]. The general idea in this formulation is that a game evolves according to a road or tree structure; at every crossing or branching a decision has to be made as to how to proceed. A positional game is usually referred to as a feedback game in the Western literature.

The theory of zero-sum differential games was created by R. Isaacs [a1], at about the same time as optimal control theory was developed. Isaacs' work had a great impact; [a3] deals with similar research in the USSR. Later on, the role of information ("who knows what when") came to occupy a more central place [a2]. In [a4] a long survey article reviewing the game-theory literature up to 1981 has been published.

====References====

[a1] R. Isaacs, "Differential games" , Wiley (1965)
[a2] T. Basar, G.J. Olsder, "Dynamic noncooperative game theory" , Acad. Press (1982)
[a3] N.N. Krasovskii, A.I. Subbotin, "Game theoretical control problems" , Springer (1988) (Translated from Russian)
[a4] M. Shubik (ed.) , The mathematics of conflict , North-Holland (1983)
[a5] T. Driessen, "Cooperative games, solutions and applications" , Kluwer (1988)
[a6] E. van Damme, "Stability and perfection of Nash equilibria" , Springer (1987)
[a7] C.E. Lemke, J.T. Howson, Jr., "Equilibrium points of bimatrix games" SIAM J. Appl. Math. , 12 (1964) pp. 413–423
[a8] H.E. Scarf, "The computation of economic equilibria" , Yale Univ. Press (1973)
[a9] J. Szép, F. Forgó, "Introduction to the theory of games" , Reidel (1985) pp. 171; 199
This article was adapted from an original article by N.N. Vorob'ev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.