
Two-person zero-sum game

From Encyclopedia of Mathematics

A game played by two opponents with strictly opposite interests. Formally, this means that on passing from one situation of the game to another, an increase in the pay-off of one player results in a numerically equal decrease in the pay-off of the other, so that in every situation the sum of the pay-offs of the two players is constant (this sum may be taken to be zero, since the pay-off of one player equals the loss of the other). For this reason two-person zero-sum games are also called two-person games with zero sum, or antagonistic games. The mathematical concept of a two-person zero-sum game (pay-off functions which are numerically equal and opposite in sign) is a formal one, and differs from the corresponding philosophical concept. If, in a two-person zero-sum game, one of the players manages to increase his pay-off by a definite amount as a result of agreements and negotiations, his opponent will have lost an equal amount. Consequently, any agreement would be disadvantageous to one of the players, and is therefore impossible.

Real conflict situations that may be adequately modelled by two-person zero-sum games include some (but not all) military operations, sports matches and parlour games, as well as situations of bilateral decision making under strict competition. Games played against nature and, more generally, decision making under uncertainty (cf. Statistical game) may be regarded as two-person zero-sum games if it is assumed that the laws of nature, which are unknown to the player, will produce effects least favourable to the player.

The definition of a two-person zero-sum game in normal form (cf. Games, theory of) amounts to specifying the sets of strategies $A$ and $B$ of players I and II, respectively, and the pay-off function $H$ of player I, defined on the set $A\times B$ of all situations (the pay-off function of player II is $-H$ by definition). Formally, a two-person zero-sum game $\Gamma$ is given by a triplet $\Gamma=\langle A,B,H\rangle$. Play consists in the players choosing their strategies $a\in A$, $b\in B$, after which player I obtains the sum $H(a,b)$ from player II. Such a definition of a two-person zero-sum game is sufficiently general to include all variants of two-person zero-sum games, including dynamic games (cf. Dynamic game), differential games and positional games (cf. Positional game), provided that the sets of strategies and the pay-off function are properly described. A rational choice of actions (strategies) by the players in the course of a two-person zero-sum game is based on a minimax principle: If

$$\max_{a\in A}\inf_{b\in B}H(a,b)=\min_{b\in B}\sup_{a\in A}H(a,b),\tag{1}$$

or

$$\sup_{a\in A}\inf_{b\in B}H(a,b)=\inf_{b\in B}\sup_{a\in A}H(a,b),\tag{1'}$$

the game $\Gamma$ has optimal strategies (respectively, $\epsilon$-optimal strategies) for both players (cf. Strategy (in game theory)). The common value of the two sides of (1) (respectively, of (1')) is called the value of the game $\Gamma$. However, equality (1) or (1') may fail even in the simplest cases. For example, in a matrix game with pay-off matrix

$$\begin{Vmatrix}-1&\phantom{-}1\\\phantom{-}1&-1\end{Vmatrix}$$

the following equalities are valid:

$$\max_i\min_ja_{ij}=-1,\quad\min_j\max_ia_{ij}=1.$$
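The gap between these two values can be checked directly. The following sketch (in Python, an illustrative choice not made by the original article) computes the pure-strategy maximin and minimax for the matrix above:

```python
# Pure-strategy maximin and minimax for the 2x2 matrix game above.
# Player I (rows) maximizes, player II (columns) minimizes; the gap
# between the two values shows that the game has no saddle point in
# pure strategies.
A = [[-1, 1],
     [1, -1]]

rows = range(len(A))
cols = range(len(A[0]))

maximin = max(min(A[i][j] for j in cols) for i in rows)  # max_i min_j a_ij
minimax = min(max(A[i][j] for i in rows) for j in cols)  # min_j max_i a_ij

print(maximin, minimax)  # -1 1
```

Since the maximin $-1$ is strictly less than the minimax $1$, neither player has a pure strategy that is optimal against every choice of the opponent.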

For this reason the players' sets of strategies are extended to sets of mixed strategies, in which each player chooses among his initial ("pure") strategies at random, while the pay-off function is defined as the mathematical expectation of the pay-off when the mixed strategies are used. In the above example, the optimal mixed strategy of each player is to choose each of his two pure strategies with probability $1/2$, and the value of the game in mixed strategies is zero. If the sets $A$ and $B$ are finite, the two-person zero-sum game is called a matrix game; for a matrix game the value of the game and optimal mixed strategies of each player always exist. If both sets $A$ and $B$ are infinite, optimal (and even $\epsilon$-optimal) strategies can fail to exist (cf. Infinite game).
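Finding the value and an optimal mixed strategy of a matrix game reduces to a linear program. A minimal sketch for player I's problem in the matrix game above, assuming `scipy` is available (the variable names are illustrative): maximize the guaranteed pay-off $v$ subject to $\sum_i x_i a_{ij}\ge v$ for every column $j$, with $x$ a probability vector.

```python
# Solve the 2x2 matrix game above for player I's optimal mixed strategy
# by linear programming. Decision variables: x1, x2 (probabilities of
# the two pure strategies) and v (the guaranteed expected pay-off).
from scipy.optimize import linprog

A = [[-1, 1],
     [1, -1]]

c = [0, 0, -1]  # minimize -v, i.e. maximize v
# Constraint per column j: v - sum_i x_i * a_ij <= 0
A_ub = [[-A[0][j], -A[1][j], 1] for j in range(2)]
b_ub = [0, 0]
A_eq = [[1, 1, 0]]  # probabilities sum to one
b_eq = [1]
bounds = [(0, 1), (0, 1), (None, None)]  # v is unbounded in sign

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, v = res.x
print(x1, x2, v)  # optimal mixed strategy (1/2, 1/2), value 0
```

For this symmetric matrix the constraints force $v\le -|x_1-x_2|$, so the maximum $v=0$ is attained only at $x_1=x_2=1/2$, in agreement with the value in mixed strategies stated above.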

References

[1] S. Karlin, "Mathematical methods and theory in games, programming and economics" , Addison-Wesley (1959)
[2] T. Parthasarathy, T.E.S. Raghavan, "Some topics in two-person games" , Elsevier (1971)


Comments

References

[a1] N.H. Vorob'ev, "Game theory. Lectures for economists and system scientists" , Springer (1977) (Translated from Russian)
[a2] G. Owen, "Game theory" , Acad. Press (1982)
How to Cite This Entry:
Two-person zero-sum game. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Two-person_zero-sum_game&oldid=34414
This article was adapted from an original article by E.B. Yanovskaya (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article