Stochastic game

From Encyclopedia of Mathematics

Revision as of 08:23, 6 June 2020


A dynamic game in which the transition distribution function does not depend on the history of the game, i.e.

$$ F( x_{k} \mid x_{1}, s^{(x_{1})}, \dots, x_{k-1}, s^{(x_{k-1})}) = F( x_{k} \mid x_{k-1}, s^{(x_{k-1})}). $$

Stochastic games were first defined by L.S. Shapley [1], who studied two-person zero-sum stochastic games with real pay-off (Shapley games). In Shapley games, both the set $ X $ of states of the game and the sets of pure strategies of the players are finite, and at any step and for any choice of alternatives made by the players there is a non-zero probability of terminating the game. As a result of this condition, the game terminates with probability 1 in a finite number of steps, and the mathematical expectation of each player's pay-off is finite. Any such game has a value, and both players have stationary optimal strategies, i.e. strategies in which the player's choice of the elementary strategy at every stage of the game depends only on the current situation. Shapley also discovered a procedure by which it is possible to find both the value of the game and the optimal strategies.
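Shapley's procedure is, in modern terms, a value iteration: because of the positive stop probability, the operator that maps a value vector $v$ to the values of the auxiliary one-shot matrix games $A(x) + \sum_y p(y \mid x, i, j)\, v(y)$ is a contraction, and its fixed point is the value of the stochastic game. The sketch below is a hypothetical illustration, not code from this article: it restricts to $2 \times 2$ stage games (so the matrix-game value has a closed form), and the state space, pay-off matrices and transition probabilities are invented for the example.

```python
def matrix_game_value_2x2(M):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    maximin = max(min(row) for row in M)
    minimax = min(max(M[i][j] for i in range(2)) for j in range(2))
    if maximin == minimax:              # pure saddle point exists
        return maximin
    (a, b), (c, d) = M                  # otherwise the mixed value:
    return (a * d - b * c) / (a + d - b - c)

def shapley_value(A, P, tol=1e-9):
    """Shapley iteration v(x) <- val[ A(x)_{ij} + sum_y P(y|x,i,j) v(y) ].

    A[x]     -- 2x2 stage pay-off matrix in state x
    P[x][i][j] -- dict: next state y -> prob(y | x, i, j); the missing
                  mass is the (positive) stop probability, which makes
                  the iteration a contraction.
    """
    v = {x: 0.0 for x in A}
    while True:
        v_new = {}
        for x in A:
            M = [[A[x][i][j] + sum(p * v[y] for y, p in P[x][i][j].items())
                  for j in range(2)] for i in range(2)]
            v_new[x] = matrix_game_value_2x2(M)
        if max(abs(v_new[x] - v[x]) for x in A) < tol:
            return v_new
        v = v_new

# Invented one-state example: stage game [[3,1],[0,2]] (mixed value 1.5),
# stop probability 1/2 after every step, so v solves v = 1.5 + v/2.
A = {0: [[3.0, 1.0], [0.0, 2.0]]}
P = {0: [[{0: 0.5}, {0: 0.5}], [{0: 0.5}, {0: 0.5}]]}
print(shapley_value(A, P)[0])   # ≈ 3.0
```

The optimal stationary strategies can then be read off state by state as the optimal mixed strategies of the auxiliary matrix games at the fixed point.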

Stochastic games which differ from Shapley games in that they can be infinite have also been studied; they are called stochastic games with limiting mean pay-off, i.e. two-person zero-sum stochastic games with

$$ h_{1}(P) = -h_{2}(P) = \limsup_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} h_{1}(x_{k}, s^{(x_{k})}). $$

The existence of the value of such a game, and of stationary optimal strategies, has been proved under hypotheses of ergodicity of the Markov chain that arises when any stationary strategies are substituted into the transition functions $ F(x_{k} \mid x_{k-1}, s^{(x_{k-1})}) $. These results have been generalized to cases where the restrictions on the number of states and elementary strategies are removed, and to other forms of pay-off.
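The role of ergodicity can be made concrete: once both players fix stationary strategies, the state sequence is a Markov chain, and if that chain is ergodic the limiting mean pay-off equals the average of the stage pay-offs under the chain's stationary distribution. The sketch below is a hypothetical illustration; the two-state transition matrix `P` and stage pay-offs `h` are invented, standing for the chain and pay-offs induced by some fixed pair of stationary strategies.

```python
def stationary_distribution(P, iters=5000):
    """Stationary distribution of an ergodic finite chain, by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def limiting_mean_payoff(P, h):
    """Limiting mean pay-off to player 1: sum_x pi(x) h_1(x, s^(x))."""
    pi = stationary_distribution(P)
    return sum(pi[x] * h[x] for x in range(len(h)))

# Invented example: two states; pi = (1/3, 2/3), so the mean pay-off is
# (1/3)*3.0 + (2/3)*0.0 = 1.0.
P = [[0.5, 0.5], [0.25, 0.75]]   # chain induced by the stationary strategies
h = [3.0, 0.0]                   # stage pay-offs h_1(x, s^(x))
print(limiting_mean_payoff(P, h))   # ≈ 1.0
```

The ergodicity hypothesis is what makes this average independent of the initial state; without it, different starting states can give different limiting means.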

References

[1] L.S. Shapley, "Stochastic games" Proc. Nat. Acad. Sci. , 39 (1953) pp. 1095–1100
[2] D. Gillette, "Stochastic games with zero stop probabilities" , Contributions to the theory of games , 3 , Princeton Univ. Press (1957) pp. 179–187
[3] T. Parthasarathy, M. Stern, "Markov games: A survey" E. Roxin (ed.) P. Liers (ed.) R. Sternberg (ed.) , Differential Games and Control Theory , M. Dekker (1977) pp. 1–46
[4] K. Vrieze, "Zero-sum stochastic games" , Surveys in Game Theory and Related Topics , Tracts , 39 , CWI (1987) pp. 103–132
[5] V.K. Domanskii, Kibernetik , 1 (1988) pp. 26–49

Comments

In 1981, J.F. Mertens and A. Neyman proved the existence of the value for arbitrary stochastic games with limiting mean pay-off, [a2].

There has been a lot of research on the asymptotic theory of stochastic games, using, e.g., discounted pay-offs. See [a1]–[a3].

References

[a1] T. Bewley, E. Kohlberg, "The asymptotic theory of stochastic games" Math. Oper. Res. , 1 (1976) pp. 197–208
[a2] J.F. Mertens, A. Neyman, "Stochastic games have a value" Proc. Nat. Acad. Sci. , 79 (1982) pp. 2145–2146
[a3] T.E.S. Raghavan (ed.) T.S. Ferguson (ed.) T. Parthasarathy (ed.) O.J. Vrieze (ed.) , Stochastic games and related topics (in honour of L.S. Shapley) , Kluwer (1991)
How to Cite This Entry:
Stochastic game. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stochastic_game&oldid=48849
This article was adapted from an original article by V.K. Domanskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article