
'''The Chicken game''' is also known as the Game of Chicken, Chicken, or Hawk-Dove. Alongside the Prisoner's Dilemma, the Stag Hunt, and the Battle of the Sexes, it is one of the best-known games analysed in game theory.

=History=

The Chicken story comes from the deadly teenage game of the 1950s, in which two teens (or groups of teens) drove their cars straight at each other to find out who would flinch first. The first to grab the wheel and swerve "lost" by showing that s/he lacked courage. Nevertheless, if one swerved and the other didn't, as in the upper right and lower left corners, the joint welfare of both parties was at its highest: the "hawk" could preen in his or her show of valor, while even the losing "dove" or "chicken" would still be alive, if embarrassed. The worst case, of course, was when nobody swerved and the cars crashed (lower right corner). If both swerved (upper left), the crash would not occur, but no one would be able to claim bravery, so that the "resource" of preening would go unexploited. Thus in Chicken as in Battle of the Sexes, there are two jointly maximizing results, but those results have unequal payoffs to the two players. The difference is that in Battle of the Sexes, the jointly maximizing solutions require both parties to follow a single strategy, even though one prefers it and the other does not. In Chicken, on the other hand, the parties must choose opposite strategies, with one deferring to the other to avoid the crash, while the other drives through and claims the reward [1].

=Nuclear stalemate=

Bertrand Russell saw in chicken a metaphor for the nuclear stalemate. His 1959 book, Common Sense and Nuclear Warfare, not only describes the game but offers mordant comments on those who play the geopolitical version of it. Incidentally, the game Russell describes is now considered the "canonical" chicken, at least in game theory, rather than the off-the-cliff version of the movie Rebel Without a Cause. [2]

‘Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls "brinkmanship." This is a policy adapted from a sport which, I am told, is practised by some youthful degenerates. This sport is called "Chicken!" It is played by choosing a long straight road with a white line down the middle and starting two very fast cars towards each other from opposite ends. Each car is expected to keep the wheels of one side on the white line. As they approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the white line before the other, the other, as he passes, shouts "Chicken!" and the one who has swerved becomes an object of contempt....’

‘As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked. But when the game is played by eminent statesmen, who risk not only their own lives but those of many hundreds of millions of human beings, it is thought on both sides that the statesmen on one side are displaying a high degree of wisdom and courage, and only the statesmen on the other side are reprehensible. This, of course, is absurd. Both are to blame for playing such an incredibly dangerous game. The game may be played without misfortune a few times, but sooner or later it will come to be felt that loss of face is more dreadful than nuclear annihilation. The moment will come when neither side can face the derisive cry of "Chicken!" from the other side. When that moment is come, the statesmen of both sides will plunge the world into destruction.’ [3]

=Principles=

Chicken readily translates into an abstract game. Strictly speaking, game theory's chicken dilemma occurs at the last possible moment of a game of highway chicken. Each driver has calculated his reaction time and his car's turning radius (which are assumed identical for both cars and both drivers); there comes a moment of truth in which each must decide whether or not to swerve. This decision is irrevocable and must be made in ignorance of the other driver's decision. There is no time for one driver's last-minute decision to influence the other driver's decision. In its simultaneous, life or death simplicity, chicken is one of the purest examples of von Neumann's concept of a game.

The way players rank outcomes in highway chicken is obvious. The worst thing that can happen is for both players not to swerve. Then – BAM!! – the coroner picks both out of a Corvette dashboard.

The best thing that can happen, the real point of the game, is to show your machismo by not swerving and letting the other driver swerve. You survive to gloat, and the other guy is "chicken."

Being chicken is the next to worst outcome, but still better than dying.

There is a cooperative outcome in chicken. It's not so bad if both players swerve. Both come out alive, and no one can call the other a chicken. The payoff table might look like the one below. The numbers are arbitrary points: a large negative payoff (-100) for the fatal crash, -1 for being the chicken, 0 for the tie when both swerve, and +1 for winning.

The outcome where you drive straight and the other driver swerves is an equilibrium point (and so is the mirror-image outcome where you swerve and he drives straight). What actually happens when this game is played? It's hard to say. Under Nash's theory, either of the two equilibrium points is an equally "rational" outcome. Each player is hoping for a different equilibrium point, and unfortunately the outcome may not be an equilibrium point at all. Each player can choose to drive straight – on grounds that it is consistent with a rational, Nash-equilibrium solution – and rationally crash. [2]

==Payoff matrix==

The game of chicken has two Nash equilibria (boldface, lower left and upper right cells). This is another case where the Nash theory leaves something to be desired. You don't want two solutions, any more than you want two heads. The equilibrium points are the cases where one player swerves and the other doesn't (lower left and upper right).[2]

{| class="wikitable"
| chicken/drive
| swerve (chicken)
| straight (drive)
|-
| swerve (chicken)
| tie
| '''lose, win'''
|-
| straight (drive)
| '''win, lose'''
| death
|}

{| class="wikitable"
| chicken/drive
| swerve (chicken)
| straight (drive)
|-
| swerve (chicken)
| 0, 0
| '''-1, +1'''
|-
| straight (drive)
| '''+1, -1'''
| -100, -100
|}
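
To make this concrete, here is a minimal Python sketch (an illustration, not something from the source) that takes the point values from the table above and checks every cell for the Nash property, i.e. that neither player can gain by deviating alone. Only the two asymmetric outcomes pass the test.

<syntaxhighlight lang="python">
# A minimal sketch: check every cell of the chicken payoff matrix above
# for the Nash property.
SWERVE, STRAIGHT = 0, 1
NAMES = ["swerve", "straight"]

# payoff[row][col] = (row player's points, column player's points)
payoff = [
    [(0, 0),   (-1, +1)],      # row player swerves
    [(+1, -1), (-100, -100)],  # row player drives straight
]

def is_nash(row, col):
    """True if neither player can gain by deviating on their own."""
    row_ok = all(payoff[row][col][0] >= payoff[r][col][0] for r in (SWERVE, STRAIGHT))
    col_ok = all(payoff[row][col][1] >= payoff[row][c][1] for c in (SWERVE, STRAIGHT))
    return row_ok and col_ok

for r in (SWERVE, STRAIGHT):
    for c in (SWERVE, STRAIGHT):
        if is_nash(r, c):
            print(f"Nash equilibrium: row {NAMES[r]}, column {NAMES[c]}")
# Only (swerve, straight) and (straight, swerve) are printed.
</syntaxhighlight>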

=Application=

* evolutionary game theory: evolutionarily stable strategy

==Evolutionary Game Theory==

There are two approaches to evolutionary game theory. The first approach derives from the work of Maynard Smith and Price and employs the concept of an <b>evolutionarily stable strategy</b> as the principal tool of analysis. The second approach constructs an explicit model of the process by which the frequencies of strategies change in the population and studies properties of the evolutionary dynamics within that model.[4]

The first approach can thus be thought of as providing a static conceptual analysis of evolutionary stability. “Static” because, although definitions of evolutionary stability are given, the definitions advanced do not typically refer to the underlying process by which behaviours (or strategies) change in the population. The second approach, in contrast, does not attempt to define a notion of evolutionary stability: once a model of the population dynamics has been specified, all of the standard stability concepts used in the analysis of dynamical systems can be brought to bear.

As an example of the first approach, consider the problem of the Hawk-Dove game, analyzed by Maynard Smith and Price in “The Logic of Animal Conflict.” In this game, two individuals compete for a resource of a fixed value V. (In biological contexts, the value V of the resource corresponds to an increase in the Darwinian fitness of the individual who obtains the resource; in a cultural context, the value V of the resource would need to be given an alternate interpretation more appropriate to the specific model at hand.) Each individual follows exactly one of two strategies described below:

<b>Hawk</b>: Initiate aggressive behaviour, not stopping until injured or until one's opponent backs down.
<b>Dove</b>: Retreat immediately if one's opponent initiates aggressive behaviour.

If we assume that (1) whenever two individuals both initiate aggressive behaviour, conflict eventually results and the two individuals are equally likely to be injured, (2) the cost of the conflict reduces individual fitness by some constant value C, (3) when a Hawk meets a Dove, the Dove immediately retreats and the Hawk obtains the resource, and (4) when two Doves meet the resource is shared equally between them, the fitness payoffs for the Hawk-Dove game can be summarized according to the following matrix:

{| class="wikitable"
|
| hawk
| dove
|-
| hawk
| 1/2(V-C)
| V
|-
| dove
| 0
| V/2
|}

(The payoffs listed in the matrix are those of a player using the strategy in the appropriate row, playing against someone using the strategy in the appropriate column. For example, if you play the strategy Hawk against an opponent who plays the strategy Dove, your payoff is V; if you play the strategy Dove against an opponent who plays the strategy Hawk, your payoff is 0.)
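
As a small illustration (the values V = 4 and C = 6 below are arbitrary, chosen only for the example), the matrix can be written as a payoff function of V and C, which also shows how the row-versus-column reading works:

<syntaxhighlight lang="python">
# Illustrative sketch: the Hawk-Dove matrix above as a function of V and C.
# payoff(row, col, V, C) is the row player's fitness change against the column strategy.

def payoff(row, col, V, C):
    if row == "hawk" and col == "hawk":
        return (V - C) / 2  # expected value of the escalated, 50/50 fight
    if row == "hawk" and col == "dove":
        return V            # the Dove retreats and the Hawk takes the whole resource
    if row == "dove" and col == "hawk":
        return 0            # the Dove retreats and gets nothing
    return V / 2            # two Doves share the resource

V, C = 4, 6  # example values chosen only for illustration
print(payoff("hawk", "dove", V, C))  # 4 -> the V in the hawk row, dove column
print(payoff("dove", "hawk", V, C))  # 0 -> the 0 in the dove row, hawk column
</syntaxhighlight>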

In order for a strategy to be evolutionarily stable, it must have the property that if almost every member of the population follows it, no mutant (that is, an individual who adopts a novel strategy) can successfully invade. This idea can be given a precise characterization as follows: Let ΔF(s1,s2) denote the change in fitness for an individual following strategy s1 against an opponent following strategy s2, and let F(s) denote the total fitness of an individual following strategy s; furthermore, suppose that each individual in the population has an initial fitness of F0. If σ is an evolutionarily stable strategy and μ a mutant attempting to invade the population, then

   F(σ) = F0 + (1−p)ΔF(σ,σ) + pΔF(σ,μ)
   F(μ) = F0 + (1−p)ΔF(μ,σ) + pΔF(μ,μ)

where p is the proportion of the population following the mutant strategy μ.

Since σ is evolutionarily stable, the fitness of an individual following σ must be greater than the fitness of an individual following μ (otherwise the mutant following μ would be able to invade), and so F(σ) > F(μ). Now, as p is very close to 0, this requires either that

   ΔF(σ,σ) > ΔF(μ,σ)

or that

   ΔF(σ,σ) = ΔF(μ,σ) and ΔF(σ,μ) > ΔF(μ,μ)

(This is the definition of an ESS that Maynard Smith and Price give.) In other words, a strategy σ is an ESS if one of two conditions holds: (1) σ does better playing against σ than any mutant does playing against σ, or (2) some mutant does just as well playing against σ as σ does, but σ does better playing against the mutant than the mutant does against itself.
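
These two conditions can be checked directly for the Hawk-Dove payoffs. The following sketch, with illustrative values of V and C, applies them to Hawk and Dove and reproduces the conclusions discussed in the next paragraph:

<syntaxhighlight lang="python">
# Sketch: apply the two ESS conditions above to Hawk-Dove for example values of V and C.

def hawk_dove(V, C):
    """Fitness change dF[(s1, s2)] for strategy s1 against strategy s2."""
    return {
        ("hawk", "hawk"): (V - C) / 2,
        ("hawk", "dove"): V,
        ("dove", "hawk"): 0,
        ("dove", "dove"): V / 2,
    }

def is_ess(sigma, mu, dF):
    """Maynard Smith and Price's test: can sigma resist invasion by mutant mu?"""
    strictly_better = dF[(sigma, sigma)] > dF[(mu, sigma)]
    tie_but_better_vs_mutant = (dF[(sigma, sigma)] == dF[(mu, sigma)]
                                and dF[(sigma, mu)] > dF[(mu, mu)])
    return strictly_better or tie_but_better_vs_mutant

for V, C in [(6, 4), (4, 6)]:  # resource worth more, then less, than the cost of injury
    dF = hawk_dove(V, C)
    print(f"V={V}, C={C}: Hawk ESS? {is_ess('hawk', 'dove', dF)}, "
          f"Dove ESS? {is_ess('dove', 'hawk', dF)}")
# V=6, C=4: Hawk ESS? True, Dove ESS? False
# V=4, C=6: Hawk ESS? False, Dove ESS? False  (no pure ESS when V < C)
</syntaxhighlight>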

Given this characterization of an evolutionarily stable strategy, one can readily confirm that, for the Hawk-Dove game, the strategy Dove is not evolutionarily stable because a pure population of Doves can be invaded by a Hawk mutant. If the value V of the resource is greater than the cost C of injury (so that it is worth risking injury in order to obtain the resource), then the strategy Hawk is evolutionarily stable. In the case where the value of the resource is less than the cost of injury, there is no evolutionarily stable strategy if individuals are restricted to following pure strategies, although there is an evolutionarily stable strategy if players may use mixed strategies.
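
A standard further result, not derived here, is that when V < C the evolutionarily stable mixed strategy plays Hawk with probability V/C. The sketch below is a rough illustration of the second, dynamic approach mentioned earlier: a discrete replicator update on the fraction of Hawks in the population, with an arbitrary baseline fitness standing in for F0, settles near V/C when V < C and near an all-Hawk population when V > C.

<syntaxhighlight lang="python">
# Rough sketch of the dynamic ("second") approach for Hawk-Dove: a discrete
# replicator update on the Hawk fraction x, with an arbitrary baseline fitness
# standing in for F0. The numbers are illustrative, not from the source.

def hawk_share(V, C, x=0.1, baseline=10.0, steps=2000):
    for _ in range(steps):
        f_hawk = baseline + x * (V - C) / 2 + (1 - x) * V  # expected Hawk fitness
        f_dove = baseline + (1 - x) * V / 2                # Dove gets 0 vs Hawk, V/2 vs Dove
        f_avg = x * f_hawk + (1 - x) * f_dove
        x = x * f_hawk / f_avg                             # replicator update
    return x

print(round(hawk_share(V=4, C=6), 3))  # ~0.667, i.e. V/C: the mixed equilibrium when V < C
print(round(hawk_share(V=6, C=4), 3))  # ~1.0: the population goes to all Hawks
</syntaxhighlight>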

In the years following the original work of Maynard Smith and Price, alternate analytic solution concepts have been proposed. Of these, two important ones are the idea of an evolutionarily stable set (see Thomas 1984, 1985a,b), and the idea of a “limit ESS” (see Selten 1983, 1988). The former provides a setwise generalization of the concept of an evolutionarily stable strategy, and the latter extends the concept of an evolutionarily stable strategy to the context of two-player extensive form games.[4]

=Variations=

* As you speed toward possible doom, you are informed that the approaching driver is your long-lost identical twin. Neither of you suspected the other's existence, but you are quickly briefed that both of you dress alike, are Cubs fans, and have a rottweiler named Max. And hey, look at the car coming toward you – another 1957 firecracker-red convertible. Evidently, the twin thinks exactly the way you do. Does this change things?
* This time you are a perfectly logical being (whatever that is) and so is the other driver. There is only one "logical" thing to do in a chicken dilemma. Neither of you is capable of being mistaken about what to do.
* There is no other driver; it's a big mirror placed across the highway. If you don't swerve, you smash into the mirror and die.

All these cases stack the deck in favor of swerving. Provided the other driver is almost certain to do whatever you do, that's the better strategy. Of course, there is no such guarantee in general.

Strangely enough, an irrational player has the upper hand in chicken. Take these variations:

* The other driver is suicidal and wants to die.
* The other driver is a remote-controlled dummy whose choice is made randomly. There is a 50 percent chance he will swerve and a 50 percent chance he will drive straight.

The suicidal driver evidently can be counted on to drive straight (the possibly fatal strategy). You'd rationally have to swerve. The random driver illustrates another difference between chicken and the prisoner's dilemma. With an opponent you can't second-guess, you might be inclined to play it safe and swerve. Of the two strategies in chicken, swerving (cooperation) has the maximum minimum. In the prisoner's dilemma, defection is safer. [2]
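
The "play it safe" point can be made concrete. The sketch below computes each strategy's worst-case payoff in chicken, using the matrix above, and in a prisoner's dilemma with the conventional illustrative payoffs 3, 0, 5 and 1: swerving has the best worst case in chicken, while defecting has it in the prisoner's dilemma.

<syntaxhighlight lang="python">
# Sketch: worst-case (maximin) payoffs in chicken versus the prisoner's dilemma.
# Chicken entries are the row player's points from the matrix above; the
# prisoner's dilemma values 3, 0, 5, 1 are conventional illustrative numbers.

chicken = {
    "swerve":   {"swerve": 0, "straight": -1},
    "straight": {"swerve": 1, "straight": -100},
}
prisoners_dilemma = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def maximin(game):
    """Return the strategy with the largest worst-case payoff, plus all worst cases."""
    worst = {s: min(row.values()) for s, row in game.items()}
    return max(worst, key=worst.get), worst

print(maximin(chicken))            # ('swerve', {'swerve': -1, 'straight': -100})
print(maximin(prisoners_dilemma))  # ('defect', {'cooperate': 0, 'defect': 1})
</syntaxhighlight>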

=Chicken and Prisoner=

Mutual defection (the crash when both players drive straight) is the most feared outcome in chicken. In the prisoner's dilemma, cooperation while the other player defects (being the sucker) is the worst outcome.

The players of a prisoner's dilemma are better off defecting, no matter what the other does. One is inclined to view the other player's decision as a given (possibly the other prisoner has already spilled his guts, and the police are withholding this information). Then the question becomes, why not take the course that is guaranteed to produce the higher payoff?

This train of thought is less compelling in chicken. The player of chicken has a big stake in guessing what the other player is going to do. A curious feature of chicken is that both players want to do the opposite of whatever the other is going to do. If you knew with certainty that your opponent was going to swerve, you would want to drive straight. And if you knew he was going to drive straight, you would want to swerve – better chicken than dead. When both players want to be contrary, how do you decide? [2]
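
The same contrast can be checked mechanically: in the prisoner's dilemma the best response is to defect whatever the opponent does, while in chicken the best response is always the opposite of the opponent's move, so there is no dominant strategy. The prisoner's dilemma payoffs in the sketch below are again only illustrative.

<syntaxhighlight lang="python">
# Sketch: best responses in chicken versus the prisoner's dilemma.
# Payoffs are the row player's; the prisoner's dilemma numbers are illustrative.

chicken = {
    "swerve":   {"swerve": 0, "straight": -1},
    "straight": {"swerve": 1, "straight": -100},
}
prisoners_dilemma = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def best_response(game, opponents_move):
    return max(game, key=lambda mine: game[mine][opponents_move])

for name, game in [("chicken", chicken), ("prisoner's dilemma", prisoners_dilemma)]:
    responses = {opp: best_response(game, opp) for opp in game}
    has_dominant = len(set(responses.values())) == 1
    print(name, responses, "dominant strategy" if has_dominant else "no dominant strategy")
# chicken: the best response flips with the opponent's move (no dominant strategy);
# prisoner's dilemma: defect is the best response to either move (dominant strategy).
</syntaxhighlight>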

=References=

1. ROSE, Carol M. "Game Stories" (2010). Faculty Scholarship Series. Paper 1728.
2. POUNDSTONE, William. Prisoner's Dilemma. New York: Anchor Books, 1992. ISBN 0-385-41580-X.
3. RUSSELL, Bertrand. Common Sense and Nuclear Warfare. New York: Routledge, 2001. ISBN 0-415-24994-5.
4. ALEXANDER, J. McKenzie. "Evolutionary Game Theory". The Stanford Encyclopedia of Philosophy (Fall 2009 Edition), Edward N. Zalta (ed.). URL = <http://plato.stanford.edu/archives/fall2009/entries/game-evolutionary/>