Gibbons game theory for applied economist pdf




















This listing are populated with some of the most similar and applicable term comparable to your current title and put together into a compact listing to your comfort by our platform. Hopefully you could find something useful by offering you more possibilities. Millions discover their favorite reads on issuu every month.

Give your content the digital home it deserves. Place your e-signature to the PDF page. Click on Done to confirm the alterations.

Save the record or print out your PDF version. Submit immediately towards the receiver. Get form. Get Form. Format Related content. Related links form. Get This Form Now! If you believe that this page should be taken down, please follow our DMCA take down process here. Ensure the security of your data and transactions. S21 with the probabilities pzll Player 1 S expected payoff from the mixed strategy P11 given in 1 for every probability distribution P1 over S1, and Pi must satisfy 1.

Thus, for the mixed strategy 1 p11 ,.. To complement Figure 1. The results are summarized in probability on all other pure strategies is also a best response for Figure 1. If is the entire interval [0, 1]. It is worth emphasizing that such a mixed-strategy Nash equi- librium does not rely on any player flipping coins, rolling dice, or otherwise choosing a strategy at random.

Rather, we interpret player j's mixed strategy as a statement of player i's uncertainty Tails about player j's choice of a pure strategy. In baseball, for ex- 1 q ample, the pitcher might decide whether to throw a fastball or a Tails Heads curve based on how weil each pitch was thrown during pregame practice.

If the batter understands how the pitcher will make a choice but did not observe the pitcher' s practice, then the batter Figure 1. As a second example of a mixed-strategy Nash equilibrium, consider the Battle of the Sexes from Section l.

Let q, 1-q be the mixed strategy in which Pat plays Opera with probability q, Fight and let r, 1- r be the mixed strategy in which Chris plays Opera with probability r. Similarly, if Chris plays r, 1 - r then Pat's Figure 1. As shown in Figure 1. The other two inter- sections represent the pure-strategy Nash equilibria Fight, Fight and Opera, Opera described in Section l.

We tum next to a graphical argument that any such In any game, a Nash equilibrium involving pure or mixed game has a Nash equilibrium possibly involving mixed strate- strategies appears as an intersection of the players' best-response gies.

There even when some or all of the players have more than two pure are two important comparisons: x versus z, and y versus w. Based strategies. Recall from the previous shaped i. Flipping and rotating q suchthat Down is optimal for player 1, and m case u there 1s these four figures, as was done to produce Figure 1. Letting r, 1 - r denote Figures 1. In the latter figures, r' is defined anal- a mixed strategy for player 1, where r is the probability that 1 ogously to q' in Figure 1.

Checking all In cases iii and iv , neither Up nor Down is strictly domi- sixteen possible pairs of best-response correspondences is left as nated. Thus, Up must be optimal for some values of q and Down an exercise. Instead, we describe the qualitative features that can optimal for others. Then in result. In both cases, any value of r is optimal equilibria and a single mixed-strategy equilibrium. These best-response correspondences are given in Figure 1.

Case i Case iz case 3. The Prisoners' Dilemma is an example of case 1 ; it re- Figure 1. If the above ar- guments for two-by-two games are stated mathematically rather than graphically, then they can be generalized to apply to n-player r games with arbitrary finite strategy spaces. On the contrary, in Figure 1.

The relevant correspondence is the n-player best- response correspondence. The relevant fixed-point theorem is due j x to Kakutani , who generalized Brouwer's theorem to allow for well-behaved correspondences as weil as functions.

The n-player best-response correspondence is computed from the n individual players' best-response correspondences as fol- lows. Consider an arbitrary combination of mixed strategies x' 1 X p 1 , Foreach player i, derive i's best response s to the other players' mixed strategies pb Then con- struct the set of all possible combinations of one such best response for each player. Formally, derive each player's best-response Figure 1. A combination of mixed strategies pi, This completes step 1.

The role of priate generalizations of the limit from the left, the limit from the continuity in Brouwer's fixed-point theorem can be seen by mod- right, and all the values in between.

The reason for this is that, ifying f x in Figure 1. In Figure 1. Because each player's best-response corre- interval in between. If f x' in Figure 1. Nash's Theorem guarantees that an equilibrium exists in a broad dass of games, but none of the applications analyzed in 16 The value of f x' is indicated by the solid circle. The open circle indicates Section 1. The dotted line is included only to indicate has infinite strategy spaces.

Players 1 and 2 are bargaining over how to split one dollar. What are the pure-strategy Nash equilibria of this 1. On the assumptions underlying iterated elimination of strictly dominated strategies and Nash equilibrium, and on the inter- Section 1.

Suppose there are n firms in the Cournot oligopoly model. Brandenburger That which they choose, at a cost, prior to choosing prices. Following Cournot, suppose that the settlement can depend on the information content of the parties' firms choose their quantities simultaneously. What is the Nash offers, in both final-offer and conventional arbitration.

Finally, on equilibrium? What happens as n approaches infinity? Consider the following two finite versions of the Cournot Maskin No other quantities are feasible. What is agame in normal form? What is a strictly dominated ated. What is a pure-strategy Nash or a third quantity, q'. Find a value for q' such that the game is equilibrium in anormal-form game? A, in the sense that qc, qc is a unique Nash equilibrium and both firms are worse off 1. In the following normal-form game, what strategies survive in equilibrium than they could be if they cooperated, but neither iterated elimination of strictly dominated strategies?

What are the firm has a strictly dominated strategy. M 3,4 1,2 2,3 1. Suppose that the quantity that con- 1. Show L R that if the firms choose prices simultaneously, then the unique Nash equilibrium is that both firms charge the price c. Each of the candidates for a single office simultaneously chooses a cam- paign platform i.

Each of two firms has one job opening. Imagine that there are two workers, each of whom can apply to only one firm. Assume that any candtdates who choose the same platform equally split the votes cast for that platform, and that ties among the leading vote-getters are resolved Worker 2 by coin flips.

See Hotelling for an early model along these Apply to Apply to lines. Show that PropositionBin Appendix l. C holds for mixed- 1. What is a mixed strategy in a normal-form game? What is a as well as pure-strategy Nash equilibria: the strategies played with mixed-strategy Nash equilibrium in a normal-form game? Show that there are no mixed-strategy Nash equilibria in the three normal-form games analyzed in Section 1. Solve for the mixed-strategy Nash equilibria in the game in Aumann, R. Problem of Perfection.

Marktform und Gleichgewicht. Vi- Bayesian Rationality. Bertrand, J. Brandenburger, A. Cournot, A. Recherehes sur les Principes Mathematiques de la Theorie des Richesses. Edited by N. New York: Macmillan, Dasgupta, P. Farber, H. Friedman, J. Gibbons, R. Hardin, G. Harsanyi, J. Hotelling, H. Hume, D. A Treatise of Human Nature.

London: J. Kakutani, S. Kreps, D. Montgomery, J. Nash, J. We again restriet at- tention to games with complete information i.

In Sections 2. The central issue in all dynamic games is credibility. As an example of a noncredible threat, consider the following two-move game. Second, player 2 observes player 1's move and then chooses whether or not to explode a grenade that will kill both players. Thus, player 1 should peated games, and we analyze Friedman's model of col- pay player 2 nothing. The grenade game belongs to this dass, as do Stack- general dynamic game of complete information, whether with per- elberg's model of duopoly and Leontief's model of fect or imperfect information.

We define the extensive-form repre- wage and employment determination in a unionized firm. We sentation of agame and relate it to the normal-form representation define the backwards-induction outcome of such games and briefly introduced in Chapter 1.

We also define subgame-perfect Nash discuss its relation to Nash equilibrium deferring the main discus- equilibrium for general games. The main point of both this sec- sion of this relation until Section 2. We solve for this outcome tion and the chapter as a whole isthat a dynamic game of com- in the Stackelberg and Leontief models.

We also derive the analo- plete information may have many Nash equilibria, but some of gaus outcome in Rubinstein's bargaining model, although these may involve noncredible threats or promises.

The subgame- this game has a potentially infinite sequence of moves and so does perfect Nash equilibria are those that pass a credibility test. In Section 2. As will be explained in Section 2. We define the Subgame- 2. A Theory: Backwards Induction perfeet outcome of such games, which is the natural extension of The grenade game is a member of the following dass of simple backwards induction to these games. We solve for this outcome games of complete and perfect information: in Diamond and Dybvig's model of bank runs, in a model of tariffs and imperfect international competition, and in Lazear 1.

Player 1 chooses an action a1 from the feasible set A 1. Player 2 observes a1 and then chooses an action a2 from the of players plays a given game repeatedly, with the outcomes of all feasible set Az.

The theme of the analysisisthat credible threats and promises about future behavior can influence current behavior. We define subgame-perfect Nash equilibrium for repeated games and relate it to the backwards- induction and subgame-perfect outcomes defined in Sections 2. Some moves by player 1 could even end the game, 1 might wonder whether an opponent who threatens to explode a grenade is crazy.

We model such doubts as incomplete information-player 1 is w1thout player 2 getting a move; for such values of a1 , the set of feasible actions unsure about player 2's payoff function. See Chapter 3.

Az al contains only one element, so player 2 has no choice to make. In Other economic problems can be modeled by allowing for a Ionger Section 2.

A we will see that the verbal description in 1 - 3 sequence of actions, either by adding more players or by allowing is the extensive-form representation of the game. We will relate players to move more than once.

Rubinstein's bargaining game, the extensive- and normal-form representations, but we will find discussed in Section 2.

The key that for dynamic games the extensive-form representation is of- features of a dynamic game of complete and perfect information ten more convenient. B we will define Subgame- are that i the moves occur in sequence, ii all previous moves perfeet Nash equilibrium: a Nash equilibrium is subgame-perfect are observed before the next move is chosen, and iii the play- if it does not involve a noncredible threat, in a sense to be made ers' payoffs from each feasible combination of moves are common precise.

We will find that there may be multiple Nash equilib- knowledge. When player 2 gets the move at the second stage of the with the backwards-induction outcome.

This is an example of game, he or she will face the following problem, given the action the observation in Section l. C that some games have multiple a1 previously chosen by player 1: Nash equilibria but have one equilibrium that stands out as the compelling solution to the game. Consider the Assurne that for each a1 in A 1, player 2's optimization problern has following three-move game, in which player 1 moves twice: a unique solution, denoted by R 2 a 1.

This is player 2's reaction or best response to player 1's action. Since player 1 can solve 2's problern as weil as 2 can, player 1 should anticipate player 2' s 1. Player 1 chooses L or R, where L ends the game with payoffs reaction to each action a 1 that 1 might take, so 1's problern at the of 2 to player 1 and 0 to player 2.

Player 2 observes 1's choice. If 1 chose R then 2 chooses L' or R', where L' ends the game with payoffs of 1 to both Assurne that this optimization problern for player 1 also has a players. We will call aj,Rz ai the back- wards-induction outcome of this game.

The backwards-induction outcome does not involve noncredible threats: player 1 anticipates 3. Player 1 observes 2's choice and recalls his or her own choice that player 2 will respond optimally to any action a1 that 1 might in the first stage. If the earlier choices were Rand R' then 1 choose, by playing Rz at ; player 1 gives no credence to threats chooses L" or R", both of which end the game, L" with pay- by player 2 to respond in ways that will not be in 2' s self-interest offs of 3 to player 1 and 0 to player 2 and R" with analogaus when the second stage arrives.

Recall that in Chapter 1 we used the normal-form represen- tation to study static games of complete information, and we fo- All these words can be translated into the following succinct game cused on the notion of Nash equilibrium as a solution concept tree. This is the extensive-form representation of the game, to be for such games. The top payoff in the pair however, we have made no mention of either the normal-form of payoffs at the end of each branch of the game tree is player 1's, representation or Nash equilibrium.

Another 0 L' R' possibility is that it is common knowledge that player 2 is rational 1 but not that player 1 is rational: if 1 is rational but thinks that 2 thinks that 1 might not be rational, then 1 might choose R in the 1 L" R" 1 first stage, hoping that 2 will think that 1 is not rational and so play R' in the hope that 1 will play R" in the third stage.

Backwards induction assumes that 1's choice of R could be explained along 3 0 these lines. For some games, however, it may be more reasonable 0 2 to assume that 1 played R because 1 is indeed irrational. In such games, backwards induction loses much of its appeal as a predic- To compute the backwards-induction outcome of this game, tion of play, just as Nash equilibrium does in games where game we begin at the third stage i.

Here theory does not provide a unique solution and no convention will player 1 faces a choice between a payoff of 3 from L" and a payoff develop. Thus, at the second stage, player 2 anticipates that if the game reaches the third stage then 1 will play 2. B Stackelberg Model of Duopoly L", which would yield a payoff of 0 for player 2. The second-stage choice for player 2 therefore is between a payoff of 1 from L' and Stackelberg proposed a dynamic model of duopoly in which a payoff of 0 from R', so L' is optimal.

Thus, at the first stage, a dominant or leader firm moves first and a subordinate or player 1 anticipates that if the game reaches the second stage then follower firm moves second. At some points in the history of 2 will play L', which would yield a payoff of 1 for player 1. The the U. It is Straightforward to from Land a payoff of 1 from R, so L is optimal. Following Stackelberg, we will come of this game is for player 1 to choose L in the first stage, develop the model under the assumption that the firms choose thereby ending the game.

Even though backwards induction pre- quantities, as in the Cournotmodel where the firms' choices are dicts that the game will end in the first stage, an important part simultaneous, rather than sequential as here. We leave it as an of the argument concerns what would happen if the game did exercise to develop the analogaus sequential-move model in which not end in the first stage.

In the second stage, for example, when firms choose prices, as they do simultaneously in the Bertrand player 2 anticipates that if the game reaches the third stage then 1 model. This assumption may The timing of the game is as follows: 1 firm 1 chooses a seem inconsistent with the fact that 2 gets to move in the secorid quantity q1 2: 0; 2 firm 2 observes q1 and then chooses a quantity stage only if 1 deviates from the backwards-induction outcome of 3 Recall from the discussion of iterated elimination of strictly dominated strate- the game.

That is, it may seem that if 1 plays R in the first stage gies in Section l. B that it is common knowledge that the players are rational then 2 cannot assume in the second stage that 1 is rational, but if all the players are rational, and all the players know that all the players are this is not the case: if 1 plays R in the first stage then it cannot rational, and all the players know that all the players know that all the players be common knowledge that both players are rational, but there are rational, and so on, ad infinitum.

In the Stackelberg game, how- marginal cost of production fixed costs being zero. Thus, in the Stackelberg game, firm 1 could have achieved its R2 q1 solves Cournotprofit level but chose to do otherwise, so firm 1's profit in the Stackelberg game must exceed its profit in the Cournot game. The same equation for Rz ql appeared in the Cournotgame illustrates an important difference between in our analysis of the simultaneous-move Cournot game in Sec- single- and multi-person decision problems.

In single-person deci- tion 1. The difference is that here Rz ql is truly firm 2's reac- sion theory, having more information can never make the decision tion to firm 1's observed quantity, whereas in the Cournotanalysis maker worse off. In game theory, however, having more informa- R2 ql is firm 2's best response to a hypothesized quantity to be tion or, more precisely, having it known to the other players that simultaneously chosen by firm 1.

To see the effect this information has, consider stage of the game amounts to the modified sequential-move game in which firm 1 chooses q 1 , after which firm 2 chooses qz but does so without observing q 1. Thus, firm 2 should not believe that firm 1 has chosen its Stackelberg quantity. As noted in the previous section, however, sequential-move game.

Thus, "Stackelberg equilib- 4 Just as "Cournot equilibrium" and "Bertrand equilibrium" typically re- rium" can refer both to the sequential-move nature of the game and to the use fer to the Nash equilibria of the Coumot and Bertrand games, references to of a stronger solution concept than simply Nash equilibrium.

C Wages and Employment in a Unionized Firm In Leontief's model of the relationship between a firm and a monopoly union i. The union's utility function is U w, L , where w is the wage the union demands from the firm and L is employment. Assurne that U w, L increases in both wand L. Assurne that R L is increasing and concave. We can say Figure 2. Figure 2. The highest her action without knowledge of the others' choices. For further discussion of feasible profit level is attained by choosing L such that the isoprofit curve through this point, see Section 2.

This inefficiency L makes it puzzling that in practice firms seem to retain exclusive union's indifference curves control over employment. Allowing the firm and the union to bargain over the wage but leaving the firm with exclusive con- trol over employment yields a similar inefficiency. Espinosa and Figure 2. Rhee propose one answer to this puzzle, based on the fact that the union and the firm negotiate repeatedly over time of- ten every three years, in the United States.

See Sec- tion 2. Note the convention that 51 always goes to player Figure 2. In the infinite-horizon model we later consider, Players 1 and 2 are bargaining over one dollar.

They alternate the payoff 5 in the third period will represent player 1's payoff in in making offers: first player 1 makes a proposal that player 2 the game that remains if the third period is reached i.

We assume that each player will accept an 7The discouni factor 8 reflects the time-value of money. Player 2' s the be inning of one period can be put in the bank to earn mterest, say at ra. The value today of a future payoff in the next period is worth only 87r now, a payoff 7r to be rece1ved two peno s is called the present value of that payoff. Thus, if play reaches the third period, should it be reached, and then work backwards to second period, player 2 will offer s2 and player 1 will accept.

Player 1's first-period s, 1- s is exogenously imposed in the third period. Thus, in the backwards-induction because SH is the highest possible third-period payoff.

Parallel sr, 1 - si to player 2, who accepts. The timing is as de- player 1 can achieve in any backwards-induction outcome of the scribed previously, except that the exogenous settlement in step 3 game as a whole.

Because the game could go on infinitely, however, there is no last move at which to begin such an analysis. Fortunately, the following insight first applied 2. A Theory: Subgame Perfeetion both cases, player 1 makes the first offer, the players alternate in making subsequent offers, and the bargaining continues until one We now enrich the dass of games analyzed in the previous sec- player accepts an offer.

As in dynamic games of complete and perfect information, Since we have not formally defined a backwards-induction out- we continue to assume that play proceeds in a sequence of stages, come for this infinite-horizon bargairring game, our arguments with the moves in all previous stages observed before the next stage begins. Nonetheless, these games share players 3 and 4 will be given by a3 a1, a2 , a4 a 1, a2 , then the first- important features with the perfect-information games considered stage interaction between players 1 and 2 amounts to the following in the previous section.

Players 1 and 2 simultaneously choose actions a1 and a2 from mation: feasible sets A1 and A2, respectively. Players 1 and 2 simultaneously choose actions a1 and a2 from 2. Suppose ai, ai is the unique Nash equilibrium of this simultan- 2. Players 3 and 4 observe the outcome of the first stage, a1, a2 , eous-move game.

We will call ai,ai,a3 ai,ai ,a. This outcome is the natural analog of the backwards-induction outcome in games sible sets A3 and A4, respectively.

Play- ers 1 and 2 should not believe a threat by players 3 and 4 that the Many economic problems fit this description. S Three examples latter will respond with actions that are not a Nash equilibrium later discussed in detail are bank runs, tariffs and imperfect inter- in the remaining second-stage game, because when play actually national competition, and tournaments e.

Other want to carry out such a threat exactly because it is not a Nash economic problems can be modeled by allowing for a Ionger se- equilibrium of the gamethat remains at that stage. On the other quence of stages, either by adding players or by allowing players hand, suppose that player 1 is also player 3, and that player 1 does to move in more than one stage.

There could also be fewer play- not play ai in the first stage: player 4 may then want to reconsider ers: in some applications, players 3 and 4 are players 1 and 2; in the assumption that player 3 i. We solve a game from this dass by using an approach in the spirit of backwards induction, but this time the first step in work- 2.

B Bank Runs ing backwards from the end of the game involves solving a real game the simultaneous-move game between players 3 and 4 in Two investors have each deposited D with a bank.

The bank has stage two, given the outcome from stage one rather than solving invested these deposits in a long-term project. If the bank is forced a single-person optimization problern as in the previous section. A on There are two dates at which the investors can make with- drawals from the bank: date 1 is before the bank's investment 8 As in the previous section, the feasible action sets of players 3 and 4 in the matures; date 2 is after. In particular, there may be values of a 1 , az that end the game.

Finally, if neither investor withdraw r, r D, 2r- D makes a withdrawal at date 1 then the project matures and the investors make withdrawal decisions at date 2. If both investors don't 2r- D, D R,R make withdrawals at date 2 then each receives R and the game ends.

If only one investor makes a withdrawal at date 2 then that Figure 2. Finally, if neither investor makes a withdrawal at date 2 then the equilibria: 1 both investors withdraw, leading to a payoff of r, r ; bank returns R to each investor and the game ends. For now, however, we will proceed informally. Let the perfect outcomes and so does not quite fit within the dass of payoffs to the two investors at dates 1 and 2 as a function of their games defined in Section 2.

A : 1 both investors withdraw at withdrawal decisions at these dates be represented by the follow- date 1, yielding payoffs of r, r ; 2 both investors do not with- ing pair of normal-form games. Note well that the normal-form draw at date 1 but do withdraw at date 2, yielding payoffs of game for date 1 is nonstandard: if both investors choose not to R, R at date 2.

If investor 1 believes that investor 2 will withdraw at date 1 then investor 1's best response is to withdraw as well, withdraw don't even though both investors would be better off if they waited until withdraw r,r D,2r- D date 2 to withdraw. This bank-run game differs from the Prisoners' Dilemma discussed in Chapter 1 in an important respect: both don't 2r- D,D next stage games have a Nash equilibrium that Ieads to a socially inefficient Date 1 payoff; in the Prisoners' Dilemma this equilibrium is unique and in dominant strategies , whereas here there also exists a second equilibrium that is efficient.

Thus, this model does not predict withdraw don't when bankrunswill occur, but does show that they can occur as withdraw R,R 2R-D,D an equilibrium phenomenon. See Diamond and Dybvig for a richer model. C Tariffs and lmperfect International Competition We turn next to an application from international economics. Con- To analyze this game, we work backwards. Each country mal-form game at date 2.

Since there is no discounting, we foreign firm. The firm date 1, as in Figure 2. First, the governments simultaneously choose tariff rates, t 1 and t2. Third, pay- The results we derive are consistent with both of these assump- offs are profit to firm i and total welfare to government i, where tions. Both of the best-response functions 2.

If costs. In the equilibrium described by 2. Since firmj's cost is higher it wants to produce less. But if firm j is going to produce less, then the market-clearing price will be higher, so firm i wants to produce more, in which Since 1ri ti, tj, hi, ei, hj, ej can be written as the sum of firm i's prof- case firm j wants to produce even less.

Thus, in equilibrium, hj its on market i which is a function of hi and ej alone and firm i's increases in ti and e'! We now solve for the Nash equilibrium of this game between the govern- ments. If tj, t2 is a Nash equilibrium of this game be- home firm to produce zero for home consumption and to export tween the governments then, for each i, fi must solve the perfect-competition quantity to the other country.

Thus, given that firms i and j play the Nash equilibrium given in 2. First, the workers simultaneously for each i, independent of tj. Thus, in this model, choosing a choose nonnegative effort levels: ei ; 0.

In other models, such as when marginal costs are increasing, the mean. Third, the workers' outputs are observed but their effort governments' equilibrium strategies are not dominant strategies. The workers' wages therefore can depend on their outputs but not directly on their efforts.

Thus, the WH; the wage earned by the loser is WL. If the governments had chosen tariff rates We now translate this application into the terms of the dass equal to zero, however, then the aggregate quantity on each mar- of games discussed in Section 2.

The boss is player 1, whose ket. The workers are players 3 her, 1s stmply one-half the square of the aggregate quantity on and 4, who observe the wages chosen in the first stage and then market i is lower when the governments choose their dominant- simultaneously choose actions a3 and a4 , namely the effort choices strategy tariffs than it would be if they chose zero tariffs.

In fact, e1 and e2. Finally, the players' payoffs are as given earlier. Since outputs and so also wages are functions not only of the players actions but also so there is an incentive for the governments to sign a treaty in 10 which they commit to zero tariffs i. If negative To keep the exposition of this application simple, we ignore several technical details, such as conditions under which the worker's first-order condition is tariffs-that is, subsidies-are feasible, the social optimum is for sufficient.

The application can be skipped without l0ss of continuity. Suppose the boss has chosen the wages WH and WL. If e: is normally distributed with variance u 2 , for example, then 2. Sup- WH - wL, and the marginal increase in the probability of winning. More formally, we assume ipate in the tournament then she must choose wages that satisfy that the density f c: is atomless.

A and B will occur, respectively. At the optimum, 2. Substituting this into 2. We will call this repeated game the two-stage Pris- oners' Dilemma. It belongs to the dass of games analyzed in Sec- 2. Furthermore, the two-stage Prisoners' Dilemma a few 1deas require an infinite horizon.

We also define subgame- satisfies the assumption we made in Section 2. This definition is sible outcome of the first-stage game, a 1 , a 2 , the second-stage Simpler to express for the special case of repeated games than for gamethat remains between players 3 and 4 has a unique Nash the general dynamic games of complete information we consider equilibrium, denoted by a3 a1,a2 ,a4 a1,a2.

In fact, the two- in Section 2. We introduce it here so as to ease the exposition stage Prisoners' Dilemma satisfies this assumption in the following later. A we allowed for the possibility that the Nash equilibrium of the remaining second-stage game depends on the first-stage outcome--hence the notation a3 a1,a2 ,aJ a1,a2 2.

Suppose two players play this simultaneous-move game on the governments' tariff choices in the first stage. In the two- tw1ce, observmg the outcome of the first play before the second stage Prisoners' Dilemma, however, the unique equilibrium of the play begins, and suppose the payoff for the entire game is sim- second-stage game is Lt, L2 , regardless of the first-stage outcome.

A for comput- ing the subgame-perfect outcome of such a game, we analyze the first stage of the two-stage Prisoners' Dilemma by taking into ac- Player 2 count that the outcome of the game remaining in the second stage will be the Nash equilibrium of that remaining game--namely, L2 R2 L1 , L2 with payoff 1, 1.

Thus, the players' first-stage interac- L1 1,1 5,0 tion in the two-stage Prisoners' Dilemma amounts to the one-shot Player 1 game in Figure 2. The game in Figure 2. Thus, Figure 2. Cooperation-that is, R1, R2 --cannot be achieved in either stage of the subgame-perfect outcome. Here we temporarily 0,5 4,4 0,0 depart from the two-period case to allow for any finite number of repetitions, T.

A 1 through An, respectively, and payoffs are u1 a1, The payoffs for G T are simply the We will show that there is a subgame-perfect outcome of this re- sum of the payoffs from the T stage games. A, assume that m the fust stage the Proposition If the stage game G has a unique Nash equilibrium then, players anticipate that the second-stage outcome will be a Nash for any finite T, the repeated game G T has a unique subgame-perfect equilibrium of the stage game.

Since this stage game has more outcome: the Nash equilibrium of G is played in every stage. Suppose, sibility that the stage game G has multiple Nash equilibria, as in for example, that the players anticipate that R1, R2 will be the Figure 2. The strategies labeled L; and M; mirnie the Prisoners' second-stage outcome if the first-stage outcome is M1, M2 , but Dilemma from Figure 2.

The players' first-stage inter- equilibria: L1, L2 , as in the Prisoners' Dilemma, and now also R1, action then amounts to the one-shot game in Figure 2.

It is of course artificial to add an equilibrium to the Prisoners' 3, 3 has been added to the M1, M2 -cell and 1, 1 has been added Dilemma in this way, but our interest in this game is expositianal to the eight other cells. As in Figure 2. Thus, in this section we 14Strictly speaking, we have defined the notion of a subgame-perfect outcome only for the dass of games defined in Section 2.

The two-stage Prisoner' s 13 Analogous results hold if the stage game G is a dynamic game of complete Dilemma belongs to this dass because for each feasible outcome of the first- information. Suppose G is a dynamic game of complete and perfect information stage game there is a unique Nash equilibrium of the remaining second-stage from the dass defined in Section 2. If G has a unique backwards-induction game. Similarly, suppose G is a two- Nash equilibria. We will not formally extend the definition of a subgame-perfect stage game from the dass defined in Section 2.

If G has a unique subgame- outcome so that it applies to all two-stage repeated games, both because the perfect outcome, then G T has a unique subgame-perfect outcome: the Subgame- change in the definition is minuscule and because even more general definitions perfeet outcome of G is played in every stage. B and 2. Let w,x , y,z denote an? The Nash equilibrium L 1 , L 2 in Fig- eight other first-stage outcomes occurs.

But playing L 1 , L 2 in the ure 2. Lz following anything but M1 , M 2 in the rium of the remaining stage game. Loosely put, it would seem first stage. Llkew1se, the Nash equilibrium R 1 , R 2 in Figure 2. Thus, as claimed earlier, coop- the one-shot game in which the payoff 3, 3 has been added to erabon can be achieved in the first stage of a subgame-perfect each cell of the stage game in Figure 2. This is an example of a more gen- response to Mj. The ideas we develop here to of G.

We return to this idea in the infinite-horizon analysis in the address renegotiation in this artificial game can also be applied to next section. A second point, however, is that subgame-perfection 15 This is loose usage because "renegotiate" suggests that communication or may not embody a strong enough definition of credibility.

In de- even bargaining occurs between the first and second stages. A stronger result 1s! More importantly, there is no Nash equilibrium x, y librium, there may be subgame-perfect outcomes of the infinitely m F1gure 2.

We say that R 1, R 2 Pareto- of G. We then consider the dass of infinitely repeated games analo- garne in Figure 2. For these dasses of ftmtely and mfmltely outcorne will be as follows: RJ, R2 if the first-stage outcome is repeated games, we define a player's strateg.



0コメント

  • 1000 / 1000