In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game.

Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defect by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game.

In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman", for a 1971 paper by James W. Friedman, which uses the concept.[1][2]

The infinitely repeated prisoners' dilemma

The infinitely repeated prisoners' dilemma is a well-known example of the grim trigger strategy. The normal-form stage game for the two prisoners is as follows:

                                          Prisoner B
                                          Stays Silent (Cooperate)   Betrays (Defect)
Prisoner A   Stays Silent (Cooperate)     1, 1                       -1, 2
             Betrays (Defect)             2, -1                      0, 0

In the prisoners' dilemma, each player has two choices in each stage:

  1. Cooperate
  2. Defect for an immediate gain

If a player defects, he will be punished for the remainder of the game. Both players are better off staying silent (cooperating) than betraying each other, so playing (C, C) is the cooperation profile, while playing (D, D), also the unique Nash equilibrium of the stage game, is the punishment profile.

In the grim trigger strategy, a player cooperates in the first round and in all subsequent rounds as long as his opponent does not defect from the agreement. Once the player finds that the opponent has betrayed in an earlier round, he will defect forever.

In order to evaluate whether grim trigger is a subgame perfect equilibrium (SPE) of this game, define the strategy S* for players i and j as follows:

  • Play C in every period unless someone has ever played D in the past
  • Play D forever if someone has played D in the past[3]
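The strategy S* above can be sketched as a function of the opponent's history. This is a minimal illustration, not part of the cited sources; the labels "C" and "D" simply stand for cooperate and defect:

```python
def grim_trigger(opponent_history):
    """Play C unless the opponent has ever played D; then play D forever."""
    return "D" if "D" in opponent_history else "C"

# The trigger fires at the first defection and never resets,
# even if the opponent returns to cooperation afterwards:
history = ["C", "C", "D", "C"]
plays = [grim_trigger(history[:t]) for t in range(len(history) + 1)]
print(plays)  # ['C', 'C', 'C', 'D', 'D']
```

Note that the player's move at each stage depends only on whether a defection has ever occurred, which is what makes the punishment permanent.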

Then, the strategy S* is an SPE only if the discount factor δ satisfies δ ≥ 1/2. In other words, neither Player 1 nor Player 2 is incentivized to defect from the cooperation profile if the discount factor is at least one half.[4]

To prove that the strategy is an SPE, cooperation must be the best response to the other player's cooperation, and defection must be the best response to the other player's defection.[3]

Step 1: Suppose that D has never been played so far.

  • Player i's payoff from C : 1 + δ + δ² + ⋯ = 1/(1 − δ)
  • Player i's payoff from D : 2 + 0·δ + 0·δ² + ⋯ = 2

Then, C is better than D if 1/(1 − δ) ≥ 2, that is, if δ ≥ 1/2.

Step 2: Suppose that someone has played D previously. Then Player j will play D no matter what.

  • Player i's payoff from C : −1 + 0·δ + 0·δ² + ⋯ = −1
  • Player i's payoff from D : 0 + 0·δ + 0·δ² + ⋯ = 0

Since 0 > −1, playing D is optimal.

The preceding argument emphasizes that there is no incentive to deviate (no profitable deviation) from the cooperation profile if δ ≥ 1/2, and this is true for every subgame. Therefore, the strategy S* for the infinitely repeated prisoners' dilemma game is a subgame perfect Nash equilibrium.
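The one-shot deviation comparison above can be checked numerically. The sketch below uses the payoffs from the table (mutual cooperation yields 1 per round; a deviator gets 2 once and then 0 forever under the punishment profile); the function names are illustrative, not from the cited sources:

```python
def cooperate_value(delta):
    """Discounted payoff from cooperating forever: 1 + δ + δ² + ⋯ = 1/(1 − δ)."""
    return 1 / (1 - delta)

def deviate_value(delta):
    """Deviate once (payoff 2), then receive 0 forever under (D, D)."""
    return 2

# Cooperation is sustainable exactly when δ ≥ 1/2:
for delta in (0.4, 0.5, 0.6):
    print(delta, cooperate_value(delta) >= deviate_value(delta))
# 0.4 False
# 0.5 True
# 0.6 True
```

At δ = 1/2 the two payoffs are exactly equal, which is why the threshold is one half for this particular payoff matrix.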

In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding signal errors makes it worse still. Although the threat of permanent defection is, in theory, an effective way to sustain trust, grim trigger's unforgiving nature and its inability to communicate this threat in advance undermine it in practice.[5]

Grim trigger in international relations

From the grim trigger perspective in international relations, a nation cooperates only if its partner has never exploited it in the past. Because a nation refuses to cooperate in all future periods once its partner defects even once, the indefinite removal of cooperation is the threat that makes grim trigger a limiting case among such strategies.[6]

Grim trigger in user-network interactions

Game theory has recently been used in developing future communications systems, and a user employing the grim trigger strategy in the user-network interaction game is one such example.[7] If grim trigger is used in the user-network interaction game, the user stays in the network (cooperates) as long as the network maintains a certain quality, but punishes the network by ending the interaction and leaving the network as soon as the user detects a defection.[8] Antoniou et al. explain that "given such a strategy, the network has a stronger incentive to keep the promise given for a certain quality, since it faces the threat of losing its customer forever."[7]

Comparison with other strategies

Tit for tat and grim trigger are similar in nature in that both are trigger strategies: a player refuses to defect first but punishes the opponent for defecting. The difference, however, is that grim trigger exacts maximal punishment for a single defection, while tit for tat is more forgiving, offering one punishment for each defection.[9]
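The contrast between the two strategies can be seen by playing both against the same opponent history. This is an illustrative sketch under the same "C"/"D" move labels as before:

```python
def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def grim(opponent_history):
    """Cooperate until the opponent's first defection, then defect forever."""
    return "D" if "D" in opponent_history else "C"

# Against an opponent who defects once and then returns to cooperation,
# tit for tat retaliates exactly once; grim trigger never forgives.
opponent = ["C", "D", "C", "C", "C"]
print([tit_for_tat(opponent[:t]) for t in range(len(opponent))])  # ['C', 'C', 'D', 'C', 'C']
print([grim(opponent[:t]) for t in range(len(opponent))])         # ['C', 'C', 'D', 'D', 'D']
```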

References

  1. ^ Friedman, James W. (1971). "A Non-cooperative Equilibrium for Supergames". Review of Economic Studies. 38 (1): 1–12. doi:10.2307/2296617. JSTOR 2296617.
  2. ^ The article on JSTOR
  3. ^ a b Acemoglu, Daron (November 2, 2009). "Repeated Games and Cooperation".
  4. ^ Levin, Jonathan (May 2006). "Repeated Games I: Perfect Monitoring" (PDF).
  5. ^ Axelrod, Robert (2000). "On Six Advances in Cooperation Theory" (PDF). Retrieved 2007-11-02. (page 13)
  6. ^ McGillivray, Fiona; Smith, Alastair (2000). "Trust and Cooperation Through Agent-specific Punishments". International Organization. 54 (4): 809–824. doi:10.1162/002081800551370. S2CID 22744046.
  7. ^ a b Antoniou, Josephina; Papadopoulou, Vicky (November 2009). "Cooperative user–network interactions in next generation communication networks". Computer Networks. 54 (13): 2239–2255. doi:10.1016/j.comnet.2010.03.013.
  8. ^ Antoniou, Josephina; Petros A, Ioannou (2016). Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios. CRC Press. ISBN 9781138199385.
  9. ^ Baurmann, Michael; Leist, Anton (May 2016). "On Six Advances in Cooperation Theory". Journal of Philosophy and Social Theory. 22 (1): 130–151.