COOPERATION

The Iterated Prisoner's Dilemma

This is where Robert Axelrod enters the story. He considered a possibility others had suggested: what if, instead of a single chance to cooperate or defect, you and an ally had numerous opportunities on an ongoing basis? Would that affect your choice on any single interaction?

Suppose you arrange to sell a fence some stolen goods you regularly receive. To protect you both, it is agreed that while you leave the goods in one place, he will leave the payment in another. At each exchange, each of you must decide whether to cooperate by leaving the goods or the money, or to defect by picking up the goods or the money without leaving anything of your own. Furthermore, each of you knows that this arrangement will continue until some unspecified time; neither of you knows when, or if, the exchanges will cease. Assume that the payoff values remain the same as in the basic Prisoner's Dilemma. Does your strategy of defection in the earlier one-shot Prisoner's Dilemma change in an environment of repeated interactions with the same individual?

In 1979, Axelrod devised an ingenious way of testing this possibility (known as the "iterated" Prisoner's Dilemma). He contacted a number of people in various fields--mathematicians, experts in conflict resolution, philosophers--explained the payoffs, and asked each of them to come up with a strategy for a player in an iterated Prisoner's Dilemma tournament that could be encoded in a computer program. No limitation was placed on strategy length. One strategy might be as simple as "always defect." Others might take into account their memory of what the other player had done on previous turns. "Always cooperate, but with a random ten percent chance of defecting on each encounter" would be still another strategy, and so on. Axelrod collected 13 such strategies and encoded each of them as a computer program.
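The game described above can be sketched in a few lines of code. The text refers to "the basic Prisoner's Dilemma" payoffs without listing them here, so the specific values below (temptation 5, reward 3, punishment 1, sucker's payoff 0) are an assumption, taken from the payoff table commonly associated with Axelrod's tournament; the function name is likewise illustrative.

```python
# One round of the Prisoner's Dilemma.
# Payoff values are assumed (the standard Axelrod tournament values):
#   T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker's payoff).
PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),  # mutual cooperation: both get the reward R
    ("C", "D"): (0, 5),  # I cooperate, they defect: S for me, T for them
    ("D", "C"): (5, 0),  # I defect, they cooperate: T for me, S for them
    ("D", "D"): (1, 1),  # mutual defection: both get the punishment P
}

def play_round(my_move, their_move):
    """Return the pair of payoffs for a single exchange."""
    return PAYOFFS[(my_move, their_move)]
```

Note that on any single round, defecting always pays at least as well as cooperating, whatever the other player does; the interest of the iterated game is whether repetition changes that calculus.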
(He also added one strategy of his own, which randomly chose cooperation or defection on each turn.) He then pitted each of the 14 strategies against every other strategy over 200 iterations, to determine whether any one strategy would do well against all the others (as measured by its average payoff). The winner was the shortest strategy submitted: four lines of BASIC code from Anatol Rapoport, a professor of psychology and philosophy at the University of Toronto. In its entirety, it consisted of the following: cooperate on the first turn, then on every subsequent turn do whatever the other player did on its previous turn. This strategy was dubbed "Tit for Tat."
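The rule just stated, together with the round-robin match structure, can be sketched as follows. This is not Rapoport's original BASIC program, only a reconstruction of the same rule; the payoff values are assumed to be the standard tournament values (T=5, R=3, P=1, S=0), and the function names are my own.

```python
PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff); assumed values
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first turn; thereafter echo the opponent's last move."""
    if not their_history:
        return "C"
    return their_history[-1]

def always_defect(my_history, their_history):
    """The simplest rival strategy mentioned in the text."""
    return "D"

def match(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other, as in Axelrod's tournament,
    and return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Against "always defect," Tit for Tat is exploited only on the first round and defects thereafter; against another Tit for Tat (or any cooperator), it cooperates on every round.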