Aug 16, 2012
 

The game theory book The Art of Strategy describes a great Prisoner’s Dilemma experiment. The experimenters rigged the game: they told both players that they would be deciding simultaneously, but in fact they let Player 1 decide first, then secretly told Player 2 what Player 1 had chosen, letting her take that information into account when making her own choice.

The results: when the second player was told that the first player had defected, only 3% cooperated (apparently 3% of people are Jesus). When told that the first player had cooperated… only 16% cooperated. When the same researchers in the same lab told the second player nothing, 37% cooperated.

For those unfamiliar with the Prisoner’s Dilemma, this YouTube video from a British game show will give you an emotional understanding of it in under two minutes (you can keep watching for the awesome twist).
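As a refresher on why the game is a dilemma at all: whatever the other player does, you personally score more by defecting, which is exactly why a commenter below says that cooperating once you know the other player’s move “costs” you. Here’s a minimal sketch using the usual textbook payoffs; the post doesn’t give the experiment’s actual stakes, so these numbers are illustrative only.

```python
# Standard one-shot Prisoner's Dilemma payoffs, T=5 > R=3 > P=1 > S=0.
# These are the usual textbook numbers, not the experiment's actual stakes.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # reward: mutual cooperation
    ("C", "D"): 0,  # sucker's payoff: I cooperate, they defect
    ("D", "C"): 5,  # temptation: I defect, they cooperate
    ("D", "D"): 1,  # punishment: mutual defection
}

# Once you know the other player's move, defecting strictly dominates:
for their_move in ("C", "D"):
    print(f"they played {their_move}: "
          f"cooperate pays {PAYOFF[('C', their_move)]}, "
          f"defect pays {PAYOFF[('D', their_move)]}")
# they played C: cooperate pays 3, defect pays 5
# they played D: cooperate pays 0, defect pays 1
```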

 

The experiment description above is from this Less Wrong post. From the comments:

More surprising [IMO] is the fact that 16% co-operate when they know that it costs them to do so. I have no idea what that 16% were thinking.

In Reply:

This brings me back to the issue of self-identification.

Would you cooperate against yourself?

I realize there are Pansy Parkinsons out there; I try not to know any. If we achieve an em-future, it’ll be important for self-copying entities to have a value set that puts a strong emphasis on cooperating with oneself. The real question (as always) is what counts as me when the simple convention of a continuous physical body is no longer sufficient. As I’ve written before, I adopt a wide enough view that I suspect there may be some people alive right now who count as “close enough”.

Let’s assume I would cooperate against myself. If the only piece of information I have about this other person is that they just cooperated, then I know they cooperated with me. I would also cooperate with me. On that axis, they are identical to myself. Since I cooperate with myself, I will cooperate with them. (Added bonus: this strengthens the meta-individual.)

If I know more about them, I can better approximate who they are, and that might change my decision, especially if I expect our groups to come into conflict in the near future. But a simple “if they cooperate, I defect” strategy is self-destructive: a world in which everyone ran your decision algorithm would be a world in which no player ever cooperated.
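To make that last claim concrete, here’s a minimal sketch of the sequential version of the game, again with the textbook payoffs (my assumption). The names are mine: mirror encodes the rule argued for above (cooperate with whoever cooperated with you), and exploit encodes the commenter’s “if they cooperate, I defect” (defecting against defectors too, which is what 97% did). If the first mover can anticipate which policy everyone runs, only the mirror world sustains cooperation.

```python
PAYOFF = {  # (my move, their move) -> my payoff; textbook numbers again
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def mirror(first_move):
    """Cooperate with cooperators, defect against defectors."""
    return first_move

def exploit(first_move):
    """'If they cooperate, I defect' -- and defect against defectors too."""
    return "D"

def play(second_mover_policy):
    # The first mover looks ahead and picks whichever move pays best,
    # given that everyone runs the same, known second-mover policy.
    first = max("CD", key=lambda m: PAYOFF[(m, second_mover_policy(m))])
    second = second_mover_policy(first)
    return first, second

print(play(mirror))   # ('C', 'C'): cooperation is stable
print(play(exploit))  # ('D', 'D'): no player ever cooperates
```

Note that in the exploit world, the “defect against cooperators” branch never even gets to fire: the first mover’s best response to a known exploiter is to defect, so the whole world settles at mutual defection.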
