Should be: "Iterated Prisoner's Dilemma Contains Strategies That Dominate Any Evolutionary Opponent"
You can write a perfect tic-tac-toe program with relatively simple rules, but an evolutionary strategy will wipe the floor with it at Go. This kind of modelling has value, but real life is incredibly complicated, and complicated strategies destroy simple ones as the rules of the game become more complex. Our brains are extremely expensive organs, and they're built that way for a reason. I think people are far too trigger-happy about extrapolating models like this to the real world.
I don't recall where or when I read it, but there is a strategy that usually beats tit-for-tat.
Tit for tat, but with a small (random) chance of forgiving (giving a break) anyway. I believe the reason that won is that it allowed for recovery in the face of misunderstandings, and as long as the chance wasn't too large, neither was the cost.
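That variant is usually called "generous tit-for-tat". A minimal Python sketch of the idea (the payoff values, noise model, and function names here are my own choices, not from any particular paper):

```python
import random

# Payoffs: (my move, their move) -> my score; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return history[-1] if history else "C"

def generous_tft(history, forgiveness=0.1):
    # Like tit-for-tat, but after a defection, forgive with a small probability.
    if history and history[-1] == "D" and random.random() > forgiveness:
        return "D"
    return "C"

def play(strat_a, strat_b, rounds=200, noise=0.05):
    """Iterated PD where each intended move is flipped with probability `noise`."""
    seen_by_a, seen_by_b = [], []  # each side sees the other's (possibly flipped) moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        seen_by_a.append(b)
        seen_by_b.append(a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
    return score_a, score_b
```

With noise > 0, two plain tit-for-tat players can end up echoing a single accidental defection back and forth for a long time; the generous variant eventually breaks the echo, which is the "recovery" mentioned above.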
I didn't read the article, but I assume the ultimatum consists of "I'll cooperate x% of the time if you cooperate 100% of the time. Otherwise I'll defect 100%".
Normally in game theory, such statements are not seen as "credible", i.e. you assume the other person is bluffing and you go on to defect 100% of the time.
A big reason for cooperating in the iterated prisoner's dilemma in nature is that the benefits from cooperating with relatives are huge.
And in a pool of cooperative agents, a "defect on the last turn" strategy, while theoretically better than always cooperating, is complicated for only a small payoff.
There is a great 2010 lecture from Robert Sapolsky at Stanford about behavioral evolution, the prisoner's dilemma, and the tit-for-tat strategy: https://youtu.be/Y0Oa4Lp5fLE
Covered in some detail in Robert Axelrod's "The Evolution of Cooperation" from 1984 [0], which is a book resulting from the original paper with W. D. Hamilton. Anatol Rapoport submitted "tit for tat" as a strategy in a computerised tournament of programs addressing the Prisoner's Dilemma, and it wiped the floor with the opposition. I don't recall the full details, but there was a second round with some restrictions; Rapoport simply submitted TfT again, and it came out well even with constraints on it.
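For anyone who wants to poke at the mechanics, here's a toy round-robin in the spirit of Axelrod's tournament. The strategy pool, payoff values, and round count are my own choices, not the originals:

```python
# Payoffs: (my move, their move) -> my score; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return history[-1] if history else "C"

def grudger(history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in history else "C"

def match(strat_a, strat_b, rounds=200):
    seen_by_a, seen_by_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        seen_by_a.append(b)
        seen_by_b.append(a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
    return score_a, score_b

def tournament(strategies, rounds=200):
    # Every strategy plays every other once; totals accumulate across pairings.
    totals = {s.__name__: 0 for s in strategies}
    for i, a in enumerate(strategies):
        for b in strategies[i + 1:]:
            sa, sb = match(a, b, rounds)
            totals[a.__name__] += sa
            totals[b.__name__] += sb
    return totals
```

Which strategy tops the table depends heavily on the pool: in a tiny pool like this one, always_defect can come out ahead simply by farming always_cooperate, whereas Axelrod's much larger and mostly "nice" pool rewarded tit-for-tat.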
It's simple and demonstrates that the Nash equilibria of a game can deviate from the strategies that produce the best payoff. I think most pop sci corollaries people try to draw from it are overreaches at best.
I would agree that the Prisoner's Dilemma is an exceptional case, except that it can be used deliberately.
Someone in a position of power may choose to set up a prisoner's dilemma - let's say, between you and your colleagues - to disadvantage you both, while giving an illusion of choice.
Reality is complex, so we use simplified models in theories. But then the results from the models don't generalize back to reality very well. I think the tradeoffs are well known, and I don't think there's any magical solution.
random32840 | 5 years ago
julienfr112 | 5 years ago
rstuart4133 | 5 years ago
throwaway55537 | 5 years ago
Forgiving: always give, even to those who don't cooperate
Tit-for-tat: cooperates by default but does not reward defectors/freeloaders. This is often the best strategy.
Interesting how this applies to software licenses.
Permissive licenses are clearly forgiving actors.
Copyleft/protective licenses are a gentler version of tit-for-tat.
mjevans | 5 years ago
thedudeabides5 | 5 years ago
Interesting; if true, this would be the first linkage between the Prisoner's Dilemma and the Ultimatum Game.
https://en.wikipedia.org/wiki/Ultimatum_game
im3w1l | 5 years ago
im3w1l | 5 years ago
scythe | 5 years ago
random32840 | 5 years ago
remolueoend | 5 years ago
danaliv | 5 years ago
*Wherein the player does whatever their opponent did in the previous round.
hazeii | 5 years ago
[0] https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation
rurban | 5 years ago
See e.g. https://www.nature.com/news/physicists-suggest-selfishness-c... which describes the political struggle that has been going on for the last 40 years. This was countered by, e.g., https://www.researchgate.net/publication/236189156_The_Evolu...
raverbashing | 5 years ago
In most real-world situations, the payoffs are very different from the PD ones.
Collaborate, and you may lose or win a little. Defect, and in most cases your payoff is 0.
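To make that concrete, here's a quick sketch (the "real world" numbers are made up to match the description above). With textbook PD payoffs, defecting is the best reply no matter what the other player does; with payoffs like those described, defection stops being dominant:

```python
def best_response(payoff, their_move):
    # payoff[(my_move, their_move)] -> my score; pick my best move given theirs.
    return max(("C", "D"), key=lambda my: payoff[(my, their_move)])

# Textbook Prisoner's Dilemma payoffs.
PD = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# "Collaborate and you may lose or win a little; defect and your payoff is ~0."
REAL = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 0, ("D", "D"): 0}
```

In PD, the best response is "D" against both "C" and "D" (defection dominates). In REAL, the best response to "C" is "C", so the game has two stable outcomes and looks more like a stag hunt, where mutual cooperation can persist.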
danharaj | 5 years ago
learnstats2 | 5 years ago
eloff | 5 years ago
viburnum | 5 years ago