Tuesday, 27 March 2012

Paper - Social Dilemmas

Dawes, R.M., 1980. Social dilemmas. Annual Review of Psychology, 31, pp.169-193. Available at: http://www.annualreviews.org/doi/abs/10.1146/annurev.ps.31.020180.001125 [Accessed March 27, 2012].

This review from 1980 gives an overview of social dilemma research for games with more than 2 players. A social dilemma is defined as a game where each player gets a bigger payout for defecting than for cooperating, no matter what the others do, yet everyone ends up with a higher payout if enough people cooperate than if everyone defects (which yields the minimum payment).

(The standard prisoner's dilemma is a 2-player, 1-shot game, which doesn't allow for retaliation or anonymity effects.)
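Dawes' two defining conditions can be sketched as a toy payoff function. The group size and payoff numbers below are made up, chosen only so that both conditions hold - they're not from the paper:

```python
# A minimal sketch of the two conditions Dawes gives for an N-person
# social dilemma. The numbers are illustrative, not from the paper.

N = 10        # group size (hypothetical)
BENEFIT = 10  # each cooperator adds this much to a pot shared by all N
COST = 7      # private bonus each defector keeps for themselves

def payoff(i_cooperate: bool, other_cooperators: int) -> float:
    """Payoff to one player, given how many of the other N-1 cooperate."""
    cooperators = other_cooperators + (1 if i_cooperate else 0)
    shared = cooperators * BENEFIT / N  # everyone gets a share of the pot
    return shared + (0 if i_cooperate else COST)

# Condition 1: whatever the others do, defecting pays more (COST > BENEFIT/N).
for k in range(N):
    assert payoff(False, k) > payoff(True, k)

# Condition 2: universal cooperation beats universal defection (BENEFIT > COST).
assert payoff(True, N - 1) > payoff(False, 0)
```

The dilemma is visible in the two inequalities: defection dominates for each individual, but if everyone follows that logic the group ends up worse off than if everyone had cooperated.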

Looks at involvement - inter-subject involvement. Where financial rewards were offered, the reviewed papers reported a high degree of affect among players. The papers that used an abstract points system didn't report affect at all, which Dawes reckoned probably indicated it was absent, given how noticeable it was to the other authors. I'm not too sure about that inference. But apparently there was much threatening and calling of 'fink'.

Communication (which I think kind of follows from involvement). The ability to communicate almost doubled the rate of cooperation. I think that reflects Ostrom's findings too - I need to re-read that. One paper broke communication down into conditions: non-game-related talk, game-related talk with choices kept secret, and declaring what you're going to choose. Apparently in the last condition everyone claimed they were going to cooperate, but not all of them did. Not hugely surprising.

Group size - Dawes has some trouble with the reviewed papers over group-size effects. Although they all agree that cooperation is more likely in smaller groups, he found flaws in how group sizes vs rewards had been balanced, and one paper even claimed to look at 1-person groups (huh?).

Public disclosure vs anonymity - publicly announcing your choice did apparently increase the level of cooperation.

Expectation of others' behaviour - defectors were more accurate at guessing the overall defection rate, but no better than cooperators at guessing exactly who would defect or cooperate. In fact, most people were rubbish at working that out.

Moralising - if a sermon about the value of cooperating was read to participants, they were more likely to cooperate. I like that. That brings the salience thing in again.

In the conclusion he says something really interesting about cooperation - it's not so much the rules of the game that get people to cooperate, but "altruisms, norms, and conscience". He reckons that anything that increases the salience of those cooperative norms should increase cooperation (e.g. communication, public disclosure, moralising...). Which is exactly what I'm trying to do, using social identity theory to identify the game rules that should make group behaviour more salient, then changing those things and watching the results.

Monday, 26 March 2012

Chapter notes - The evolution of cooperation

Axelrod, R., 2001. The evolution of cooperation. In M. E. Hellman, ed. Breakthrough: Emerging New Thinking. Walker Publishing Company, Inc, pp. 185-193. Available at: http://www-ee.stanford.edu/~hellman/Breakthrough/book/chapters/axelrod.pdf.

This chapter is a revision of a different publication, but it's nice and concise. It goes through the results from a tournament where people submitted programmed strategies for the iterated prisoner's dilemma game, and looks at what conditions favour cooperation. I think the bit that is most useful/relevant to me is a quote about the length of time it takes for cooperative strategies to evolve. I think that would relate to the way we use the rules to suggest cooperative strategies are best.

(This could also hark back to why the Name of the Game paper saw such an effect - if cooperate with tit for tat is such a good strategy, suggesting that with the title would lead to people 'getting it' faster. Obviously that's pure speculation though...)
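As a reminder of why tit for tat did so well in the tournament, here's a toy sketch of an iterated prisoner's dilemma match. This isn't Axelrod's actual tournament code, and the payoff values (T=5, R=3, P=1, S=0) are just the standard textbook matrix:

```python
# Toy iterated prisoner's dilemma: tit for tat against two fixed
# strategies. Payoff values are the standard T=5, R=3, P=1, S=0,
# not taken from this chapter.

PAYOFF = {  # (my move, their move) -> my score; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):   # cooperate first, then copy their last move
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=10):
    """Return the total scores of each strategy over one match."""
    hist_a, hist_b = [], []          # each side sees the *other's* history
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Tit for tat sustains mutual cooperation with a cooperator...
print(play(tit_for_tat, always_cooperate))  # -> (30, 30)
# ...but only gets exploited once by a pure defector before retaliating.
print(play(tit_for_tat, always_defect))     # -> (9, 14)
```

The point of the sketch is that tit for tat never beats its opponent in a single match; it accumulates a high score across many pairings because it elicits cooperation from anyone willing to cooperate.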

Paper - Repurposing an old game for an international world

Hofstede, G.J. & Tipton Murff, E.J., 2011. Repurposing an Old Game for an International World. Simulation & Gaming, 43(1), pp.34-50. Available at: http://sag.sagepub.com/cgi/doi/10.1177/1046878110388250 [Accessed March 16, 2012].

The authors used a simulation game called "So long sucker" within a course with an even split of Taiwanese and American students. This is a pretty qualitative look at what happened.

SO LONG SUCKER was apparently designed (by an American team) so that the only way to win is to form coalitions and then double-cross your partners. That is exactly how the American students played it. The Taiwanese students, however, played it very differently: slowly and deliberately, attempting to discuss every move and find the best outcome for the entire group. As the game makes that impossible, their games ended up taking a very long time.

Hofstede et al use the framework of cultural dimensions that Hofstede senior identified to see whether the measured values for the two countries could explain the different approaches to the same game by the two cultures. They found a good match: the two cultures differ on almost every dimension, in directions that appear to tally with the observed play.

For me, this is a crucial indication of not just the unwritten rules that come into effect when a game is played (particularly as one Taiwanese girl who had lived in the US for a while was able to switch to an American mode of playing - presumably using a different social identity) but also an example of a framework from the social sciences being used to explain the response to the game.

Paper - The name of the game

Liberman, V., Samuels, S.M. & Ross, L., 2004. The name of the game: predictive power of reputations versus situational labels in determining prisoner's dilemma game moves. Personality & Social Psychology Bulletin, 30(9), pp.1175-1185. Available at: http://www.ncbi.nlm.nih.gov/pubmed/15359020 [Accessed March 16, 2012].

The authors used a simple multi-round prisoner's dilemma game with two groups of players (Stanford undergraduates and Israeli fighter pilots). Each group was split in two and played the game by a different name. For one half the game was named in such a way as to suggest stock market/competitive conditions, and in the other to suggest community/cooperative conditions. (The Israeli experiment was conducted in Hebrew.)

In addition to this, each participant was rated as likely to cooperate or defect on the first round of the game by someone who knew them - at Stanford the resident assistants, for the pilots their instructors. The pilots were also asked to rate themselves. The raters were introduced to the rules of the game under one name and asked to rate, then asked whether their rating would change if the game were given the other name.

The results show really strongly that the ratings gave no prediction of how likely a player was to cooperate or defect, while the game name had a significant effect. In both populations, the community-named game produced around 30-40+ percentage points more first-round cooperation than the competitive one; tit-for-tat dynamics meant cooperation fell over later rounds in both conditions, but it remained higher in the cooperatively named game.

Given that the only thing to change was the name, that's amazing.

Apparently the external ratings didn't shift much with the name change either. Even when the name was made quite salient to the raters, their predictions only moved by around 15-20 percentage points. Interesting. So people underestimate the effect of the suggested norms on individuals.