Thursday 10 March 2011

Review - Collective Action and the Evolution of Social Norms

1. Ostrom E. Collective Action and the Evolution of Social Norms. Journal of Economic Perspectives. 2000;14(3):137-158. Available at: http://pubs.aeaweb.org/doi/abs/10.1257/jep.14.3.137.

Starts out by briefly explaining the "zero contribution thesis" and how it contradicts the evidence we see in real life. Other evidence has been gathered by running "public good experiments", which examine the willingness of players to overcome collective action problems. If everyone is a 'Rational Egoist', i.e. out to maximise their own payoff, it makes sense that they will never contribute to the public good: the best outcome for any individual is to contribute nothing while still receiving the good funded by everyone else's contributions. Of course, if everyone reaches that conclusion, no one contributes.
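To make that logic concrete, here is a minimal sketch of the standard linear public goods game (my own illustration, not code from the paper; the endowment, group size, and multiplier are arbitrary):

```python
# Standard linear public goods game: n players each get an endowment e and
# choose a contribution c_i; the pot is multiplied by m (with 1 < m < n)
# and shared equally. Player i's payoff is e - c_i + (m / n) * sum(c).

def payoff(contributions, i, endowment=10.0, multiplier=1.6):
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return endowment - contributions[i] + share

# Since m / n < 1, each unit contributed returns less than a unit to the
# contributor, so a rational egoist's dominant strategy is to contribute
# zero; yet everyone contributing fully (16.0 each) beats universal
# defection (10.0 each), which is the collective action problem.
contributions = [10.0, 10.0, 10.0, 0.0]   # three cooperators, one free-rider
print(payoff(contributions, 0))   # cooperator:  10 - 10 + 12 = 12.0
print(payoff(contributions, 3))   # free-rider:  10 -  0 + 12 = 22.0
```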

Over many runs of public good games, the following seven findings have emerged repeatedly:

  1. Subjects contribute between 40% and 60% of their resources, both in one-shot games and in the first round of finitely repeated games. 
  2. Contributions decay as the game continues, but stay well above zero.
  3. If subjects believe others are likely to cooperate, they are more likely to also cooperate.
  4. Learning the game better actually leads to more cooperation.
  5. Face-to-face communication increases cooperation. There is less cooperation when the communication is via computers. 
  6. Subjects will use personal resources to punish free-riders.
  7. The rate of contribution is affected by contextual factors. 
These findings can't be explained by the zero contribution thesis, where all actors are rational egoists. They can be explained by adding other types of actors (Ostrom calls them norm-users) to the model: "Conditional Cooperators" and "Willing Punishers". Conditional cooperators are willing to cooperate as long as they believe others will, and as long as enough people do they will continue to cooperate. They tend to trust others, but if they judge others to be free-riding they become disappointed and stop contributing, which in turn means other conditional cooperators also stop. Willing punishers will punish free-riders, verbally or, if available, via other means. If mechanisms for rewarding cooperators are available, they may use those instead.
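A toy simulation can show how these types produce the decay pattern in the findings above. This is my own sketch, not Ostrom's model; the mix of types, the belief-updating rule, and all parameter values are hypothetical:

```python
# Toy dynamics: 15 conditional cooperators ("cc") and 5 free-riders ("fr")
# play repeated public goods rounds. Cooperators contribute in proportion
# to how cooperative they believe the group is, and revise that belief
# toward the average they actually observe. All numbers here are made up.

ROUNDS, ENDOWMENT = 10, 10.0
types = ["cc"] * 15 + ["fr"] * 5

def play():
    belief = 0.6  # cooperators' initial belief in the group's cooperation rate
    for rnd in range(1, ROUNDS + 1):
        contribs = [ENDOWMENT * belief if t == "cc" else 0.0 for t in types]
        avg_rate = sum(contribs) / (len(types) * ENDOWMENT)
        # Disappointment at free-riding drags beliefs, and hence the next
        # round's contributions, downward; but the decay is gradual, so
        # contributions stay above zero for many rounds (findings 1 and 2).
        belief = 0.5 * belief + 0.5 * avg_rate
        print(f"round {rnd}: average contribution rate {avg_rate:.2f}")

play()
```

Adding willing punishers to a model like this would amount to giving free-riders an incentive to contribute, which props up the observed average and stops the conditional cooperators' beliefs from decaying.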

She goes on to discuss the evolutionary argument for the development of these types. It gets really interesting for me where she says that if you knew the personality type of the people you are playing with, you could predict the best strategy every time, and rational egoists would not be a favoured adaptation. However, if nothing is known about the other players, the best strategy is not to trust. So you need to know enough about the people you are dealing with to judge whether they are trustworthy enough to cooperate with. This would be more difficult online, and I would suggest that guilds are one way to find players you can trust to cooperate with.
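A quick expected-payoff sketch (my own worked example with hypothetical trust-game payoffs, not numbers from the paper) shows why information about the other player matters:

```python
# Trusting a cooperator pays R, trusting a defector pays S, and not
# trusting pays P regardless of the other player's type. These payoff
# values are hypothetical.
R, S, P = 15.0, 0.0, 10.0

def expected_trust_payoff(p_cooperator):
    return p_cooperator * R + (1 - p_cooperator) * S

# Trusting only beats not trusting once you believe the other player is a
# cooperator with probability above P / R (about 0.67 here), which is why
# knowing something about who you are playing with is so valuable.
for p in (0.3, 0.5, 0.8):
    print(p, expected_trust_payoff(p), "trust" if expected_trust_payoff(p) > P else "don't trust")
```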

There's a further section about institutional rules having a potentially damaging effect on the evolution of good social norms (which also help to dictate whether people will cooperate or not). There is a discussion of field studies on public goods and self-organised collective action, e.g. fisheries. Could that feed into game design at all?

Need to follow up on trust, I think, along with some more of this collective action stuff.
