Thursday 24 March 2011

JUnit 4 (in Eclipse)

I'm sure this will be second nature in no time at all, but there are some fairly major differences between the way JUnit worked in version 3 and the way it works in version 4, so I thought I'd write down what I've done and what I need to look into doing in the future.

In JUnit 4 you no longer have to extend any classes; the tests are identified by annotations. You do still need a handful of imports to get the annotations and assertions right though, including static imports for the assertion methods.

So, I create my test classes in a test package that mirrors the name of the package under test, with 'test' added to the end. I keep my test code in a second source folder, called 'test' (to do this in Eclipse, right-click on the project, go to Properties, Java Build Path, Source, Add Folder). My test classes are named after the class they are testing, again with the word 'Test' added to the end. This lets me mirror the structure of the main code in the tests, so I can find things more easily.
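
To make that concrete, here's roughly what one of my test classes looks like. The package names, the Player class and its getName method are just made-up stand-ins for illustration, not real classes from my project:

    // Test class lives in a package mirroring the one under test,
    // with 'test' appended, in the separate 'test' source folder.
    package com.mygame.modeltest;

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    import com.mygame.model.Player;

    public class PlayerTest {

        // No superclass needed - the @Test annotation marks this as a test.
        @Test
        public void nameIsStored() {
            Player player = new Player("Alice");
            assertEquals("Alice", player.getName());
        }
    }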

If you are writing the tests after the class, Eclipse's test-case wizard lets you choose which of the class's methods you want to auto-generate test stubs for. If you are writing the tests first, obviously this isn't available! I do a bit of both, depending on how good I'm feeling.

I haven't really been using anything other than that. To run the tests, I right-click on the test class in Eclipse and choose Run As -> JUnit Test. Eclipse then gives me a lovely interface that shows either a green bar if everything passes, or a red bar and a list of failures. Clicking on a failure takes me to the assertion that failed, so I can identify the exact test that broke. I haven't got into test suites yet, or into using the @Before or @After annotations.
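
For my own future reference, this is roughly how @Before and @After would look if I did use them: the @Before method runs before every test, and the @After method after every test (Player is the same made-up example class as above):

    import static org.junit.Assert.assertEquals;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    import com.mygame.model.Player;

    public class PlayerFixtureTest {

        private Player player;

        // Runs before each @Test method, so every test gets a fresh Player.
        @Before
        public void setUp() {
            player = new Player("Alice");
        }

        // Runs after each @Test method - handy for releasing resources.
        @After
        public void tearDown() {
            player = null;
        }

        @Test
        public void nameIsStored() {
            assertEquals("Alice", player.getName());
        }
    }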

One thing I do need to look into is the use of 'mock objects', particularly where my code uses SmartFox. My request handlers could do with testing, and I think mocks are the best way to do it. At the moment I've limited myself to testing only my model, which is not great. It's a start, at least!
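
To get the idea straight in my head, here's a rough sketch of the hand-rolled-mock technique. None of this is real SmartFox API - MessageSender and GreetingHandler are imaginary stand-ins just to show the shape of it: the handler depends on an interface, and the test supplies a fake implementation that records what happened:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Imaginary interface wrapping whatever the handler needs from the
    // server, so the handler can be tested without a running SmartFox.
    interface MessageSender {
        void send(String userName, String message);
    }

    // The class under test depends on the interface, not on SmartFox.
    class GreetingHandler {
        private final MessageSender sender;

        GreetingHandler(MessageSender sender) {
            this.sender = sender;
        }

        void handleJoin(String userName) {
            sender.send(userName, "Welcome!");
        }
    }

    public class GreetingHandlerTest {

        // Hand-rolled mock: records what was sent so the test can check it.
        static class MockSender implements MessageSender {
            String lastUser;
            String lastMessage;

            public void send(String userName, String message) {
                lastUser = userName;
                lastMessage = message;
            }
        }

        @Test
        public void joinSendsWelcome() {
            MockSender mock = new MockSender();
            new GreetingHandler(mock).handleJoin("Alice");
            assertEquals("Alice", mock.lastUser);
            assertEquals("Welcome!", mock.lastMessage);
        }
    }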

Wednesday 23 March 2011

Paper - Trust as a Social Reality

1. Lewis JD, Weigert A. Trust as a Social Reality. Social Forces. 1985;63(4):967. Available at: http://www.jstor.org/stable/2578601?origin=crossref.

Paper on the topic of trust from a sociological viewpoint. Makes the point that trust is only possible between two people (or things), and is therefore not a characteristic of an individual. Trust goes beyond what is known and can be rationally concluded. It is a way to simplify the complexity of society, by removing possible outcomes of our actions that could not be removed by solely rational means (and equally, distrust does the same thing with different outcomes).

If we knew everything about each other, trust would be unnecessary. The trust in a relationship can change as the relationship continues.

The writers split trust into three dimensions: 'cognitive trust' (basing the choice to trust on 'good reasons'), 'emotional trust' (trust as an emotional bond between the participants) and 'behavioural trust' (the way we act when we trust). All three components feed into each other: if someone behaves as though they trust us, we are more likely to decide cognitively to trust them, and 'trust-implying actions' can help to feed the emotional side of the relationship.

Having split the concept of trust into three dimensions, the authors then divide trust into types in which one of these components is dominant. They give nuclear arms reduction negotiations as an example of mostly cognitive trust, and the trust between lovers as an example of emotional trust. They also make the point that in most situations trust is a mix: emotional trust with no cognitive element is blind faith, and the converse is cold-blooded prediction.

The paper goes on to discuss 'system trust', and how changes in trust can be seen as having given rise to our litigious society. There is also some quite interesting discussion of whether the prisoner's dilemma game can really demonstrate trust.

Some of the bigger references in this paper are worth following up on: Luhmann, N. "Trust and Power", and Bok, S. "Lying", I reckon.

Monday 14 March 2011

Glitch

Glitch (http://glitch.com/) is a new MMOG that is still in the alpha stage, but it looks like it could be right up my street. It's a new project by the team who started Flickr. The basic premise is that eleven giants are imagining the world, and the players are just tiny figments of their imagination. There is a fascinating interview with its creator, Stewart Butterfield, on Gamasutra. There's also a player-created wiki.

I think the interview is interesting because he has the same problem with Second Life and There.com as I do: what do you do when you're in there? At the same time, he doesn't like the way that in games like World of Warcraft you don't leave a lasting impression on the world - you kill something, but you can go on the same raid and kill the same boss tomorrow. Glitch is an effort to create a world that is changed by, and responds to, the players, and I think that's quite exciting.

I managed to get an alpha account, and had my first go playing last Thursday (the game is only periodically open at the moment). The focus is on building things and learning skills, but unlike other sandbox-style games (e.g. Wurm) there are also quests, so there's an entry point. The quests are silly too: I had one where, after eating garlic, I had to use my emotional bear (equipped with lips) to kiss five other players. So the quests show you how the equipment can be used, and they encourage interaction - I had to chat to the people I was kissing, as I really couldn't bring myself to just run up, kiss them and run away!

According to that interview, there are going to be multiple ways to group players, not simply guilds/clans/villages. Some will be 'religious' cults, dedicated to one of the giants; he also mentions corporations. It will be really interesting to see what forms, and whether there are any differences in character between the two types of groups.

I'm looking forward to seeing this one grow, and really pleased to be on board at the alpha stage.

Thursday 10 March 2011

Review - Collective Action and the Evolution of Social Norms

1. Ostrom E. Collective Action and the Evolution of Social Norms. Journal of Economic Perspectives. 2000;14(3):137-158. Available at: http://pubs.aeaweb.org/doi/abs/10.1257/jep.14.3.137.

Starts out by explaining (briefly) the "zero contribution thesis" and how this contradicts the evidence we see in real life. Other evidence has been gathered by running "public good experiments", which examine the willingness of players to overcome collective action problems. If everyone is a 'rational egoist' - i.e. out for the greatest profit for themselves - it makes sense that they will never contribute to the public good, because the best outcome for them is to contribute nothing but still receive the good funded by everyone else's contributions. Of course, if everyone reaches that conclusion, no one contributes.

Over many runs of public good games, the following seven findings have emerged repeatedly:

  1. Subjects contribute between 40-60% of their resources in one-shot games or in the first round of finitely repeated games.
  2. Contributions decay as the game continues, but stay well above zero.
  3. If subjects believe others are likely to cooperate, they are more likely to cooperate themselves.
  4. Learning the game better actually leads to more cooperation.
  5. Face-to-face communication increases cooperation; there is less cooperation when the communication is via computers.
  6. Subjects will use personal resources to punish free-riders.
  7. The rate of contribution is affected by contextual factors.

These findings can't be explained by the zero contribution thesis, where all actors are rational egoists. They can be explained by adding different types of actors (Ostrom calls them norm-users) to the model: 'conditional cooperators' and 'willing punishers'. Conditional cooperators are willing to cooperate as long as they believe others will, and if enough people do, they will continue to cooperate. They tend to trust others, but if others are deemed to be free-riding they become disappointed and stop contributing, which in turn means other conditional cooperators also stop. Willing punishers will punish free-riders, verbally or via other means if available. If reward mechanisms for cooperators are available, they may use those instead.
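
To see how that plays out, I knocked up a toy simulation of the conditional-cooperator dynamic. All the numbers here (the endowment, the multiplier, the 30% give-up threshold, the population mix) are my own inventions for illustration, not taken from the paper:

    import java.util.Arrays;

    // Toy public goods game: each round, contributions go into a pot,
    // the pot is multiplied and split equally among all players.
    public class PublicGoodsToy {

        public static void main(String[] args) {
            int n = 10;                 // players
            double endowment = 10.0;
            double multiplier = 3.0;    // pot is tripled, but 3/10 < 1 per
                                        // token, so an egoist contributes 0
            // true = conditional cooperator, false = rational egoist
            boolean[] cooperator = new boolean[n];
            Arrays.fill(cooperator, 0, 7, true); // 7 cooperators, 3 egoists

            double lastAvgRate = 1.0;   // cooperators start out trusting
            for (int round = 1; round <= 10; round++) {
                double[] contrib = new double[n];
                double pot = 0;
                for (int i = 0; i < n; i++) {
                    // Cooperators match last round's average contribution
                    // rate, and give up entirely if it falls below 30%.
                    if (cooperator[i] && lastAvgRate >= 0.3) {
                        contrib[i] = endowment * lastAvgRate;
                    }
                    pot += contrib[i];
                }
                double share = pot * multiplier / n;
                lastAvgRate = pot / (n * endowment);
                System.out.printf("round %d: avg contribution rate %.2f, "
                        + "egoist payoff %.1f, cooperator payoff %.1f%n",
                        round, lastAvgRate,
                        endowment + share,                  // contributed 0
                        endowment - contrib[0] + share);    // contributed
            }
        }
    }

In this crude version the cooperators' contributions decay round by round and eventually collapse to zero, whereas in the real experiments (finding 2) they decay but stay well above zero - presumably the willing punishers, communication and other contextual factors are what keep them up. The egoists also always come out ahead of the cooperators, which shows the free-rider incentive nicely.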

She goes on to discuss the evolutionary argument for the development of these types. It gets really interesting for me where she says that if you knew the personality types of the people you were playing with, you could predict the best strategy every time, and rational egoism would not be a favoured adaptation. However, if nothing is known about the other players, the best strategy is not to trust. So you need to know enough about the people you are dealing with to judge whether they are trustworthy enough to cooperate with. This would be more difficult online, and I would suggest that guilds are one way of finding players you can trust to cooperate with.

There's a further section about institutional rules having a potentially damaging effect on the evolution of good social norms (which also help to dictate whether people will cooperate or not). There is a discussion of field studies on public goods and self-organised collective action, e.g. fisheries. Could that feed into game design at all?

Need to follow up on trust, I think, along with some more of this collective action stuff.