In the heat of the moment, online players often hurl abuse at one another: crowd-based justice can help bring them back into line
THERE is a place where 75 people die every second, hacked to bits with giant flaming swords or blasted into the air with powerful magic. Welcome to League of Legends (LoL), an online world in which 3 million gamers are playing at any given time.
Games are competitive and tempers often run high, so abusive messages are commonplace. But a new system has shown that not only can such bad behaviour be dealt with by the crowd, it is also easy to modify.
"We can create behavioural profiles for every player in the game," says Jeff Lin, lead designer of social systems at Riot Games, which manages LoL. The profiles measure how many times users curse or insult their teammates and opponents during a game. It is not just about filleting out the handful of regularly abusive players among LoL's 30 million subscribers: most bad behaviour consists of outbursts from players who are normally well behaved.
"The question is how do we stop the spread of bad behaviour?" Lin says. A system called Tribunal, demonstrated at the Massachusetts Institute of Technology's Game Lab last month, could be the answer. "Tribunal aggregates all the negative behaviour cases, including chat logs, and bubbles them to the top," Lin explains.
These cases are presented back to the community in the game's forums, where other players vote on whether the behaviour was acceptable or not. Particularly egregious cases, as judged by the votes, can lead to the offending player being banned.
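The article describes Tribunal's flow, aggregate a case, put it to a community vote, punish the worst, but not the mechanics. A minimal sketch of that flow follows; the case fields, the 20-vote minimum and the 0.8 punish ratio are all assumptions made for illustration, not Riot's real parameters.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One Tribunal-style case; fields and thresholds are assumptions."""
    player: str
    chat_log: list
    votes_punish: int = 0
    votes_pardon: int = 0

    def add_vote(self, punish: bool):
        if punish:
            self.votes_punish += 1
        else:
            self.votes_pardon += 1

    def verdict(self, min_votes=20, punish_ratio=0.8):
        total = self.votes_punish + self.votes_pardon
        if total < min_votes:
            return "pending"   # not enough reviewers yet
        if self.votes_punish / total >= punish_ratio:
            return "punish"    # egregious by community consensus
        return "pardon"

case = Case("player42", ["you idiot", "uninstall the game"])
for _ in range(18):
    case.add_vote(punish=True)
for _ in range(3):
    case.add_vote(punish=False)
print(case.verdict())  # 'punish': 18 of 21 votes clears the assumed 0.8 bar
```

Requiring both a minimum number of reviewers and a strong majority is one plausible way to keep a single angry voter from banning anyone, which is the point of putting cases to a crowd in the first place.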
"If players say 'fag' or the N-word, those cases are the most highly punished," says Lin. The system has also led to new standards ? swearing is now allowed in LoL, but not if it is directed at another player.
The Riot team has also tested other ways of nudging player behaviour. They found that simple messages displayed during load screens can have a big effect in the game that follows. For example, advising players that their teammates would perform worse if harassed after a mistake resulted in an 11 per cent reduction in offensive language, compared with games where no tips were shown.
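Riot reported the result but not the experimental plumbing. A sketch of such an A/B test might look like the following, with the tip text paraphrased from the article and the 50/50 split, function name and seeding trick all assumed for illustration.

```python
import random

# Tip text paraphrased from the article; the 50/50 split and the
# per-player seeding are assumptions for this sketch, not Riot's setup.
TIP = "Teammates perform worse if you harass them after a mistake."

def loading_screen_tip(player_id):
    """Stable per-player bucket: half see the tip, half see nothing."""
    rng = random.Random(player_id)   # deterministic for a given player
    return TIP if rng.random() < 0.5 else None

# After enough games, compare offensive-language rates per bucket:
# reduction = 1 - rate_with_tip / rate_without_tip   (reported: 11 per cent)
```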
Lin says that systems like Tribunal could be useful if applied to other online systems. Web communities like Reddit already rely on their users to shape the community and to down-vote offensive posts out of view. These mechanisms allow societal norms to emerge in online communities where none existed before, just as juries have done for hundreds of years, says Cliff Lampe at the University of Michigan in Ann Arbor. "This really helps to shape sites," he says. "It used to be that sites would rise and fall quickly, like MySpace, but these social structures lead to more sustainable sites."
This article appeared in print under the headline "Moderate your language"