New study: why you shouldn’t trust internet comments
The “wisdom of crowds” has become a mantra of the internet age. Need to choose a new vacuum cleaner? Check out the reviews on Amazon. Is that restaurant any good? See what Yelp has to say. But a new study suggests that such online scores don’t always reveal the best choice. A massive controlled experiment on web users finds that such ratings are highly susceptible to irrational “herd behavior” – and that the herd can be manipulated.
Sometimes the crowd really is wiser than you. The classic examples are guessing the weight of an ox or the number of gumballs in a jar.
But what happens when the goal is to judge something less tangible, such as the quality or worth of a product? According to one theory, the wisdom of the crowd still holds: aggregating people’s opinions produces a stable, reliable value. Skeptics, however, argue that people’s opinions are easily swayed by those of others. So nudging a crowd early on, for example by exposing it to a few unusually positive or negative opinions, could steer the crowd in a different direction. To test which hypothesis is true, you would need to manipulate the opinions of huge numbers of people, exposing them to false information and measuring how it affects their judgments.
A team led by Sinan Aral, a network scientist at the Massachusetts Institute of Technology, did exactly that. Aral has been secretly working with a popular website that aggregates news stories. The website allows users to make comments about news stories and vote each other’s comments up or down.
For five months, every comment submitted by a user randomly received an “up” vote (positive); a “down” vote (negative); or as a control, no vote at all. The team then observed how users rated those comments.
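The protocol described above is a three-arm randomized experiment: each new comment is assigned, at random, a fake up vote, a fake down vote, or no vote at all. A minimal sketch of that assignment step might look like the following; the arm names, uniform probabilities, and function names here are illustrative assumptions, not details from the study.

```python
import random

# Hypothetical sketch of the experiment's randomized design: each newly
# submitted comment is placed into one of three treatment arms.
# Uniform assignment is an assumption; the study's actual proportions
# may have differed.
TREATMENTS = ["up", "down", "control"]

def assign_treatment(rng: random.Random) -> str:
    """Randomly assign a newly submitted comment to a treatment arm."""
    return rng.choice(TREATMENTS)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
counts = {arm: 0 for arm in TREATMENTS}
for _ in range(10_000):
    counts[assign_treatment(rng)] += 1

# With uniform random assignment, each arm should receive roughly a
# third of all comments, which is what lets the no-vote arm serve as
# a fair baseline for the manipulated arms.
print(counts)
```

Random assignment is what makes the later comparisons meaningful: any systematic difference in how users subsequently rate the three groups of comments can be attributed to the fake initial vote rather than to the comments themselves.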
At least when it comes to comments on news sites, the crowd is more herdlike than wise. Comments that received fake positive votes from the researchers were 32 percent more likely than control comments to receive a positive vote from the next viewer, the team reported online last week in Science. And those comments were no more likely than the controls to be down-voted by the next viewer to see them.
By the end of the study, positively manipulated comments got an overall boost of about 25 percent. However, the same did not hold true for negative manipulation: the ratings of comments that got a fake down vote were usually negated by an up vote from the next user to see them.