
Faith in polling takes a big hit after Election Day surprises

In this Nov. 8, 2016, photo, a voter fills out his ballot at the Wilson School House in unincorporated Wilson, Idaho. Donald Trump’s victory came as a surprise to many Americans, the nation’s pollsters most of all. Heading into Election Day, most national surveys overstated what will likely be a narrow popular vote advantage for Hillary Clinton and led many to believe she was a shoo-in to win the Electoral College. (AP Photo/Otto Kitsinger)



Monitor staff
Thursday, November 10, 2016

The unexpected success of Donald Trump toppled more political beliefs than you can count, and belief in political polling might be one of them.

“Just look at the polls in New Hampshire, which showed everything from a tossup to a (Granite State Poll released Nov. 3) that had Clinton winning by 11,” said Tim Vercellotti, a political science professor and director of the Western New England University Polling Institute.

New Hampshire wasn’t alone. Not a single poll in Wisconsin, for example, ever indicated that Trump would win that state.

Further, the failures of traditional polling were echoed by the high-tech alternative of online prediction markets. These markets, in which people place real or imaginary bets on who they think will win, have been touted as a way to tap the “wisdom of crowds” and spot the truth via modern methods superior to old-fashioned questions asked over the telephone.

Nope.

“Our market, along with all major polls and the other international prediction markets – some of which paid out early on a Clinton victory last week – predicted Clinton would win the election,” wrote Brandi Travis of PredictIt, a D.C.-based prediction market, in a post-election analysis.

So what happened?

A likely answer is turnout, but that reflects a much deeper problem.

“Trump’s supporters were more enthusiastic and that gets them to the polls. Also, it’s a regularity in American politics that the people who say they will vote but don’t and the people who are not supposed to vote but do both advantage the underdog. This helped Trump, too,” wrote Joseph Bafumi of Dartmouth College in an email response to a Monitor query.

Pollsters know this, of course, and they try to compensate through the mathematical models used to guess how millions of people would answer a question, based on the answers of a few hundred people.

Those models might be mathematical, but they are built on knowledge of human behavior, which is changeable and very hard to pin down. Consider the interaction of schooling and voting.

“A strong predictor of turnout is education: The higher your level, the more likely you are to turn out. That’s a truism going back 50 years in political science research,” Vercellotti said.

But Trump’s appeal made that truism not so true this year, as white voters without a college education came out and overwhelmingly supported him.

“What we may have missed is that people we wouldn’t ordinarily expect to vote showed up,” said Vercellotti.

Complicating matters are the growing use of cell phones, which makes it harder for pollsters to target particular demographic and geographic groups when calling people, and a shrinking willingness of people to participate in polls at all.

The response rate for even the most successful survey is now in the single digits, meaning that for every 15 or 20 people a pollster contacts, only one agrees to talk. (How many times in the past few months did you hang up on a pollster calling your house?) That shortage increases the need for mathematical models to extrapolate from poll answers to the general public.

“We rely heavily on statistical modeling to adjust our samples and make them look representative. This is fine as long as you know what your population looks like on the key variables. But the problem with election polling is that you don’t know who will actually vote, so the adjustments are based on assumptions and guidance from past elections,” wrote Brian Schaffner, professor of political science at UMass Amherst, in an email response.

“In 2016, there is some preliminary evidence that African American turnout may have been lower than in 2012. This would have led us to include more African Americans in our sample than we should have, thereby leading us to overestimate support for Clinton.” 
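Schaffner’s point can be made concrete with a little arithmetic. The sketch below, using entirely invented numbers, shows how the same raw poll answers produce different weighted estimates under two different assumptions about the makeup of the electorate, the kind of turnout-assumption error he describes.

```python
# Minimal sketch of demographic weighting, with invented numbers.
# A pollster combines each group's measured candidate support using
# an assumed share of the electorate; if that turnout assumption is
# off, the overall estimate drifts, as Schaffner describes.

def weighted_estimate(group_support, electorate_shares):
    """Weight each group's support by its assumed share of the electorate."""
    return sum(electorate_shares[g] * support for g, support in group_support.items())

# Hypothetical within-group support for Clinton (not real polling data)
support = {"black_voters": 0.88, "all_other_voters": 0.46}

# Same raw answers, two different turnout assumptions
est_2012_mix = weighted_estimate(support, {"black_voters": 0.13, "all_other_voters": 0.87})
est_lower_turnout = weighted_estimate(support, {"black_voters": 0.11, "all_other_voters": 0.89})

print(f"Assuming a 2012-style electorate: {est_2012_mix:.1%}")       # ~51.5%
print(f"Assuming lower black turnout:     {est_lower_turnout:.1%}")  # ~50.6%
```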

Prediction markets should have avoided that problem, since they depend on the universal traits of greed and a love of gambling, but apparently the people who participate are not representative of the people who vote.

That also seems to be the issue with other analyses based on “big data” that uniformly missed Trump’s ascendancy, including a much-touted program called Ada that the Clinton campaign used to decide where and when to spend money and schedule events.

Included in that category are high-profile “polling aggregators,” which predicted the election outcome by combining lots of polls.
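At their simplest, aggregators do something like the following sketch, which uses invented numbers: a sample-size-weighted average of many polls. Real aggregators add house-effect corrections, recency weights and turnout models, but if the underlying polls all share the same flawed assumption, averaging them reinforces the error rather than canceling it.

```python
# Minimal sketch of a naive polling aggregator, with invented numbers:
# a sample-size-weighted average of each candidate's share across polls.

polls = [
    # (sample_size, clinton_pct, trump_pct): hypothetical polls
    (900, 45.0, 42.0),
    (650, 47.0, 41.0),
    (1200, 44.0, 43.0),
]

total_n = sum(n for n, _, _ in polls)
clinton_avg = sum(n * c for n, c, _ in polls) / total_n
trump_avg = sum(n * t for n, _, t in polls) / total_n

print(f"Aggregate: Clinton {clinton_avg:.1f}%, Trump {trump_avg:.1f}%")
# Averaging reduces random sampling noise, but it cannot correct an
# error that every poll shares, such as a bad turnout model.
```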

Behind all these algorithms and calculations were hard-to-see assumptions about human behavior on a large scale, which is the most complex of complex systems. Therein lies the problem: Compared to creating an election forecast by predicting what 100 million humans will do, creating a weather forecast by predicting what a million-billion-trillion molecules will do is straightforward. And we know how dependable weather forecasts are.

So what does this mean for the future of political polls?

The models will be tweaked, of course, but there’s no guarantee that changes based on what happened this year will be relevant in the next election. Yet political polls will stick around because people are interested in them, despite any protests to the contrary.

“There’s no way to go back – people have become addicted to poll data. There is an insatiable appetite for polling – I find it a problem that we pollsters have created, because our numbers go out there, they rocket around the web, they get lots of attention for maybe one news cycle, and then they’re gone,” said Vercellotti.

Further, there are incentives for those who create independent polls to keep doing so. Lesser-known universities have built national reputations by specializing in polls, most notably Quinnipiac University in Connecticut, but also UNH via its Survey Center and St. Anselm College with its New Hampshire Institute of Politics.

Vercellotti said his own school, Western New England University in Springfield, Mass., is among them.

“Our poll was featured in the New York Times Sunday, boy did I hear lots of praise,” he admitted.

“Universities have benefited from publicity, notoriety – we’ve created this appetite. I’m not sure it’s good in the long run,” he added.

(David Brooks can be reached at 369-3313 or dbrooks@cmonitor.com or on Twitter @GraniteGeek.)