Florida was the turning point. On that, they seem to agree.
Other questions were left unanswered in the aftermath of Donald J. Trump’s victory in the U.S. presidential race, and could remain so for a while. Why were so many polling forecasts so off-base? Who is to blame? What went wrong, and how can it be fixed?
What does it mean? Too early to call.
Some believe the final result could shake academic pollsters and the election-forecasting industry. They point to Wisconsin, where recent polls estimated Hillary Clinton to be ahead by four points, six points, eight points. But Donald Trump ended up winning the state by one percentage point.
“It was something bigger than some pollster being better than others — that this is really an industrywide problem,” said Patrick Murray, director of the Monmouth University Polling Institute.
Mr. Murray’s team last polled Wisconsin residents less than a month ago, and judged Mrs. Clinton to be a seven-point favorite. He was confident in the institute’s work. Monmouth is known for its polling acumen; earlier this year, it was the only university-based polling operation to earn an A+ grade from FiveThirtyEight, the data-based news website.
Last night was the worst beat for pollsters since “Dewey Defeats Truman” in 1948, said Mr. Murray. He pointed to reports suggesting that internal polling for both campaigns also underestimated Mr. Trump’s support in key states. “There was something systemically wrong,” he said, with the prevailing methods of using polling data to model electoral results.
Tuesday’s forecasting crack-up caught a lot of people off-guard, but the polling industry may have been due for a reckoning, according to Peter Woolley, a professor of comparative politics at Fairleigh Dickinson University.
Over the last decade, he said, the trappings of an increasingly noisy, hyperconnected society have eroded the formerly reliable methods of divining the mind-sets of strangers from afar. Unbeknown to the journalists and readers who may be inclined to treat polling data as prophecy, public-opinion researchers have convened for years to address the growing barriers to getting good data from willing subjects.
“With or without this election and these results, pollsters have been losing sleep and losing hair on their heads because they understand that the introduction of cellphones has turned the sampling world upside-down,” said Mr. Woolley.
“They understand that response rates have been steadily declining over a long period of time,” he continued. “They have worried about whether they have the right people in their sample. They have worried about whether they can complete interviews with citizens who are inundated with ads, cold calls, scam artists. And so, for many people who answer the phone, their first reaction is skepticism.”
The election results surprised Mr. Woolley nonetheless. “It’s perfectly reasonable to think, ‘There was a state here and there that behaved differently,’” said the professor, “but here you had a pattern all in the same direction, underestimating the vote for Trump.”
Another pattern: According to exit polls, a significant majority of voters who held low opinions of both candidates voted for Mr. Trump.
A Sober View
The good news? Patterns can be studied.
Mr. Murray, the Monmouth polling director, pointed out that Britain’s vote to leave the European Union, Colombia’s rejection of a peace deal with rebels, and the election of Mr. Trump make up a set of recent case studies in which citizens, animated by anti-establishment anger, turned out in greater numbers than polling suggested.
Several polling directors suggested that academic institutions could lead the way in sorting out what happened in the states where even late-stage polls gave short shrift to Mr. Trump’s support.
“I think pollster universities are very inclined to be transparent, to share the data, and to say, Let’s figure this thing out,” said Don Levy, director of the polling institute at Siena College. “We don’t do this for money. We’re not working for candidates. We will continue to study the process.”
But university pollsters may not agree on how much soul-searching their own operations need to do.
Meanwhile, Mr. Levy said all of the institute’s New York State estimates turned out “perfectly right.”
“Over all,” he said, “I feel pretty comfortable with how we did.”
On Wednesday, Marist College sent out a news release boasting that its pollsters had predicted Mrs. Clinton’s margin of victory in the popular vote within a fraction of a percentage point.
In an interview, Lee M. Miringoff, director of the Marist Institute for Public Opinion, urged calm about the failure of many polls to predict the outcome of the election. Polls have margins of error, he said, and many of Tuesday’s “misses” fell within those margins. Mr. Miringoff also advised against reading too much into the relative success of individual polls. “Today’s outlier,” he said, “might be tomorrow’s conventional wisdom.”
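For readers curious about the arithmetic behind Mr. Miringoff’s point, the standard 95 percent sampling margin of error for a single poll can be sketched as follows. This is a simplified textbook formula, not the methodology of any pollster quoted here; the sample size and candidate share below are illustrative assumptions.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% sampling margin of error for a proportion p from a simple
    random sample of n respondents (ignores design effects and weighting)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical state poll of 800 likely voters with a candidate at 48%:
moe = margin_of_error(0.48, 800)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.5 points
```

Note that the error on the *gap* between two candidates is roughly twice the error on either candidate’s share, which is one reason a several-point polling lead can still fall within a poll’s stated margin.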
As far as lessons go, both those who make polls and those who read them would do well to re-examine the assumptions they make about data and its predictive powers, said Mr. Woolley.
“We’ve overindulged ourselves in the precision of polling,” he said, “and a more sober view would recognize that all along it is the scientific method to arrive at an estimate.”
Another lesson is that while flawed assumptions can make polls inadequate for predicting exactly who will show up at the polling place and what they will do in the privacy of the voting booth, polling data does help people understand the dynamic between campaigns and voters as an election unfolds.
“Polls remain useful, they just provide the storyline of the campaign,” said Mr. Miringoff. “Without the polls, people would be lost.”