Before it fell apart, Michael J. LaCour’s study of political canvassing tactics was notable not just for what he supposedly discovered, but for how he discovered it.
To figure out whether gay-marriage advocates could actually change people’s minds about the issue by having brief conversations on their doorsteps, Mr. LaCour, a University of California at Los Angeles graduate student, designed an experiment.
Gay and straight canvassers would knock on doors and follow scripts. There would be before-and-after surveys, randomized assignments, controls, and other accouterments of rigorous experimental trials.
The reported findings — that a short conversation with a stranger could change a person’s views on gay marriage — made a huge splash. Mr. LaCour hadn’t just looked at some data on canvassing and public opinion and connected a few dots; he had conducted a controlled experiment that made a serious case for cause and effect.
It turns out that Mr. LaCour probably faked the data. UCLA is still conducting an investigation, but there is evidence that the before-and-after surveys never happened. Science, the journal that published a paper based on the experiment, has retracted the paper.
Still, the fact that Mr. LaCour had embraced an experiment-based approach to political-science research was no accident. Such methods have “captured political science’s imagination,” says Arthur Lupia, a professor at the University of Michigan at Ann Arbor, especially among ambitious young scholars.
They have also created tension. The allure of experiments has led some academics to walk a fine line between observing politics and playing politics. And it’s not always clear where that line is.
The embrace of field experiments in political science marks a historical shift, says Mr. Lupia. For centuries, scholars relied on qualitative methods, and later statistics, to understand how politics works. Since the turn of the century, however, scholars have increasingly teamed up with campaigns and political-action groups to run field tests on the American electorate.
Donald P. Green, the Columbia University professor who wrote the Science paper with Mr. LaCour before requesting that it be retracted, is widely credited with popularizing experimental methods in political science. (Mr. Green declined to comment for this article.)
In 1998, Mr. Green, then at Yale University, designed a randomized experiment with his colleague Alan S. Gerber to test how well various ways of contacting voters in New Haven, Conn. — postcards, phone calls, or canvassing — might persuade them to go to the polls.
Mr. Green and Mr. Gerber later published a paper based on that research in the American Political Science Review. Their work eventually caught the attention of campaign strategists in Washington, who began to bring scholarly methods, and scholars themselves, into the trenches. By 2012 social scientists were all but credited with getting President Obama re-elected.
During that time experimentation became an increasingly appealing method for political-science researchers, especially those who wanted to influence politics rather than merely study it.
“The fact that you’re looking at real-world outcomes and working with real-world organizations means that you’re going to have a more direct effect and policy significance,” says David W. Nickerson, an associate professor of political science at the University of Notre Dame.
The Montana Experiment
But scholars who insinuate themselves into the business of politics risk running afoul of the tenets of their discipline. That risk was apparent in another scandal that recently rocked political science, involving researchers from Stanford University and Dartmouth College.
The researchers sent fliers to 100,000 registered voters in Montana; the fliers included an official state seal and purported to show the political leanings of the candidates for the state’s Supreme Court. The goal was to measure the effect of the fliers on how people ended up voting.
But the researchers had crossed a line. The experiment angered Montana officials, who saw the fliers as deceitful. To make matters worse, the study had not been properly approved by either university’s institutional review board, which is supposed to vet experiments that involve human subjects.
The presidents of Stanford and Dartmouth ended up sending a letter of apology to the voters and citizens of Montana. “No research study,” they wrote, “should risk disrupting an election.”
That line jumped out at Jon Krosnick, a professor of political science, communication, and psychology at Stanford who was not involved in the study. The presidents seemed to be saying that the researchers “had no business interfering in the conduct of politics,” says Mr. Krosnick. But most field experiments in political science, he says, interfere in politics to some degree.
The question is where to draw the boundaries. Mr. Lupia, who edited The Cambridge Handbook of Experimental Political Science with Mr. Green, among others, says the Montana debacle was a wake-up call that such a discussion was long overdue.
Political scientists had been running field experiments for years, but the sample sizes tended to be small enough that a single, ethically dubious experiment couldn’t really foul up an election, says Mr. Lupia. The Montana study, which took aim at roughly 15 percent of the state’s registered voters, was more ambitious than anything he had seen.
“If you had told me four years ago” that academic researchers would run an experiment of that size, says Mr. Lupia, “my head would have fallen off.”
Blind Spots
Experimental techniques are still new enough to political science that researchers are not necessarily good at spotting potential ethical problems, says Mr. Krosnick. The Stanford professor came to the field from psychology, where students are trained early in how to avoid the hazards of running tests on people. In political science, he says, the blind spots can be bigger.
“I think the cart got way ahead of the horse here in terms of ethics,” says Mr. Lupia. “We had a couple experiments, and they were great. We did not have a disciplinary conversation about ‘do no harm.’”
In the eight months since the Montana embarrassment, that conversation has not happened, he says.
Mr. Lupia knows firsthand how much work it can be to reach a disciplinewide consensus on ethics. In 2012 the Michigan professor led an effort to write transparency standards into the ethics guide of the American Political Science Association.
Once he and his colleagues managed to do that, they pushed the major journals in the field to make authors abide by those standards. Among other things, that would mean authors would have to make their data available to other researchers who want to check their work. More than two dozen political-science journals have committed to putting the transparency requirements in place by mid-January 2016.
Mr. Lupia sees a similar way forward for clearing up the ethical haze hanging over field experiments.
“What we could do is have all the leading professional associations and journals say, ‘Look, here’s a set of red lights, and we’re not going to publish anything where one of these red lights is crossed,’” he says. “If you got a coalition of journals and professional associations to do that, it would clear up this problem pretty quickly.”
For now, experimental political science is left to look in the mirror and assess the damage from a pair of black eyes.
The unraveling of Mr. LaCour’s study and the problems with the Montana experiment have important differences. The Montana case has to do with the question of how big a footprint researchers ought to leave. Meanwhile, “the LaCour story has really nothing to do with experiments,” says Mr. Krosnick. “It’s a guy who made up data.”
And yet the alleged sin, in both cases, has been deception. In politics anything goes; not so in academe. Researchers who take the tools of experimentation out of the laboratory and into the field walk a fine line between observing the game and playing it.
“Many of the academic manipulation studies are ones that do involve academics, for their own purposes, disseminating information of debatable accuracy,” says Mr. Krosnick. “And that, to me, is what went wrong [in Montana]. That is a land mine.”
Steve Kolowich writes about how colleges are changing, and staying the same, in the digital age. Follow him on Twitter @stevekolowich, or write to him at steve.kolowich@chronicle.com.