Judges were sent letters asking them to recuse themselves from cases. Voters received fliers just before an election that broadcast candidates’ party affiliations. Drivers were told to break traffic laws.
All are examples of studies engineered by university researchers and presented this month at the annual conference of the American Political Science Association, in San Francisco. All fall under the practice of “field experimentation,” an increasingly popular tactic in the social sciences that has been credited with valuable discoveries about how people truly behave in real-world situations.
The elections experiment, by a researcher at the University of Cincinnati, found that running more black candidates won’t necessarily help Republicans win black votes. The traffic experiment, by a Yale University team in Mexico, found an inverse link between drivers’ wealth and the likelihood that the police would demand bribes from them.
But field experimentation is a fraught topic. Where some researchers see promising results, others see scientists’ behaving as modern-day Allen Funts, devising increasingly clever, sneaky, and even borderline-legal ways of watching people who don’t think they’re being watched.
“The fact that anybody in America can do any crazy thing and it’s legal — that’s not enough of a justification for political scientists to follow experimental techniques that modify the democratic process,” said one critic of the boom in field experimentation, Theda Skocpol, a professor of government and sociology at Harvard University.
“I’m not sure what the answer is here,” she said, “but I am sure there are ethical issues that need to be thought through much more carefully than they are being thought through.”
‘A Window on Real Behavior’
In one especially notorious case last year, Stanford University and Dartmouth College apologized to nearly 100,000 people in Montana who received pre-election mailers with an official state seal. The letters had actually come from researchers at the universities, who were testing what messages would affect the recipients’ votes.
The Montana experiment bore some similarities to the study reported last week by a researcher at the University of Cincinnati. For that project, an assistant professor of political science, David Niven, oversaw the mailing of postcards to 240 black voters in Ohio’s Franklin County. His goal was to determine whether pointing out the Republican Party affiliation of two black candidates in two low-profile local contests would affect their support in last November’s general election.
One of the candidates won, the other lost, and the margins in both cases were far beyond the number of voters contacted by Mr. Niven and his research partners. But neither candidate knew of the experiment in advance, and both issued statements of protest when they learned of it last week.
Rita McNeil Danish, who lost her bid for an open seat on the county’s Common Pleas Court by 11,840 votes, said Mr. Niven had interfered with her election, which was explicitly nonpartisan. “Whether it was 100 households or 1,000, this study, through its mailers, distracted the voters from this message and directed voters toward a more partisan, political perspective,” Ms. McNeil Danish said in her statement.
Clarence E. Mingo II, who was re-elected as county auditor by more than 30,000 votes, said the study’s emphasis on race as a primary factor in the election “insults the intelligence of the African-American voters.” Party affiliation is listed on the ballot for the county auditor contest.
Mr. Niven said the study had managed to glean important insights without causing any real interference. The mailings were sent to voters in 28 tiny subdivisions of precincts, mostly in Columbus, chosen because each region consisted exclusively of black voters. “Absolutely there’s scientific value in that,” he said of his findings, “because you’re getting a window on real behavior.”
But the value of such research, and any interference it may cause, are matters of deep division among Mr. Niven’s colleagues in the social sciences. And just as those divisions are growing, the federal government is about to rewrite the regulations for protecting human subjects of research in a way that would ease the oversight of social scientists.
‘Calibrating’ Ethical Review
The regulations, known as the Common Rule, were written largely to protect patients involved in medical experiments. Universities’ institutional review boards are charged with reviewing the ethics of proposed experiments. But because of the rules’ emphasis on medicine, those boards often have less expertise in the social sciences.
And because many social-science experiments — such as interviewing people or conducting surveys — are seen as posing limited risks, the proposed changes in the Common Rule will expand the exemptions that social-science studies receive from institutional review. The idea behind the new rules, which are expected to take effect late next year, is to “calibrate the level of review to the level of risk involved in the research,” federal officials said this month.
But the federal government and institutional review boards may be underestimating the risks posed by field experiments, several experts said.
“These are not clear black-and-white issues,” said Scott W. Desposato, an associate professor of political science at the University of California at San Diego, who is now editing a book on ethics and comparative political research. In a survey he conducted, Mr. Desposato said, both scholars and citizens expressed negative reactions to deceptive field experiments during elections. And often the scholars conducting field experiments are graduate students or junior faculty members who are under pressure to “have an impact” but have little or no training in ethics, he said.
Field experimentation is scientifically “very exciting,” said Jon A. Krosnick, a professor of humanities and social sciences at Stanford University, and it has “the potential to produce much stronger, more convincing, more reliable and accurate evidence about what causes what in the political domain.” But, he said, “it also runs this risk of having impact on people who didn’t consent to participate in research.”
The agenda at the political-science conference in San Francisco this month featured dozens of field experiments. Among them was one led by a group that included Donald P. Green, a professor of political science at Columbia University. It sent letters to elected judges in the United States, asking them to recuse themselves from cases in which the judges appeared to have ties to one of the parties in the case.
Mr. Green also advised the three Yale researchers who hired four car drivers to commit minor traffic violations in Mexico City so they could tally police requests for bribes. And he was a co-author, with Michael J. LaCour, of a much-publicized study about gay-marriage attitudes last December that was later retracted by the journal Science after Mr. LaCour was found to have fabricated data.
Mr. Green did not respond to requests for comment. One of his co-authors on the judicial-recusal paper, Costas Panagopoulos, a professor of political science at Fordham University, said that the study was still underway and that he did not want to discuss it publicly until the work was completed, to avoid alerting potential targets. Mr. Panagopoulos said, however, that the study had been approved by an institutional review board.
“With our project as well as with much of this field-experimental work, there is generally no deception of any kind that is involved,” Mr. Panagopoulos said. “So whatever information is shared is factual, typically publicly available, information.”
Bypassing the Board
In the case of the Cincinnati study, an institutional review board might consider Ms. McNeil Danish and Mr. Mingo to be the subjects. And boards generally do not regard politicians as needing protection, since they serve voluntarily. That would have been a key criterion for the review board at the University of Cincinnati if it had been asked to decide whether to approve Mr. Niven’s study, said the board’s chairman, Michael Linke, a volunteer associate professor of medicine.
Mr. Niven said he did not seek the review board’s approval because the postcards were mailed to voters by a separate entity that is beyond the university’s jurisdiction. “This was done essentially by an outside group,” he said, “and then I came in and just used the data.”
That sounds reasonable, said Michael M. Binder, an assistant professor of political science and public administration at the University of North Florida, who made his own presentation at the San Francisco conference on experiments designed to increase voter turnout.
The Cincinnati study was “fascinating,” with a “cool research design” that took clever advantage of a unique set of demographics and candidate affiliations in the Ohio election, Mr. Binder said. Mr. Niven “didn’t do anything wrong, he didn’t lie,” Mr. Binder said. But, he added: “If he was sending the mailings, he definitely would need an IRB.”
Mr. Niven said the mailings sent to the Ohio voters came from “the Ohio Blue PAC.” He told The Chronicle he was willing to identify that entity and provide a sample of the postcards it sent, though he hadn’t done so after several days of reminders. His 24-page report to the San Francisco conference did not note that any outside partner was involved.
The question of third-party involvement is critical, Mr. Desposato said. It makes the difference between normal political actors with a “legitimate interest” in affecting a political outcome and a researcher intervening in politics solely for the purpose of studying it, he said.
For now, researchers such as Mr. Niven are caught in a bit of a dilemma. Mr. Niven’s intervention was so small that it didn’t appear to have affected the outcome for either Ms. McNeil Danish or Mr. Mingo. But as a result, Mr. Krosnick said, it’s difficult for him to make broad claims about the wider applicability of his findings. A larger intervention, however, would have raised the risk of actually affecting the election’s outcome.
Mr. Linke said he would have liked the Cincinnati review board to have helped Mr. Niven with that determination. In the future, he said, however the federal government finalizes its proposed changes to the Common Rule, social scientists will need somebody — on a review board, in their department, or elsewhere — to provide outside ethical analysis.
Representatives of the nation’s institutional review boards, due to gather in November for the annual meeting of the Public Responsibility in Medicine and Research group, are keenly aware of the need to act, Mr. Linke said. Ethical norms for social scientists after the Common Rule revision should be a “hot topic of conversation then,” he said.
Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.
Correction (9/14/2015, 1:57 p.m.): This article originally misstated the professorial rank of Costas Panagopoulos, a political scientist at Fordham University. He is a full professor, not an assistant professor. The article has been updated to reflect this correction.