I am currently working on an uncontentious, anonymous online survey of professors. Last month, the research ethics board at my (Canadian) university told me that I was not approved to run this survey unless I employed a crucial safeguard: I had to warn these professors that if they filled out the survey in a public place, then someone might see their screen and that this could be an invasion of their privacy. Let me repeat that: The board that decides whether or not I can do my research insisted that I must tell people with Ph.D.s that using their laptops in public might allow someone to see their screen. How did we get here?
At a basic level, the arguments in favor of ethical review of research are well known. Before formalized screening for ethics became routine, many horrific medical experiments were conducted on vulnerable people, who were often deceived about the nature and risks of the research. This history goes well beyond Tuskegee. Beginning in the 1950s, Albert Kligman, a University of Pennsylvania dermatologist who was one of the inventors of the acne medication Retin-A, intentionally exposed prisoners to herpes, staphylococcus, athlete’s foot, and the chemical agent dioxin. (Penn Medicine apologized last year for Kligman’s research misconduct.) If you want a jarring experience, consider reading one of his published conference proceedings, which include the results of an experiment to test anti-fungal agents on children with developmental disabilities. Kligman deliberately infected some children with fungi in order to test various strengths of anti-fungals, reporting that “one child in a state mental institution was able to tolerate the formalin treatment for five hours.” The conference proceedings include the subsequent discussion, during which no one even glancingly referenced the pain he inflicted on these children. The doctors who commented apparently considered such practices to be normal.
The problem was not limited to medicine. In the 1960s, the Harvard psychologist Henry Murray conducted experiments that entailed subjecting undergraduate students to, in his words, “vehement, sweeping, and personally abusive” verbal attacks over a period of years. One of those undergraduates was Ted Kaczynski, “the Unabomber.”
In response to these and other ethical breaches, countries around the world began setting up institutional review boards for human-subjects research. In 1962 the NIH started requiring research participants to sign consent forms. NIH policies spread to other research areas in response to the 1974 National Research Act, and they have spread further since then. The goal was to ground research in principles such as informed consent, and to ensure that research subjects would gain some benefit from their participation in studies. These principles were originally designed for medicine, and it was never entirely clear how neatly they would translate across disciplines. (This problem was exacerbated by the fact that social scientists were underrepresented on the federal commission tasked with drafting rules for ethics review.)
It was apparent early on that the process could run amok. In 1966, the sociologist Gresham Sykes worried that some review boards might be “overzealous,” hewing to “the strictest possible interpretation” of institutional standards. He predicted that relying on experts from a wide range of fields — as IRBs typically do — could produce committees “incapable of reasonable judgment in specialized areas.” And he worried that review boards might simply function as rubber stamps, offering “the appearance of a solution, rather than the substance.”
Sykes’s fears have come to pass. A quick search turns up numerous examples of IRB overreach. For example, the psychiatrist Scott Alexander Siskind tried to get approval for a study of bipolar disorder for two years before finally giving up. He worked with psychiatric patients, who weren’t allowed access to pens lest they stab themselves, but his IRB would not accept consent forms signed in pencil. Another IRB objected to research on math education because “subjects may feel bored or tired during interviews.” While discussing my IRB issues with others, I heard from a professor who studies mosquitoes and whose IRB said that if he wanted to ask people about their past mosquito bites, then he should have trauma counseling available. There are many more such examples.
Some of the degeneration of IRBs is a result of the usual suspects: fear of legal risk on the part of universities, petty bureaucrats with delusions of heroism and an infantilizing view of research subjects, generalized incompetence operating in an environment with little accountability, and genuine resource constraints.
But some of it is also because of the Procrustean task of fitting research ethics into the machinery of university administration. When our completed research is peer reviewed before publication, that review is typically done by scholars who are not only within our discipline, but experts in our research niches. Most researchers would find it comical to imagine a sociologist reviewing an article by a biologist and vice versa. We would expect that the results of such a process would not be particularly informative and that such non-expert reviewers would very often disagree on their assessments of the research. We would be right.
But when it comes to reviewing research before it is carried out, we pretend that a grab bag of academics, administrators, and nonacademic community members can make these sorts of evaluations. We trust them not only to assess whether the proposed research clears an ethical bar, but also to make recommendations on how to improve the ethics of the research.
Unsurprisingly, this often goes badly. For example, when a group of doctors from different institutions submitted the same low-risk, education-related proposal to their six different IRBs, the responses were wildly inconsistent. One IRB approved the study as written, while the remaining five recommended up to 24 changes (though they did not agree on which parts of the proposal ought to change). The fastest review board finished its work in six days, and the slowest took over six months (at which point the researchers gave up and dropped that institution from the research plan). This is just one study, but there are others.
And medical research is an easy case for IRBs, because medical researchers have consistently been a part of the discussions around regulating research ethics. Most rules were written with the small but real risk of physical harm from medical research in mind. Things are more challenging for qualitative researchers or experimental social scientists. It all adds up to a major burden. In a survey of over 11,000 researchers with federal grants in 2018, over two-thirds of those who worked with human subjects said that dealing with their IRB was a “substantial” part of their workload. About one-third of the same group said that improving IRBs was either a “high” or their “highest” priority for reducing unnecessary administrative burden.
This absurdity is allowed to stand for two reasons. First, as far as I can tell, nearly all proposed research eventually gets approved. If everything gets through eventually, then IRB issues are an annoying detour rather than a dead end. In this situation, busy professors may decide that challenging the IRB isn’t a good use of their time. Of course, it isn’t clear that everything getting through is a good sign. Obviously, the presence of an IRB influences the kind of research that people attempt — but it still seems that a filter that lets everything through might not be a good filter.
Second, the unpredictability of the ethics-review system hurts most when you are facing time pressure. This means that it most hurts graduate students, researchers on short-term contracts, and tenure-track professors, probably in that order. In other words, the academics most affected by the present IRB morass are also worst-placed to challenge the system.
What can be done? The first step is to realize that the current system is strange. If one thinks that expert peer review is important to evaluating research ex post, then expansive ex ante ethics screening by people from other disciplines — as well as nonacademics — is bizarre. Consider how much specialized knowledge is involved in understanding the risks and expected benefits of any research project. A study that entails interviewing survivors of a natural disaster, for instance, might risk re-traumatizing participants. It might also benefit participants, offering them a chance to work through a difficult experience. It might be harmful for some participants and helpful for others, along lines that are difficult for even the study’s authors to predict. (After all, researchers don’t propose studies when they already know how things will shake out.) Assessing the ethics of a study like this is a task for psychologists with expertise in trauma — not for random volunteers.
And of course all of this presupposes a medical framing in which the benefits of research plausibly accrue to the participants. But much of social science is not like this. Consider a case where a researcher wants to understand how to reduce police corruption, and interviews police officers as part of that research. The potential beneficiaries here are the public — the interviewed officers, if anything, stand to be financially harmed by the research. But do we really think researchers should be obligated to benefit corrupt police officers just because those officers agreed to be interviewed?
One might think the solution to all this is to fully inform committees about the risks and benefits of each research project, but this is harder than it sounds. Weighing risks and benefits — and understanding when a proposal is under- or overstating them — is genuinely difficult. It is silly to expect non-experts to be able to reliably pass this kind of judgment. Grant-making organizations, for example, assess research proposals by relying on either expert internal reviewers or peer review. They do not just hand a stack of grant proposals to a grab bag of local professors and community members.
So the current system is far from ideal. However, if we’re stuck with review by diverse ethics boards, then that review ought to cover only a minimal core of concerns. This is especially true for low-risk research such as anonymous online surveys. Surprisingly, this approach is already codified in many countries, including the U.S. The fact that many IRBs in the U.S. continue to unreasonably scrutinize low-risk research is a problem of local practice and institution-specific IRB culture. Canada, however, lacks exemptions for benign behavioral research. Adding such an exemption would be a meaningful improvement to the Canadian status quo.
Other small tweaks could lead to large time savings. Currently, if an approved protocol changes in minor ways — say, a small revision to the title of the research project — then the researchers must file an amendment to their protocol and wait until this is approved by their ethics board, which could take months. This requirement could be modified so that researchers simply self-certify that such amendments meet a de minimis standard, avoiding another back-and-forth with their ethics board. In a similar spirit, U.S. law allows researchers to self-certify that their research is exempt from review, and a law professor at the University of Chicago has created a simple online form that helps researchers make this decision. A large share of social-science research is exempt, and simple self-certification could drastically reduce red tape and speed up research. All that is needed for this change to be adopted is for universities to allow researchers to use such tools.
The truth is, IRBs will nearly always lack the expertise to genuinely promote ethical research across diverse disciplines. Instead, they often function as a kind of legal screen: They’re the price you pay to have your university’s legal team defend you if you get sued because of your research.
This insurance can be provided by entities other than universities. Private IRBs exist, but currently they focus almost exclusively on medicine — and universities often do not let people bypass their institutional IRB by gaining approval elsewhere. The presence of an outside option for review would help researchers with less-than-competent review boards. A private IRB system might also result in more disciplinary specialization among IRBs, which could lead to a more consistent and useful process.
Research, including social-science research, can cause harm to subjects. A research project deserves scrutiny to the extent that it can cause harm. However, the current mode of scrutinizing ethics in social science is broken. The burden of this broken system falls on all researchers, but especially on those who are early in their careers.
The burden also falls on research subjects and the people who could benefit from our research, if only we were allowed to do it. Consider one last example: The charity GiveDirectly sends money directly to the cellphones of poor people, mostly in low-income countries. The group regularly runs experiments in partnership with academics in order to measure its impact and improve the effectiveness of its programs. According to one of its founders, one of their first experiments was delayed for over a year — depriving people living on a few dollars a day of a large cash transfer — because an IRB was afraid that giving money to poor people might hurt them.
Ryan Briggs is an associate professor in the Guelph Institute of Development Studies and the department of political science at the University of Guelph.