A recent report from the Institute of Medicine gives academe and government a new opportunity to think and act differently about promoting integrity in scientific research.
Speaking as a member of the IOM committee responsible for the report (although not representing either the committee or the institute), I believe that we now can change the entire tone of the discussion about the conduct of science.
In the past, scientists and policy makers have asked questions that are basically negative: What is misconduct? How can it be prevented? What should be done to protect whistle-blowers and to provide due process to researchers accused of misconduct? The new report, in contrast, raises positive questions: What is integrity? How do we find out if we have it? How can we encourage it?
Many scientists have found it difficult to invest much energy in the negative goal of preventing research misconduct. Almost every researcher considers misconduct to be pathological and destructive behavior, but also very rare.
At the same time, as I noted in an earlier essay in The Chronicle (“The Practice of Science at the Edge of Knowledge,” The Review, March 24, 2000), the everyday practice of science can be remarkably ambiguous. In the words of the National Academies’ 1992 report “Responsible Science,” sometimes “the boundary between fabrication and creative insight may not be obvious.”
For instance, when it comes to distinguishing data from experimental noise, heuristic principles can be helpful, but an investigator’s experience and intuition -- in short, his or her creative insight -- will determine the final interpretation. To some, the selection of results might appear arbitrary and self-serving, or even an example of misconduct. The case of the Nobel laureate Robert A. Millikan, who selected 58 out of 140 oil drops from which he calculated the value of the charge of the electron, provokes precisely that kind of debate.
Not only is data selection a common and necessary feature of much science; articles announcing a scientific discovery also typically do not describe what actually happened during the research. Instead, as the Nobel laureate François Jacob wrote in The Statue Within: An Autobiography, “writing a paper is to substitute order for the disorder and agitation that animate life in the laboratory ... to replace the real order of events and discoveries by what appears as the logical order, the one that should have been followed if the conclusions were known from the start.” That reconstruction of reality into a presentation of discovery according to the inductive format provoked the Nobel laureate Peter B. Medawar to write his essay “Is the Scientific Paper a Fraud?”
Because the practice of science can be so ambiguous, too much regulation in the attempt to prevent research misconduct is risky. It has the potential to discourage novelty and innovation and, as a result, to damage science.
Promoting integrity in science has both individual and institutional components: encouraging individuals to be intellectually honest in their work and to act responsibly, and encouraging research institutions to provide an environment in which that behavior can thrive. If we think of integrity in that way, it becomes an outcome that can be analyzed using measures for individuals (e.g., ethical sensitivity) and institutions (e.g., moral climate) that social and behavioral researchers have already devised. Those researchers will have much to contribute if, as the Institute of Medicine report proposes, federal agencies and foundations that support research finance studies designed to identify, measure, and assess the factors that influence integrity in research.
The IOM report suggests that measuring integrity as an institutional outcome would require both external peer review and self-assessment, and recommends that such measurement become an element of institutional accreditation whenever possible. External peer review is essential in measuring integrity -- as it is in the practice of science itself. Individuals and institutions alike can aim to be objective but nonetheless be fooled by illusion or self-deception.
Presumably, an institution’s self-assessment would include asking individual investigators what kinds of things they do in their research groups to encourage integrity. What a change of emphasis that would be! Although training in the responsible conduct of science exists in various formats at many research institutions, it hardly ever occurs within individual research groups. As long as the apprentice style of science continues, young scientists will be influenced most by what their mentors say and do in practice, not by what professors teach them in classrooms.
Asking investigators how they encourage integrity in their research groups might be just the impetus to get that practice started, if it is not already routine. I suggest a short survey that would ask if they have explicitly discussed any of the following with members of their research teams: the kinds of information to be recorded in notebooks, and in what detail; the basis on which authorship of papers is decided; the difference between heuristic experiments (from which one learns something new) and demonstrative experiments (which do not necessarily extend an investigator’s knowledge but often are necessary for presenting the work to others); reasons for including or excluding data from a presentation in a seminar or from a manuscript; whether it is all right to discuss unpublished findings with researchers in other labs; and what researchers should know about recent and past published literature in their field.
Given the ambiguity of science, I don’t believe there is a single correct way to handle any of those issues. Discussing them explicitly, however, introduces moral reasoning and professional values in the context of the research group -- the place where we need them most.
Finally, the IOM report recognizes that research organizations operate within a broad context. Government regulations and financial decisions can have as much impact on research groups as local institutional policies do. We hear a lot about the economic impact of governmental policies, as well as their environmental impact. What about their ethical impact?
Many scientists, policy makers, and informed members of the public view individual and institutional conflicts of interest as the greatest ethical problems in science. Individual conflict of interest became particularly problematic in the biomedical sciences in the 1960s, when federal support for research began to be used to pay faculty salaries and professors themselves became responsible for obtaining those funds.
That so-called soft-money support has increased pressure on faculty members to be productive: if your salary comes from soft money and you lose your grant, you may lose your position at the same time. Individual conflict of interest was further exacerbated, and institutional conflict of interest grew in importance, with the passage of the 1980 Bayh-Dole Act, which permitted institutions to own what their employees had invented with the help of federal funds. That law pushed research universities and medical centers into the biotech business.
Whatever their long-term effects, those two developments are good examples of how financial and political decisions can affect the ethics of research practice. Beyond the recommendations of the IOM report, an excellent additional step in encouraging research integrity would be for the government and universities to pay closer attention to the ethical impact of their decisions -- that is, whether those decisions would promote or discourage ethical research behavior by individual scientists.
Frederick Grinnell is a professor of cell biology and the director of the program in ethics in science and medicine at the University of Texas Southwestern Medical Center at Dallas.
The Chronicle Review, Volume 49, Issue 6, Page B15. http://chronicle.com