Academics, it’s often said, don’t play well with others. But that cliché doesn’t apply to all of us. Humanists may derive their practices from the myth of the solitary genius laboring in the garret, but the laboratory sciences are justly known for their culture of collaboration.
Bench scientists, as they’re also called, are socialized into lab-based groups. Under the direction of a senior scientist, the staff of a university research lab — graduate students, postdocs, research assistants and other staff members, and sometimes undergraduates, too — become a team that works together on experiments.
When the results of those experiments are published, the articles often have numerous co-authors. Although the first name on the list is traditionally recognized as the lead author (the person who did the largest share of the most important work) and the last position reserved for the director of the lab, there has historically been less distinction among the names in the middle. The ethos is genuinely collaborative.
But that ethos is eroding as scientists increasingly demand individual credit for their contributions, and the shift has rippled across the scientific workplace. It affects the publication practices of journals and the personnel processes in both academia (hiring and tenure decisions) and industry (annual reviews). And it has changed the workings of the all-important funding industry of big-money grants.
"The most likely driver of the changes is individual career incentives, especially promotion and tenure," said Russell Poldrack, a professor of psychology at Stanford University. Authors feel ever more acutely the need to publish as much and as prominently as possible, so they also request explicit credit for their contribution.
As scientists have become more exacting about credit, so have the journals that publish their work. It’s become common in scientific literature to see detailed footnotes that explain who contributed equally with whom, or who did the biostatistics that supported a study of proteins. "There are scientists who get into fights about where their names go on the author lists," Poldrack said.
This change has also rattled the money machine, beginning with the major grant-giving agencies. Federal agencies such as the National Science Foundation and the National Institutes of Health are the largest turbines of the financial engine that powers big science. (Nongovernment sources such as the John Templeton Foundation and the Howard Hughes Medical Institute also award significant grants.)
Federal grants bring not only support but also prestige. Scientists have trouble being recognized on the main stage without a major grant, and many research universities won’t grant tenure to a scientist unless he or she has won at least one. Big grants are also crucial in paying for the labs’ continuing operations. So we shouldn’t be surprised that the practices of large grant-giving agencies ramify outward.
Federal grantors recognize a principal investigator. The PI is the one in charge. (There can be co-PIs on a grant, but in that event, the agencies require a "shared decision plan" — a sort of prenuptial agreement among scientists to determine how to sort things out if there’s a disagreement later on.)
Big grantors also recognize "key personnel" whose central roles are spelled out in the grant proposal. The PI can change key personnel after the grant has been awarded, but only with agency approval. Underneath the key personnel can be dozens of less significant contributors — performers of scientific piecework, you might say. The scientist who does the chromatography for a physics experiment will get a byline credit even if she’s not a partner in generating the main result, but that credit will place her far down the authorship roster.
Grant-giving agencies now demand much more precise assignment of work and credit. "The federal agencies," said one biologist, "are tracking everything now." They want to know where the money went, what was created, how many inventions resulted, how many postdocs were funded, and so on.
Such data analytics reflect the increased emphasis on assessment inside and outside academe, and they are changing the culture of collaboration in laboratory-based science.
There are two main reasons that grantors now demand more granular information about who’s doing what with their money. One is a concern that "key personnel" receive appropriate credit, which helps minimize the potential for misrepresentation of results. Misrepresentations, and even fraud, have become more common as big science gets bigger and bigger, results come faster and faster, and pressure to publish grows.
If a result turns out to be tainted, grant agencies want to know whom to blame. Federal agencies may stop funding an entire institution if it lacks proper procedures to deal with plagiarism, fraud, and other compliance issues.
New expectations of journal authors are likewise emerging, said K. Gus Kousoulas, a professor of virology and biotechnology, and associate vice president for research and economic development in the STEM fields at Louisiana State University. If you’re listed as a key participant in an experiment, he said, "you need to have touched the main result," meaning that you have to have been involved in the process that produced it. Moreover, he added, "you must be able to defend the main result as if you were standing alone in front of a poster" describing it at a conference.
Many leading scientific journals now require a statement of approval of the results in a submitted manuscript signed by all co-authors (or by the PI on their behalf). That sign-off assigns responsibility for the results. Not for nothing is the principal investigator also described as the "legal author."
The other main reason for the careful dissection of authorial responsibility is simply that there’s more competition for grant money now, and everyone is looking for the smallest advantages in the contest to win some of it. That leads to ever finer parsing of the science being proposed, and of the science that gets done.
There are a couple of overarching ironies to all of this. One is that as modern technology allows for easier and faster scientific collaboration, the volume of information that it creates has constrained the easy exchange that once characterized that collaboration.
Another irony is that as the humanities and humanistic social sciences take modest steps toward building their own culture of collaboration, they draw their inspiration from a scientific model that is moving in the other direction, toward the same competitive individualism that these more qualitative fields have sought to escape. Many humanists now believe — rightly, I would say — that trying to figure out what percentage of a jointly produced manuscript may be assigned to each co-author is like trying to unbuild a building. But more and more scientists seem to be trying to assign credit for each brick.
It seems clear that the shift we’re seeing will continue, and the value of collaboration in science will evolve along with its practice. Is this change a good thing? Will we make up in accountability what is lost in collegiality? We’re living that experiment. Let’s see what we do with the results.