In the spring of 2008, I was asked to testify before the Senate Committee on Commerce, Science, and Transportation about network neutrality. I had testified before the same committee on the same subject six years before. But now the issue was central in a presidential campaign, and interest had become much more focused.
As I sat at the hearing table, waiting for my chance to speak, I received a message from Sen. John Sununu, Republican of New Hampshire: “You shouldn’t be shilling,” the message scolded me, “for big internet companies.” I was stunned as I realized that Sununu thought I was being paid to give testimony. And then I recognized that of course he thought I was being paid. Practically everyone in my field now gets paid to give public testimony. (“Practically,” but not everyone, and certainly not me.) One colleague had been paid $50,000 to write an essay about cable regulation. I had known of the payment and was surprised it wasn’t noted in the acknowledgments. “I forgot,” he told me.
To consider whether such payments are a problem — that is, whether they corrupt the academy — we must ask two questions: First, do they change the testimony of the person paid? Second, whether they change it or not, do they change the public’s trust in that testimony?
Sununu’s email to me evinced the second concern quite effectively. He assumed I was being paid, and that assumption was quite fair; lax standards for reporting such conflicts create strong pressure among academics to accept such payments — there’s little good in rejecting them, and little harm in accepting them. And as a senator, Sununu obviously understood the careful dance that would have been behind such a payment. He wouldn’t have believed I was bought outright. No one “sells” his testimony in such a crude way. No one has to. Instead, he would believe that if I were being paid, I would be sensitive to bending my words in ways that made their sponsor happy. And to the extent that he believed that, he would discount my words appropriately. Such is the nature of Washington. Why wouldn’t it be the nature of the academy?
In this sense, trust in a particular role, such as that of an academic, or a doctor, or a psychiatrist, is a collective good. You could be the most trustworthy used-car salesman in the world, but good luck persuading the average customer to trust you. The honest used-car salesman is a chump, at least if there’s no way to demonstrate his difference from others “like him.” He has little incentive to behave better than they do, and maybe real incentive to behave worse.
It therefore makes perfect sense for a profession to police trust — to restrict individuals from converting that collective good into a private gain. If academics are perceived as shills, their potential contribution to the search for sensible policy is weakened.
Paid testimony is just one example of financing that may influence academics. As government funding for higher education falls and the cost of administration rises, universities and research departments are increasingly keen to find other sources of revenue. This pushes academics to spend more and more time finding private sponsors, who have many reasons for funding research. Sometimes the reason is simply charitable — a way to give back to the school. Sometimes it is personally motivated — parents who lost a child to a rare disease may fund research for that disease. And sometimes it is commercial — for-profit companies eager to use the talent and insight of researchers to advance the science, as well as the economics, of the commercial entity.
In each case of private sponsorship, the best institutions devise ways to minimize any bias, or any suggestion of bias. At the University of Chicago, where I first taught, we’d receive a summer stipend to fund our research and writing. At the end of the summer, the dean would inform us whom we should thank in the work we had completed. That sequence was important. When you did your work, you had no way to know whether it would interest or offend the funder or, if you cared, how to tailor your work to fit the funder’s interests. Likewise, at Stanford, where I helped found the Center for Internet and Society, the dean was quite clear that I would have no obligations to raise money for the center, and its work would be independent of fund raising. We were given a budget; it was the dean’s job to raise money for that budget.
Yet this effort to minimize influence or bias is easier for some institutions than for others. Harvard or Stanford or the University of Chicago can afford to do the right thing, as they have an enormous potential for fund raising. Other institutions are less free. And we should think carefully about the pressures that affect those institutions as they craft the rules that will guide their faculty members. As one dean at a major American law school told me, “The funders are becoming much more transactional. They want to know what their money will get them.” That’s fine for them. But the real question concerns what it does to the researchers. Do they internalize the pressure of that transaction? Does it affect how they do their research?
Let’s imagine first that it does not. Whether or not the researcher is corrupted, there is a problem if trust in the research is weakened.
Dennis Thompson, father of the field of “institutional corruption,” writes that rules against conflict of interest “do not assume that most physicians or researchers let financial gain influence their judgment. They assume only that it is often difficult if not impossible to distinguish cases in which financial gain does have an improper influence from those in which it does not.”
If the public doesn’t believe your research because of the way it was funded, that decreases its value — at least in those contexts in which you depend upon the public’s trust. Consider pharmaceuticals: If your profits depend upon doctors’ believing in the efficacy of your drugs, then skepticism induced by the way the drug’s research was funded could weaken your profits. This is not just speculation: A study by Aaron Kesselheim and his colleagues, funded by the Edmond J. Safra Center for Ethics, at Harvard, found that physicians were half as willing to prescribe hypothetical drugs described as having been studied in industry-funded trials as they were to prescribe drugs said to have been studied in NIH-funded trials. Half as willing! That’s a pretty significant effect, driven solely by the way the research was funded.
If, in fact, industry funding does not corrupt clinical trials, maybe we should spend our time educating those physicians and the rest of the public, rather than trying to muck about with how research gets funded. This is a fair demand. And indeed, unless we can educate the public about how an otherwise benign influence is still likely to corrupt, there will be little resolve among most to address the problem. An analogy may set up the point. Until the late 19th century, doctors generally considered the germ theory of disease to be absurd. How could something invisible wreak so much havoc? To believe in germs, doctors needed both data about their effect and a theory about how they worked.
The evidence for academic corruption and how it might work is much greater than I can cover in this essay. But the outlines are clear enough, and they follow two distinct tracks. The first traces an obvious economy of influence that is pervasive in social and professional life. The second unpacks more completely the not quite obvious ways in which our brains confront these social and professional contexts.
First let’s consider the influences. Luigi Zingales, a professor at the University of Chicago’s Booth School of Business, has studied academics researching finance who, he explains, understand that they will be able to get the data they need for their work only if the bank or financial institution grants access. But that institution is unlikely to give permission to critics. As Zingales writes, academic economists “generally have to develop a reputation for treating their sources favorably.”
That dynamic is, of course, not present in every field. You can study the Confederate government without needing to strike a deal with Jefferson Davis. But Zingales points to a mechanism that we all recognize more generally: The most natural of human traits is the effort to please. We try to please our bosses. We try to please the people we must deal with every day at work. Whole libraries are filled with studies about the efforts of regulators to please those they regulate. As Zingales shows, sometimes the effect of such efforts in the aggregate steers a field away from certain truths. And when we can see this systemic effect, we need to craft incentives to avoid it.
But isn’t the problem eliminated if we just deploy ethical souls? Couldn’t we better train (or discipline) academics so they are not improperly influenced?
It’s here that we confront the most important assumption behind the belief that we need not worry about money’s corrupting influence: that if people would only act decently, or responsibly, they could resist the influence. Or, more strongly, that the only people who are bent by this kind of influence are weak or corrupted people. And thus, if it is someone you trust, there is no reason to worry. Call this the “(ethically) tough-guy assumption.” Like avoiding a cupcake, or a drink before driving, the issue is simply one of will and determination, nothing more.
Here’s what we know about the tough-guy assumption: It is completely false. The influences that operate to bend judgment don’t operate at the conscious level. They don’t announce themselves. Indeed, for many of these influences, being immune to them would not make someone a “strong person.” It would make him a sociopath. The psychological influences that institutional corruption must reckon with are the product of thousands of years of evolution. And that evolution has built us to respond socially in certain predictable ways.
Drawing on a wide range of psychological research in their essay “Physicians Under the Influence: Social Psychology and Industry Marketing Strategies” (2013), Sunita Sah and Adriane Fugh-Berman outline just how industry influence can affect even the ethically engaged professional. Here are some of the biases they outline, all of which have been demonstrated by numerous studies.
The Belief in Biased Information: Medical professionals commonly believe they can distinguish between objective truth and the marketing fluff of pharmaceutical companies. Multiple studies show they are wrong, including studies demonstrating that their beliefs about promoted drugs correlate with promotional material rather than scientific fact, and that people base their beliefs on initial information even after discovering the information was flawed or irrelevant.
The Belief-in-Self Bias: People underestimate their own biases. For example, doctors believe that their own prescribing behavior is not affected by drug promotion, while believing that most other doctors’ prescribing behavior is affected — two beliefs that cannot be true in the aggregate. As Carol Tavris and Elliot Aronson describe, it’s not the bad person who’s most vulnerable to these corrupting influences. It’s the good person. The thief knows he’s a thief. But the good person doesn’t.
The Entitlement Bias: Professionals have typically worked hard to achieve the status they enjoy. That burden gives them a sense of entitlement that, Sah and George Loewenstein have shown, helps them rationalize the acceptance of gifts or inducements. Professionals implicitly reminded of the sacrifices they had made to achieve their positions were twice as willing as those in a control group to accept such rewards; those given explicit arguments to accept them were almost three times as likely to do so.
The Reciprocity Bias: Humans reciprocate. It is built into who we have evolved to be that we recognize a gift and feel obliged. And while receiving a large gift might set off alarm bells (“Am I being bribed?”), small gifts can have their effect subconsciously. Social psychologists have documented gifts exerting that influence without the target’s ever being aware of it.
The Social-Validation Bias: Humans are social animals. We are affected by what our peers believe — and not just when we’re teenagers. This dynamic means that the behavior and attitudes of professionals will be affected in part by the behavior of others. For example, studies show that students in medical schools in which gift-giving had been restricted were more skeptical of marketing messages. Other studies show that initial resistance to marketing messages can fade over time, in part because of inconsistency between the explicit policy and the messages conveyed unofficially.
The Moral-License Bias: Doing good can make you bad. Put differently, the more morally you behave, the more likely you are to cut yourself some slack. That’s the conclusion of the extraordinary work of Dan Ariely and others: We all hew close to what we know is good and steer as far as we can from what we know is bad. But when we’ve behaved well, we feel entitled to deviate. Good behavior is a kind of savings; bad behavior is the withdrawal of that savings from our implicit moral bank.
The consequence of this analysis is not easy for the modern academy to accept, for so much in the funding of academics depends upon denying the implications of this work.
Academics are usually people who have chosen to do what they do not for the money but for the freedom, or the intellectual engagement, or the desire to teach. All of these motives seem far from the motives that guide the corrupt. And yet, in an obvious, psychological way, the academic is the most vulnerable. Not only is he less likely to be experienced in the influence game, but he is also psychologically primed to be most vulnerable.
That is why the independence of researchers must be built into the DNA of the institutions within which they work. For the consequence of failing to account for this vulnerability among researchers is a remarkable growth in public skepticism about what science says.
No doubt some of this skepticism stems from other factors. There’s much we don’t understand, because the attention span necessary for understanding is greater than the attention we, or the media, can devote to the issue. The competing reports about red wine or the healthfulness of fat, for example, produce confusion in most of us and hence skepticism.
But a significant portion of this skepticism comes from a familiar pattern in the research about risks from certain activities or substances. The pattern goes something like this: The substance is used, and it is pervasive; questions are raised about its safety; the industry insists on its safety, questions notwithstanding; after some time, the science supporting that conclusion of safety is drawn into doubt; the dominant view is then inverted, as questions about safety become overwhelming.
This is the pattern evoked by the term “tobacco science.” As many have described, none more effectively than David Michaels in his book Doubt Is Their Product (Oxford University Press, 2008), the effort to sow doubt to protect a threatened substance is an industry of its own. This is the history of much more than tobacco. We don’t do enough to account for this skepticism. And the consequence is significant.
In September 2014, I attended Zeitgeist, an annual conference hosted by Google, drawing together academics and thought leaders from around the world. Jimmy Carter was a speaker, introducing his book about the modern scourge of sex slavery. The story he told was astonishing to the audience assembled. There are more slaves today than at the height of African slavery. The No. 1 airport for transporting sex slaves? Atlanta. This is an evil all could rally to oppose. The audience loved and respected Carter. He had become a hero.
But then, toward the end of his session, a young man rose to ask Carter about GMOs. The man assumed that Carter, like most liberals, would be an opponent. But Carter said GMOs were “one of the best things that [had] happened to the world.” Almost immediately, it seemed, a significant portion of the audience turned against Carter. In their eyes, this former farmer obviously knew nothing about GMOs.
The GMO debate is to liberals what climate science is to conservatives. On any measure, the dominant view among scientists is that climate change is real and that GMOs are not, in their nature, harmful. Yet those dominant views of science are accepted differently, depending on politics.
This difference plays into a comfortable story about how partisans are blind on both sides. It isn’t simple ignorance that drives the refusal to accept what science claims. It is a perception of motivated bias. Conservatives who doubt climate change believe it is a conspiracy among liberals, meant to destroy industries thought dangerous to the environment. Liberals who doubt the safety of GMOs believe the science behind the industry claims is just more “tobacco science.” And indeed, if you build a matrix of GMO science, and distinguish between industry-funded and independently funded research, you’ll find a strong correlation (at least) between industry funding and the results that benefit industry.
I am a believer in science. I have no compunction about tinkering with what nature gave us. And I think we should encourage research into better ways to produce food or avoid disease that are consistent with the values of our societies.
But I am also a believer in the harm from institutional corruption. And I believe that the anti-GMO movement is a consequence of that harm. Whether or not the almost religious skepticism about GMOs is justified in this particular case, it plainly grows out of other cases of corrupt science. The anti-GMO movement is the payback for lax attention to this kind of corruption in the past. And rather than simply ridicule the people who are raising concerns now, we should be much more vigilant about removing any reason they might have for questioning GMO safety.
As universities and research departments seek new sources of revenue, they must remember that these sources come with strings, however carefully they are crafted to seem invisible. These strings pull at us, and if we build our institutions to allow them to attach, they will affect us and our work. This will happen despite the fact that researchers are good people — indeed, we could say, because researchers are good people.
We need to see that, especially among the best of us, we are all still human. We need to recognize the limits of the human brain and build institutions that account for those limits. We need to recognize them and respect them — before they sell the soul of the academy.
Lawrence Lessig is a professor of law and leadership at Harvard Law School and the founder of Creative Commons. His new book is America, Compromised (University of Chicago Press), from which this essay is adapted.