Academic circles are stirred up over a recent Wall Street Journal article that accused scholars of taking Google’s money to study technology’s impact on society. In researching the story, reporters relied on a source’s database, which amassed (probably using Google) a list of papers, workshops, and centers that purportedly had Google’s financial backing. The point was to prove that Google-backed professors seeded regulatory debates with studies to shore up the tech giant’s business interests.
The database is rife with errors and tenuous links. For example, it lists people who never received a dime from Google, and it names graduate students simply because, a few years after attending tech conferences co-sponsored by Google (a frequent sponsor of academic technology events), they published papers about tech and society. The article’s “Gotcha!” angle suggests that private technology support for university research is a problem that must be rooted out. It implies that money will inevitably produce conflicts of interest and pave the way for tech companies to bamboozle the public.
Let’s set aside the fact that a dearth of public funds for higher education and public-interest research in the United States defines the history of science in this country. Let’s also set aside that some professors crying “foul” seem to agree with the Journal’s premise: Academic researchers and tech companies make dubious bedfellows.
But in social studies of media and technology, we can’t learn about public life without closely sifting through the social relationships and exchanges stored by the likes of Google. Researchers and tech companies must share skills and data sets to understand things like the meaning of privacy in the age of Facebook, how clickbait-y news headlines, piped into our filter bubbles, shape our politics, or — as in my research — how many people are paid to do “digital labor,” like reviewing content or fielding customer-service queries.
I sit on the board of Public Responsibility in Medicine and Research, hold an academic faculty appointment, and am a full-time researcher at an industry-based lab — and I believe framing this issue as a conflict of interest misses the point.
Compared with our peers in other countries, researchers in the United States have long been heavily dependent on the private sector to do foundational science. The lack of options for state-sponsored, independent research makes it necessary for biomedical researchers studying, say, a drug’s efficacy to work with the drug’s manufacturer to get the data necessary for evaluation and analysis. But to call this a conflict of interest is to miss an opportunity. It also ignores the reality that research institutions, including universities, play a part in shaping scholarly inquiry. Tech companies, dependent on research communities for innovation, and research communities, dependent on tech-industry data, could hold each other to a new set of expectations that obligates them to collaborate, even more than they do now, to advance the public good.
Conflict-of-interest disclosures, established 20 years ago by biomedicine journals and public-health institutions, made it harder for professors to show their faces at cocktail parties if they were in the back pockets of Big Tobacco. But declaring conflicts of interest didn’t call on scholarly communities to stop bad research from moving ahead. Nor were individual researchers urged to choose the public good over personal profit. The burden was, simply, to disclose financial backing. Arguably, reliance on conflict disclosures led to a rise in paid research leaves for work labeled “for internal use only,” conducted behind the IP-protected firewalls of biomedicine. The precipitous drop in higher-education funding, from the 1980s onward, meant that new Ph.D.s, particularly in cutting-edge research domains like gene therapies, were more likely to receive lucrative job offers from Big Pharma than from anemic universities. This trend gutted our supply of teachers for the next generation of scientists.
Research communities studying technology’s social impact face similar challenges. Their goal is to understand the human condition in a digital age. They can’t do their job without accessing the proprietary data sets of the tech world, now that so many of us shop, debate, and flirt online. Like Big Pharma, technology companies are gobbling up the next generation of scholars needed at universities to train future tech researchers asking social questions. Private-sector technology jobs come with funding and access to tantalizing data sets chronicling people’s digital lives. We can’t blame scholars, particularly early in their careers, for seeking out the best resources and access to this data to do their work.
The question shouldn’t be how to avoid working with tech companies; the question should be how best to ensure that collaborations between tech and social research, like those in pharmacology and biomedicine, benefit not only a company’s interests and scientific inquiry but also the public’s right to respectful, just, and beneficial research. That seems particularly important when the raw materials for our analyses are private citizens’ status updates, clicks, and tweets. As tech rushes headlong into the age of artificial intelligence, and algorithms loom larger than life, don’t we need more tech companies and social researchers working together to ensure that society is well-served by new technologies?
How might researchers and tech companies better assure the public that they will produce studies designed explicitly to benefit the public interest and the private citizens generating data? We could start by calling on tech researchers, whether at universities or at private companies, not only to make public the funders underwriting their work but also to identify the anticipated benefits of the study. This is standard practice in most other social-research domains if the study involves humans or animals.
There are other mechanisms we could put in place, without lifting a regulatory finger: First, university-based social researchers of technology should press their peers at companies to produce data sets for public-interest research. Second, researchers must come to expect of themselves and their peers that they will do research that offers an explicit benefit to both research participants and the public interest. That benefit need not be greater than what a tech company stands to gain from the study’s insights, but it should never be less.
And, lastly, private citizens should be given a clear choice to participate in or opt out of research studies involving their data. This would give them a meaningful way to share their data with researchers in the name of the public good. University and corporate technology research collaborations aren’t going away. The history of conflict disclosures in privately funded academic research is littered with unforeseen consequences for future scholarship and failures to do right by the public. But private-public partnerships don’t have to be a corrupting force that is, by definition, a conflict of public interest. Private citizens, tech companies, and researchers could choose to make these collaborations serve the public good. Otherwise, the public-minded mission that animates much of the social sciences may become untenable. For those who study society and technology, finding a mutually beneficial and publicly accountable path forward is a necessity, not a luxury.
Mary L. Gray is a fellow at Harvard University’s Berkman Klein Center for Internet and Society, an associate professor at Indiana University, and a senior researcher at Microsoft Research.