The Conversation

Opinion and ideas.

Lessons Learned From the Facebook Study

By Duncan J. Watts July 9, 2014

By now, anyone who is remotely interested knows that the Facebook data-science team, in collaboration with some researchers at Cornell University, recently published a paper reporting “experimental evidence of massive-scale emotional contagion through social networks.” If you’ve heard about this study, you probably also know that many people are upset about it. Even the journal that published it, the Proceedings of the National Academy of Sciences, has issued an “editorial expression of concern” about potential violations of ethical standards.

Much of the concern has focused on the issue of informed consent, and whether or not the Facebook terms-of-service agreement constitutes such a thing. That focus is understandable, but it has distracted attention from the real problem: the failure of ethical-review procedures to keep up with technology.

Consider what the National Science Foundation has to say about informed consent in human-subjects research:

The fundamental principle of human subjects protection is that people should not (in most cases) be involved in research without their informed consent, and that subjects should not incur increased risk of harm from their research involvement, beyond the normal risks inherent in everyday life.

So yes, informed consent is always preferable, and in many cases mandatory, but not in all cases. What are those cases? Once again, according to the NSF:

An IRB may … waive the requirements to obtain informed consent provided the IRB finds and documents that: (1) The research involves no more than minimal risk to the subjects; (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

Reasonable people can differ on what constitutes “minimal” risk and whether or not the Facebook study would have passed such a test. But it’s worth reading about what the researchers actually did, rather than relying on sensationalized media summaries. In fact, they did not “manipulate emotions” in any direct sense at all. Rather, they scored users’ posts for emotional content on the basis of their use of words like “great” (positive) or “awful” (negative), and then randomly prevented some of those posts from showing up on friends’ news feeds. No actual content was altered, and users could always see all posts by visiting their friends’ pages. Given that Facebook already makes innumerable decisions every day about what content is posted to news feeds (only a fraction of what your friends post shows up), the manipulation applied by the researchers was relatively tiny—well within “the normal risks inherent in everyday life.” (The effects, also measured by word counts, were even smaller—roughly 1 percent—but that’s another matter.)
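
To make the design concrete, here is a minimal sketch, in Python, of the two-step procedure described above: posts are scored by counting emotion words, and some emotionally charged posts are randomly withheld from a friend’s news feed. The word lists, function names, and omission probability are illustrative assumptions, not the study’s actual parameters (the study itself used a standard word-count dictionary).

import random

# Stand-in word lists for illustration only; the real study used a
# much larger standard dictionary of emotion words.
POSITIVE = {"great", "happy", "wonderful"}
NEGATIVE = {"awful", "sad", "terrible"}

def emotion_score(post: str) -> int:
    # Count positive words minus negative words in the post.
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filter_feed(posts: list[str], suppress_positive: bool, omit_prob: float = 0.1) -> list[str]:
    # Randomly withhold some emotionally charged posts from one user's feed.
    # Nothing is altered or deleted: an omitted post remains visible on the
    # friend's own page; it simply does not surface in this feed.
    kept = []
    for post in posts:
        score = emotion_score(post)
        targeted = score > 0 if suppress_positive else score < 0
        if targeted and random.random() < omit_prob:
            continue  # withhold this post from the feed
        kept.append(post)
    return kept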

There’s also a strong argument to be made that the research could not practicably be carried out without the waiver. Thoughtful commenters have suggested that Facebook could have recruited a separate “opt in” research pool of users who were required to read and sign a special terms-of-use agreement. Although this approach is useful for some study designs (e.g., the kind of “virtual lab” experiments that my colleagues and I run), it has at least two problems for the kind of study Facebook conducted. First, any such pool would be highly self-selected, potentially biasing the results; and second, unless a separate pool was recruited with its own customized agreement for every study, users might still object that they didn’t understand the implications of what they were agreeing to. Debriefing 700,000 subjects, meanwhile, might well have caused more confusion and consternation than it would have averted.

For all these reasons, the study—had it been subject to institutional review—very likely would have been approved, without the requirement of informed consent. But was it subject to any such review at all? And if not, should it have been?

This is where things get murky. Technically, the Cornell researchers examined only “secondary data,” meaning anonymized data from a study that was conducted by another party (Facebook), and hence were not required to submit to IRB approval. Also technically, Facebook is not required to have its research approved by an IRB (a requirement that applies only to federally funded institutions). So technically, no individual did anything wrong. Nevertheless, the absence of clear procedures of ethical review allowed everyone involved to assume either that no action was required or that, if it was, someone else had taken care of it.

And that is the real ethical issue at stake here. It is not that the study itself was unethical, but rather that no one involved was required to address the ethical implications before embarking on it.

These implications are not always clear-cut. If, say, the researchers had proposed seeding Facebook users’ news feeds with fictitious stories rather than simply adjusting the existing filter, that design might not pass an ethical test—it would really depend on the details. Nor can researchers be relied upon to evaluate the ethical implications of their own work. Indeed, the history of human-subject regulation is littered with socially valuable psychology experiments—Milgram’s shock experiments, the Stanford Prison Experiment—that the researchers themselves felt were unproblematic but that today we regard as unethical.

Partly in response to those early missteps, today we have pretty good procedures for reviewing and approving psychology experiments done in university labs. We also have a pretty clear idea of how to handle survey research or ethnographic studies. But research done on Facebook doesn’t fit neatly into any of those categories. It’s kind of like a lab, in the control that it affords researchers, but it’s also kind of like a survey, in the remote, hands-off relationship between researcher and subject. It’s even a bit like an ethnographic study, in that it allows researchers to observe interactions among subjects in their own environment.

The benefits are that it’s far more naturalistic than a traditional lab, experiments can be run on much larger scales, and much richer data can be collected. Potentially, Facebook and other web platforms—including Twitter and Amazon, but also email, search, and media services—can shed new light on many important questions of social science, such as the nature of human cooperation and conflict, the dynamics of public-opinion formation, and the relationship between organizational structure and performance.

But, as in other areas of life, technology is opening up exciting capabilities faster than our institutions for regulating those activities can adapt. I submitted my first IRB proposal for web-based social science 14 years ago, and since then I have had experience with review procedures both at Columbia University and in corporate research labs (first at Yahoo! Research, where we implemented an IRB-like process, and now at Microsoft). Although progress has been made over that time, many university IRBs still have little experience with the mechanics of web-based research. Meanwhile, researchers in private companies, who do understand the mechanics, typically don’t receive formal training in human-subject research. Finally, it doesn’t help that most web platforms blur the boundary between a research site and a commercial product—domains that are currently regulated by different federal agencies.

What we need is an ethics-review process for human-subject research designed explicitly for web-based research, in a way that works across the regulatory and institutional boundaries separating universities and companies. For the past two years, my colleagues at Microsoft Research have been designing precisely such a system, which is to be rolled out shortly.

It is still a work in progress, and many details are liable to change as we learn what works and what doesn’t, but the core principle is one of peer review. Although we have an ethics board composed of experienced researchers (including me), the idea is not to have every proposal submitted to the board for review—a recipe for bottlenecks and frustration. Rather, it is to force researchers to engage in structured, critical discussions with educated peers, where everyone involved will be accountable for the outcome and hence will have strong incentives to take the review seriously. Unproblematic designs will be approved via an expedited process, while red flags will provoke a full review—a two-tier system modeled on existing IRBs.
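
To illustrate how such two-tier routing might work, here is a hypothetical Python sketch; the red-flag categories, the sign-off threshold, and all names are invented for illustration and are not details of our actual system.

from dataclasses import dataclass, field

# Hypothetical red-flag categories that would escalate a proposal to the
# full ethics board rather than the expedited track.
RED_FLAGS = {"deception", "sensitive_population", "more_than_minimal_risk"}

@dataclass
class Proposal:
    title: str
    flags: set = field(default_factory=set)  # concerns raised during peer discussion
    peer_signoffs: int = 0                   # peers who reviewed and share accountability

def triage(p: Proposal, required_signoffs: int = 2) -> str:
    # Every proposal first goes through structured peer discussion.
    if p.peer_signoffs < required_signoffs:
        return "needs structured peer review"
    # Any red flag escalates to the full board; the rest are expedited.
    if p.flags & RED_FLAGS:
        return "full board review"
    return "expedited approval"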

Aside from its inherent scalability, the peer-review approach also has the benefit of involving the entire research community in discussions about ethics. Rather than placing the burden of review on a small committee of experts, everyone will have to undergo some basic training and consider the ethical implications of their research. The goal is to create an educated community that, in subjecting all cases to diverse viewpoints, lets fewer errors slip through. And because the process is designed to run continuously, insights arising from novel cases will diffuse quickly.

Lest this picture sound utopian, let me add that not everyone likes the idea of ethical peer review, or even the idea of institutional review of any kind. Even among those in favor, different people have different ideas of what is acceptable and what isn’t, and they all have strong opinions. I expect that I’ll be having many arguments with my colleagues as we roll this out, and none of us will entirely get our way. In fact, that’s sort of the point.

I’m hopeful that our peer-based approach to ethical review will become a model for industry and academic research. No doubt other approaches will be proposed—indeed, some already have. Regardless of which model wins out, if we have learned one lesson from this latest controversy, it should be that all human-subject research, whether conducted in companies or at universities, whether online or offline, whether “massive scale” or not, should be subject to ethical review. The public trust in social science is at stake.

Duncan J. Watts, a principal researcher at Microsoft Research, has been doing web-based social science research for 14 years, at Columbia University, Yahoo! Research, and Microsoft.
