Facebook and Falsehood

By Henry Farrell
January 15, 2017

After the election, many people blamed Facebook for spreading partisan — and largely pro-Trump — “fake news,” like Pope Francis’s endorsement of Trump, or Hillary Clinton’s secret life-threatening illness. The company was assailed for prioritizing user “engagement,” meaning that its algorithms probably favored juicy fake news over other kinds of stories. Those algorithms had taken on greater prominence since August, when Facebook fired its small team of human beings who curated its “trending” news section, following conservative complaints that it was biased against the right.

Initially, Facebook denied that fake news could have seriously affected the election. But recently it announced that it was taking action. The social-media giant said it would work with fact-checking organizations such as Snopes and PolitiFact to identify problematic news stories and flag them as disputed, so that readers know they are questionable. It will also penalize suspect stories so that they are less likely to appear in people’s news feeds.

In each instance — the decision to remove human editors in August and the recent decision to use independent fact-checkers — Facebook has said that it cannot be an arbiter of truth. It wants to portray itself as a simple service that allows people and businesses to network and communicate, imposing only minimal controls over what they actually say to one another. This means that it has to outsource its judgments on truth — either by relying on “machine learning” or other technical approaches that might identify false information, or by turning to users and outside authorities.

Both approaches try to deal with fake news without addressing politics. Neither is likely to work.

The great strength and the great weakness of Silicon Valley is its propensity to redefine social questions as engineering problems. In a series of essays, Tim O’Reilly, the head of O’Reilly Media, argues that Facebook and similar organizations need to avoid individual judgments about the content of web pages and instead create algorithms that will not only select engaging material but also winnow out the false information from the true. Google has created algorithms that can comb through metadata for “signals” suggesting that pages are likely to have valuable content, without ever having to understand the intrinsic content of the page. O’Reilly argues that one can do the same thing for truth. Facebook’s algorithms would identify websites that repeatedly spread fake news and penalize their stories. This would define fake news as an engineering problem, in which one simply had to discover which signals were associated with true stories and give them priority.
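
To make that concrete, here is a minimal sketch, in Python, of what signal-based source scoring might look like. It illustrates the general idea only; it is not anything Facebook or O’Reilly has actually built, and every name, weight, and threshold in it is an invented assumption. A domain’s reputation falls as more of its stories are flagged as false, and that reputation then scales how prominently its new stories are ranked.

```python
from collections import defaultdict

# Illustrative sketch only: a toy source-reputation score that penalizes
# domains whose stories have previously been flagged as false. The class
# name, weights, and thresholds are invented assumptions, not a real
# ranking system.

class SourceReputation:
    def __init__(self):
        self.flags = defaultdict(int)  # domain -> stories flagged as false
        self.seen = defaultdict(int)   # domain -> stories observed in total

    def record(self, domain, flagged_false):
        self.seen[domain] += 1
        if flagged_false:
            self.flags[domain] += 1

    def score(self, domain):
        # Reputation starts at 1.0 and falls toward 0 as the flagged
        # fraction of a domain's stories grows.
        total = self.seen[domain]
        if total == 0:
            return 1.0  # unknown domains get a neutral score
        flagged_fraction = self.flags[domain] / total
        return max(0.0, 1.0 - 2.0 * flagged_fraction)

    def rank_weight(self, domain, engagement):
        # Engagement-driven ranking, scaled down by the source's reputation.
        return engagement * self.score(domain)


rep = SourceReputation()
rep.record("example-hoax.site", flagged_false=True)
rep.record("example-hoax.site", flagged_false=True)
rep.record("example-news.org", flagged_false=False)

print(rep.rank_weight("example-hoax.site", engagement=1000))  # 0.0: heavily penalized
print(rep.rank_weight("example-news.org", engagement=1000))   # 1000.0: unaffected
```

The appeal of this approach is that nothing in it requires judging the content of any individual story; the system only aggregates prior flags into a reputation signal.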

But as Zeynep Tufekci, Cathy O’Neil, and others have pointed out, algorithms are hardly neutral. In Maciej Cegłowski’s pungent description, machine-learning algorithms provide a kind of “money laundering for bias.” This laundering is likely to be both detectable and controversial. Imagine, for example, if an unsupervised machine-learning process determined that conservative political orientation provided a strong signal that the news on a particular website was untrustworthy, and started penalizing conservative sites accordingly. The purported neutrality of the algorithm wouldn’t count for much in the ensuing uproar.
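
A toy illustration, with invented numbers, of how that laundering works: suppose the human-supplied training labels flag stories from one political camp as false more often, and the model never sees political orientation at all, only an innocuous proxy such as a vocabulary choice that happens to correlate with it. The learned rule reproduces the skew anyway, and nothing in the algorithm’s mechanics reveals where the skew came from.

```python
import random

random.seed(0)

# Toy illustration of "money laundering for bias": the scoring rule never
# sees political orientation, only a proxy feature, yet it reproduces the
# skew in the human-supplied "false" labels. All numbers are invented.

def make_story(orientation):
    # The proxy feature correlates with orientation but is not orientation.
    uses_proxy_word = random.random() < (0.9 if orientation == "right" else 0.1)
    # Biased labels: raters flag right-leaning stories as false more often.
    labeled_false = random.random() < (0.5 if orientation == "right" else 0.1)
    return uses_proxy_word, labeled_false

training = [make_story("right") for _ in range(500)] + \
           [make_story("left") for _ in range(500)]

# "Training": estimate P(labeled false | proxy word) from the labels alone.
with_proxy = [label for proxy, label in training if proxy]
without_proxy = [label for proxy, label in training if not proxy]

print(f"P(flagged | proxy word present) = {sum(with_proxy) / len(with_proxy):.2f}")
print(f"P(flagged | proxy word absent)  = {sum(without_proxy) / len(without_proxy):.2f}")

# A new, perfectly accurate story that happens to use the proxy word now
# inherits the penalty: the bias in the labels has been laundered into an
# apparently neutral feature weight.
```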

Even apart from the controversy, it isn’t at all clear that algorithms will be very good at determining the truth in politically muddy situations. They may spot the blatant frauds, but, as Gilad Lotan has pointed out, obviously fake news isn’t nearly as tricky to deal with as biased news. For such stories, there often isn’t any genuinely “neutral” version of the truth against which the algorithm’s results can be checked. This means that it will be hard to train the algorithm — at best, its results can be checked against human-derived judgments that are themselves likely to be highly imperfect and politically biased in ways that are difficult to correct for.

But while the turn away from algorithms might be seen as progress, the shift toward independent fact-checkers is probably not the right answer. Trying to avoid any whiff of politics, Facebook has sought to minimize its own role, relying on third-party fact-checkers. But the company’s proposed strategy will work only if its outside arbiters are genuinely seen as neutral.

That’s extremely unlikely. In politics, different sides in a debate cling firmly to different truths. This doesn’t mean that both sides are equally wrong when, for example, conservatives reject well-founded scientific conclusions about global warming. It does mean that the truth claims underlying many important political debates cannot be settled a priori, and that even when they can be, it’s going to be impossible to avoid political contestation. People — especially the people most likely to be aggrieved by these arbiters’ decisions — don’t believe in independent referees anymore. Instead, they’re likely to look at these organizations as yet another crew of media elites, telling ordinary people what they should or should not believe.

So if neither approach will work, what will?

If businesses, public intellectuals, and academics want to start addressing the problem, they are going to have to start thinking in political terms, just as climate scientists have had to engage politically in the debates over global warming. If Facebook and other companies are going to act effectively against fake news, they need to take a directly political stance, explicitly acknowledging that they have a responsibility to prevent the spread of obvious falsehoods, while continuing to allow their users to express and argue for a variety of understandings of the truth that are not obviously incompatible with empirical facts.

This would require Facebook to take the hitherto unthinkable step of taking an editorial position rather than presenting its judgments as the outcome of impersonal processes. It would involve hiring human beings as editors, supplementing their judgments with automated processes as needed, defending these judgments where appropriate, and building institutionalized processes for appeal when the outcomes are questionable.

This likely means that Facebook will become embroiled in messy political debates. However, it will become embroiled in these debates no matter what. Facebook needs to start thinking systematically about how to engage with its political responsibilities, rather than continuing to pretend that it doesn’t have any.

Henry Farrell is an associate professor of political science and international affairs at George Washington University.

A version of this article appeared in the January 20, 2017, issue.
Read other items in the Post-Truth Issue package.