
Can an MIT Computer Learn to Scare You?

By  Nadia Dreid
October 30, 2016
A composite of images produced by MIT’s Nightmare Machine, part of an experiment in which human participants vote on which of the computer-generated faces are frightening, in an effort to teach the machine how to make scary images.
Courtesy of MIT

The threat that machines will act independently of their operators has stoked fears for longer than artificial intelligence has been a concept — but could a computer actually learn to scare us?

Enter the Nightmare Machine. Three researchers at the Massachusetts Institute of Technology are seeking to answer that question, using a deep-learning algorithm to teach a computer to produce images of faces and places that scare people.

Pinar Yanardag, a postdoctoral researcher on the project, notes that luminaries like Elon Musk and Stephen Hawking have acknowledged the potential threat of artificial intelligence. Mr. Hawking expressed doubt over whether artificial intelligence would be the best or the worst thing to happen to humanity, and Mr. Musk likened it to “summoning the demon.”

“We know AI terrifies us in the abstract sense,” Ms. Yanardag said in an email. “But can AI scare us in the immediate, visceral sense?”

It works like this: The computer uses an algorithm to create a “fake” set of faces from real images, then another algorithm to extract the image style from one photo and apply it to another. For example, the algorithm may pick zombie-like features from one image and apply them to a computer-generated face. The result: a contortion that might be called scary.
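
The second stage of that pipeline, transferring a "scary" style from one image onto another, is typically done by matching feature statistics. The snippet below is a toy illustration of that idea, not the MIT team's code: it computes the Gram-matrix style loss commonly used in neural style transfer, on made-up feature maps standing in for a generated face and a zombie reference image.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; these statistics capture 'style'.

    features: array of shape (channels, height, width).
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)       # one row per channel
    return flat @ flat.T / (c * h * w)      # normalized channel correlations

def style_loss(generated, style):
    """Mean squared difference between the two images' Gram matrices."""
    return float(np.mean((gram_matrix(generated) - gram_matrix(style)) ** 2))

rng = np.random.default_rng(0)
face = rng.normal(size=(8, 16, 16))     # stand-in for generated-face features
zombie = rng.normal(size=(8, 16, 16))   # stand-in for 'scary' style features

# A real system would repeatedly update the generated image to shrink this loss,
# pulling its texture statistics toward the scary reference.
print(style_loss(face, zombie) >= 0.0)  # loss is non-negative
```

In a full system the features would come from a pretrained convolutional network rather than random arrays, but the matching objective has the same shape.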

But to learn whether the computer-generated images can scare people, the machine needs human participants. The researchers are asking people who don’t mind a scare to sift through a few of the machine’s scary faces and vote on whether or not they are frightening. The computer learns from the responses which kinds of images are considered scary and which aren’t.
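
The aggregation step can be sketched in a few lines. This is a hypothetical illustration with invented image IDs and votes, not the project's actual pipeline: each image's votes are tallied, and a majority threshold turns them into scary/not-scary labels that a model could then learn from.

```python
from collections import defaultdict

# Hypothetical vote log: (image_id, voted_scary) pairs from participants.
votes = [
    ("face_01", True), ("face_01", True), ("face_01", False),
    ("face_02", False), ("face_02", False),
    ("face_03", True), ("face_03", True), ("face_03", True),
]

def tally(vote_log):
    """Aggregate per-image votes into [scary_count, total_count]."""
    counts = defaultdict(lambda: [0, 0])
    for image_id, scary in vote_log:
        counts[image_id][0] += int(scary)
        counts[image_id][1] += 1
    return counts

def label(counts, threshold=0.5):
    """Mark an image scary when its scary-vote fraction exceeds the threshold."""
    return {img: s / n > threshold for img, (s, n) in counts.items()}

labels = label(tally(votes))
print(labels)  # {'face_01': True, 'face_02': False, 'face_03': True}
```

The resulting labels would serve as the training signal the researchers describe: feedback telling the generator which of its outputs actually frighten people.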

The machine has received over half a million votes, and Manuel Cebrian, a principal researcher on the Nightmare Machine project, said that voters generally agree on which images scare them. “Initial tallies reveal that humans quickly converge on finding some of them very scary, and others not so much,” he said in an email. Eventually, researchers may be able to use the data set provided by participants to make the generated images even more frightening.

‘Delivering a Scare’

Margee Kerr knows a lot about fear. It’s the focus of her studies as a sociologist, and she says teaching a computer to scare people will be tricky. Fear is distinctly personal in many cases, informed by individual experience. And with faces, expressions can be interpreted differently based on one’s culture.

“You’re always going to have people who get really excited and happy when they see an explosion, and people who get really sad when they see a picture of a bunny,” Ms. Kerr said. The best the MIT researchers should hope for is to frighten a large group, but not everyone. “It might hit like 80 percent of the population,” she said, “and that’s what I would be really interested to find out, is how effective it is, how efficient it gets at actually delivering a scare.”

So what makes a face scary? It’s difficult to tease out what’s evolutionary instinct and what’s cultural, Ms. Kerr said. Eyes widened so that the whites are very visible, a scowling mouth, flared nostrils, and bared teeth have been shown to be generally frightening in studies that measured brain activity, she said. But the uncanny valley effect — the idea that people are repulsed by faces that look nearly human but are slightly off — could also be fruitful ground.

Any face or expression that confuses viewers’ ability to interpret what they’re seeing can generally be considered scary, Ms. Kerr said. “Faces that are discordant in some way,” she added. “So that creates this aesthetic dissonance that makes us wonder what’s really going on, because we can’t read their facial expressions confidently.”

That is why people are often scared of clowns, she said, with their happy eyes and downturned mouths.

‘A Supercomputer Psychopath’

The Nightmare Machine team has noticed a heavy response on Twitter, with one user expressing fear at the idea of its use as a tool of torture, while another called the machine “a supercomputer psychopath.” Iyad Rahwan, an associate professor in MIT’s Media Lab and a researcher on the project, said the team understood those fears existed around artificial intelligence and wanted to play into them.

“Technology has always terrified people because it is a catalyst for change,” Mr. Rahwan said in an email. “In the old days, early medical science gave rise to Frankenstein, and the Industrial Revolution gave rise to the Luddites who feared machines taking their jobs. Today, people are still afraid of AI taking over our jobs, but also of Terminator-style robots obliterating humanity.”

But the team’s goal, he said, was to find a fun way to begin to root out “the barriers between human and machine cooperation.”

“Psychological perceptions of what makes humans tick and what makes machines tick,” he said, “are [an] important barrier for such cooperation to emerge.”

If you’d like to see whether the Nightmare Machine gives you a fright, you can check it out here.
