
Can an MIT Computer Learn to Scare You?

October 30, 2016

A composite of images produced by MIT’s Nightmare Machine, part of an experiment in which human participants vote on which of the computer-generated faces are frightening, in an effort to teach the machine how to make scary images. (Courtesy of MIT)
The threat that machines will act independently of their operators has stoked fears for longer than artificial intelligence has been a concept — but could a computer actually learn to scare us?

Enter the Nightmare Machine. Three researchers at the Massachusetts Institute of Technology are seeking to answer that question, using a deep-learning algorithm to teach a computer to produce images of faces and places that scare people.

Pinar Yanardag, a postdoctoral researcher on the project, notes that luminaries like Elon Musk and Stephen Hawking have acknowledged the potential threat of artificial intelligence. Mr. Hawking has expressed doubt over whether artificial intelligence will prove the best or the worst thing to happen to humanity, and Mr. Musk has likened it to "summoning the demon."

"We know AI terrifies us in the abstract sense," Ms. Yanardag said in an email. "But can AI scare us in the immediate, visceral sense?"

It works like this: The computer uses an algorithm to create a "fake" set of faces from real images, then another algorithm to extract the image style from one photo and apply it to another. For example, the algorithm may pick zombie-like features from one image and apply them to a computer-generated face. The result: a contortion that might be called scary.
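The researchers have not published their exact pipeline, but the "extract the style of one photo and apply it to another" step can be illustrated with a toy sketch. The version below, assumed for illustration only, matches each color channel's mean and standard deviation in the content image to those of the style image, the statistical idea behind AdaIN-style transfer methods, using NumPy arrays in place of real photos:

```python
import numpy as np

def transfer_style(content, style, eps=1e-8):
    """Toy style transfer: re-normalize each color channel of the
    content image so its mean and standard deviation match those of
    the style image. Inputs are float arrays of shape (H, W, 3)
    with values in [0, 1]. (Illustrative sketch, not MIT's method.)"""
    out = np.empty_like(content, dtype=float)
    for c in range(content.shape[-1]):
        cc, sc = content[..., c], style[..., c]
        # Shift and scale the content channel's statistics
        # toward the style channel's statistics.
        out[..., c] = (cc - cc.mean()) / (cc.std() + eps) * sc.std() + sc.mean()
    # Keep the result a valid image.
    return np.clip(out, 0.0, 1.0)
```

A real system would match deep-feature statistics from a trained network rather than raw pixel channels, but the shape of the operation, imposing one image's statistics on another's content, is the same.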

But to learn whether the computer-generated images can scare people, the machine needs human participants. The researchers are asking people who don’t mind a scare to sift through a few of the machine’s faces and vote on whether they are frightening. From the responses, the computer learns which kinds of images people consider scary and which they don’t.

The machine has received over half a million votes, and Manuel Cebrian, a principal researcher on the Nightmare Machine project, said that voters generally agree on which images scare them. "Initial tallies reveal that humans quickly converge on finding some of them very scary, and others not so much," he said in an email. Eventually, researchers may be able to use the data set provided by participants to make the generated images even more frightening.
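The article does not say how the team aggregates those half-million votes; one minimal way to turn such binary votes into a per-image training signal, sketched here as an assumption, is a Laplace-smoothed "scariness" score, so images with only a handful of votes are not ranked at the extremes:

```python
from collections import Counter

def scariness_scores(votes, smoothing=1.0):
    """Aggregate binary votes into a smoothed scariness estimate
    per image. `votes` is an iterable of (image_id, is_scary)
    pairs; the result maps each image_id to the Laplace-smoothed
    fraction of 'scary' votes. (Illustrative, not MIT's method.)"""
    scary, total = Counter(), Counter()
    for image_id, is_scary in votes:
        total[image_id] += 1
        if is_scary:
            scary[image_id] += 1
    # (scary + s) / (total + 2s): pulls low-vote images toward 0.5.
    return {i: (scary[i] + smoothing) / (total[i] + 2 * smoothing)
            for i in total}
```

Scores like these could then label the generated images for further training, steering the generator toward whatever the crowd converged on finding scary.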

‘Delivering a Scare’

Margee Kerr knows a lot about fear. It’s the focus of her studies as a sociologist, and she says teaching a computer to scare people will be tricky. Fear is distinctly personal in many cases, informed by individual experience. And with faces, expressions can be interpreted differently based on one’s culture.

"You’re always going to have people who get really excited and happy when they see an explosion, and people who get really sad when they see a picture of a bunny," Ms. Kerr said. The best the MIT researchers should hope for is to frighten a large group, but not everyone. "It might hit like 80 percent of the population," she said, "and that’s what I would be really interested to find out, is how effective it is, how efficient it gets at actually delivering a scare."

So what makes a face scary? It’s difficult to tease out what’s evolutionary instinct and what’s cultural, Ms. Kerr said. Eyes widened so that the whites are very visible, a scowling mouth, flared nostrils, and bared teeth have been shown to be generally frightening in studies that measured brain activity, she said. But the uncanny valley effect — the idea that people are repulsed by faces that look nearly human but are slightly off — could also be fruitful ground.

Any face or expression that confuses viewers’ ability to interpret what they’re seeing can generally be considered scary, Ms. Kerr said. "Faces that are discordant in some way," she added. "So that creates this aesthetic dissonance that makes us wonder what’s really going on, because we can’t read their facial expressions confidently."

That is why people are often scared of clowns, she said, with their happy eyes and downturned mouths.

‘A Supercomputer Psychopath’

The Nightmare Machine team has noticed a heavy response on Twitter, with one user expressing fear at the idea of its use as a tool of torture, while another called the machine "a supercomputer psychopath." Iyad Rahwan, an associate professor in MIT’s Media Lab and a researcher on the project, said the team understood those fears existed around artificial intelligence and wanted to play into them.

"Technology has always terrified people because it is a catalyst for change," Mr. Rahwan said in an email. "In the old days, early medical science gave rise to Frankenstein, and the Industrial Revolution gave rise to the Luddites who feared machines taking their jobs. Today, people are still afraid of AI taking over our jobs, but also of Terminator-style robots obliterating humanity."

But the team’s goal, he said, was to find a fun way to begin to root out "the barriers between human and machine cooperation."

"Psychological perceptions of what makes humans tick and what makes machines tick," he said, "are important barrier for such cooperation to emerge."

If you’d like to see whether the Nightmare Machine gives you a fright, you can check it out here.