I returned from Konstanz to find a whole slew of newspapers, websites, and news magazines had revived a language technology story from two months ago (Adrienne LaFrance discussed it in The Atlantic in June). Facebook, they reported, had been trying to get two chatbots (“Bob” and “Alice”) in an “adversarial network” to learn negotiation by reading a stash of transcribed negotiations between humans and imitating them. But as the chatbots purported to negotiate over the pricing of balls, hats, and books, their conversation started looking like this:
Bob: i can i i everything else . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . .
Alice: balls have a ball to me to me to me to me to me to me to me
The experiment had to be terminated because the bots had evolved a new language that the scientists couldn’t understand, the journalists wrote excitedly. Newsweek said that the properties of a negotiation bot made for
a potential monster: a bot that can cut deals with no empathy for people, says whatever it takes to get what it wants, hacks language so no one is sure what it’s communicating and can’t be distinguished from a human being. If we’re not careful, a bot like that could rule the world.
Deliberate alarmist mendacity? Or journalists as gullible as babies? I guess the charitable thing is to assume the latter.
The stuff about a secret language is hogwash. Some discussions on Language Log months ago are relevant. People had discovered that typing meaningless repetitive sequences into Google Translate could lead to weird and apparently random outputs.
Smut Clyde at Riddled fed strings of the Japanese hiragana symbol me (め) to Google Translate to be rendered into English. Increasing numbers of repetitions produced weirder and weirder results (see table). Similar experiments with lots of other languages yielded similar results. Repeatedly typing the Thai character sequence ไๅ yielded “Are you” (3 repetitions), “This is it” (9), “This is how it is supposed to be” (anything between 18 and 24), “This is how you will have this is that it is” (27), or “This is how it will be as it is that we have made it possible to make this possible to be the way it is” (48). It seems quite wrong to say “and so on” here; there is no pattern!
Nor was the effect specific to Asian languages. Mark Liberman exhibited sequences of a single repeated letter translated from Hawaiian into French, Spanish, German, and English as random nonsense (see his annotation to Smut Clyde’s comment here).
The fun was short-lived: By early May, Google researchers had patched their algorithms to produce less entertaining output. Most long meaningless repetitive sequences now translate as themselves. But Language Log stockpiled many screenshots of the strange behaviors I have mentioned.
What was going on is explained in part by Mark Liberman here, and in more technical detail by Andrej Karpathy here. Complex computer programs trained to “learn” patterns found in huge bodies of data, and to feed back results about their own performance recursively, will produce strange and apparently random behavior if you put them in a situation where no sensible output is suggested by the data on which they were trained. These babblings mean absolutely nothing.
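The point can be illustrated with a deliberately crude toy: a word-level bigram model decoded greedily (always emitting the single most likely next word) falls straight into a repeating loop, because it is imitating surface patterns of word succession with no meaning behind them. This is only a sketch of the general failure mode, not a model of what Facebook or Google actually built (those were neural networks); the tiny corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training data: one short utterance, standing in for the
# "huge bodies of data" the real systems learn from.
TRAIN = "balls are worth zero to me to me".split()

def train_bigrams(words):
    """Count which word follows which in the training data."""
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def greedy_continue(model, word, n=10):
    """Always emit the single most likely next word.

    With a low-order model this quickly enters a cycle: the model is
    reproducing patterns of succession, not expressing anything.
    """
    out = []
    for _ in range(n):
        candidates = model[word].most_common(1)
        if not candidates:  # no training evidence at all: nothing to say
            break
        word = candidates[0][0]
        out.append(word)
    return " ".join(out)

model = train_bigrams(TRAIN)
print(greedy_continue(model, "balls"))
# prints "are worth zero to me to me to me to": a plausible start,
# then the degenerate "to me to me" loop, with no one "inventing a
# language" anywhere.
```

The output is an echo of the chatbot transcripts above precisely because both arise the same way: a statistical pattern with nowhere sensible to go repeats itself.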
The chaotic performance of a program when totally befuddled by unfamiliar inputs is compatible with its being able to produce useful results under more normal conditions. Google Translate makes useful guesses at translations a lot of the time. When its “learning” from genuine paired translations gives it clues as to what English sequence of words might correspond to a given Japanese sequence, it can be very helpful — despite the fact that it doesn’t look up the meaning of even a single word.
The Facebook chatbots, according to their keepers, did appear to be simulating some kind of negotiation over pricing of balls, hats, and books. But their inability to compose well-formed sequences of English words revealed their utter cluelessness in that regard.
Dhruv Batra, a Facebook researcher, said to Fast Company (it was credulously transcribed by The Daily Telegraph): “There was no reward to sticking to English language.” How could they possibly stick to English? That would necessitate a fully implemented grammatical competence. Thousands of researchers all over the world have been trying for decades to encapsulate in computer-readable form what “sticking to English language” would imply. We are not there yet.
The chatbots were neither departing from one language nor inventing another. They weren’t talking to each other at all. They were just flailing around with — though even this is anthropomorphizing — no idea of what their masters wanted them to do.