Facebook Catches Two Chatbots Speaking Their Own Language
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
These are presumably the last words — for now at least — of Bob and Alice, two Facebook research chatbots instructed to negotiate with each other over some balls. If you can make sense of the exchange, you’re doing better than Facebook’s engineers, who shut the bots down. “Our interest was having bots who could talk to people,” Facebook’s Mike Lewis told FastCoDesign.
Bob’s and Alice’s assignment (FACEBOOK)
This kind of thing is, of course, inevitable, even if it is alarming. Who wants machines talking behind our backs, or worse, like the parents of a toddler spelling out words, talking right in front of us in a way we can’t comprehend? It’s high on the list of concerns expressed by people, like Elon Musk, who’ve been shouting loud warnings about the dangers inherent in the development of AI.
In this case, it’s not an entire language so much as a machine-friendly shorthand, and it’s something being seen over and over again with AI. Dhruv Batra, a visiting researcher at Facebook AI Research (FAIR), also speaking with FastCoDesign, explained: “Agents will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
It’s not really chatbots we need to be concerned about right now, since the major companies working to develop them — Facebook, Apple, Google, Amazon, and Microsoft — are currently focused on bots that can communicate clearly with humans, and as Batra puts it, “It’s important to remember, there aren’t bilingual speakers of AI and human languages.”
Ironically, the Google AI developed for the company’s Translate feature, Google Neural Machine Translation (GNMT), has reportedly developed its own interlingua — an internal representation that holds the meanings it needs to carry from one human tongue to another.
Google Translate moves meaning from one human language to the interlingua, and then translates the interlingua into the target language.
(GOOGLE)
It’s the less user-facing AI whose choice of language should concern us: It may ultimately be capable of developing language that we not only don’t know, but that may be beyond our capabilities. We break down meanings into words or short combinations of words. “The reason why humans have this idea of decomposition,” says Batra, “breaking ideas into simpler concepts, it’s because we have a limit to cognition.” High-powered computers seeking efficiency in their processes, on the other hand, may be able to use words or phrases as “tokens,” stand-ins for highly complex meanings. We’d be completely lost trying to keep up.
Even so, it may be in programmers’ interest to go ahead and let AI communicate in a manner of its own choosing, since presumably it will be able to find the shortest distance between two points, if you will, better than we can.
So far, when we’ve eavesdropped on AIs talking, what we’ve heard has been a lot more mundane than scary, as with Bob and Alice’s shorthand. (They did successfully complete some of their negotiations over balls, hats, and books, by the way.)
There may be time still to cram the genie back in the bottle. Whether or not we should is what’s keeping people up at night.