Ah, the media. By definition the watchdogs of society. But sometimes things go a little awry, especially when the subject is complex, next-level stuff.
Reports came out recently that Facebook's AI chatbot experiment yielded a seemingly remarkable development: the bots had created a language humans couldn't understand. News outlets immediately reported that Facebook panicked and shut the whole program down as a result, implying that artificial intelligence has finally caught up with us and will eventually edge us out into oblivion, taking over the world once and for all. Machines, machines, machines. They've finally won!
Except they didn't. Not even remotely close.
Facebook's Chatbot Experiment
Facebook has these so-called chatbots. They live inside Messenger acting as "assistants," and they seem a lot more approachable than typical virtual assistants because the way they respond to queries makes it feel as if a human were actually on the other end. As a result, the interaction plays out much like a typical chat with a friend.
The New York Times chatbot, for instance, tells you the latest news if you ask it via Messenger. There's also a chatbot for games, one that allows you to buy stuff, and another that can display flight details. The list goes on, but you get the idea.
It all sounds innovative enough, of course, but Facebook didn't stop there. To push the envelope further, Facebook reported in June that it was attempting to teach chatbots how to negotiate with humans, for several reasons: if chatbots learn how to negotiate, they could help users land the best deals, or at the very least, they could sound more naturally human. It's a little creepy, sure, but also a straightforward enough concept.
Facebook AI Chatbot: What Really Happened
So some of the bots Facebook experimented with produced, well, garbled sentences, as Wired reports. These sentences were gibberish at best: unintelligible, unsurprising, and certainly not a cause for concern. Nobody at Facebook's research lab "panicked," and the company didn't shut the whole thing down.
The media, however, homed in on the "fear" element of the whole thing, though there was actually none. Coverage of AI often falls into this trap, since the subject is highly complex and carries varied implications for how the world will eventually handle war, criminal justice, and many other matters once AI reaches a significant level of capability.
But for the record, here's what really happened: the researchers tried to develop AI that could negotiate with humans, as mentioned above. The lab started with a small-scale game in which two players were asked to divide a set of objects between themselves.
Facebook's technical paper explains how this works in detail, but in short: the bots were first taught negotiation dialogue from examples of human conversation. Then they were allowed to use trial and error, more formally called reinforcement learning, to actually do the negotiating. But when two bots that both employed reinforcement learning played each other, they produced unintelligible sentences. The output was still English, just broken and nonsensical.
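To make the setup concrete, here's a minimal sketch of that kind of object-division game in Python. The item counts, valuations, and the accept-or-reject rule here are invented for illustration; they are not Facebook's actual implementation, which trained neural networks over full dialogue rather than random proposals.

```python
import random

ITEMS = {"book": 3, "hat": 2, "ball": 1}  # items on the table (type: count)

def random_values():
    # Each agent privately values the item types. The values are hidden
    # from the other agent, which is what makes negotiation non-trivial.
    return {item: random.randint(0, 5) for item in ITEMS}

def propose_split():
    # A proposal states how many of each item the proposer keeps;
    # the rest goes to the other agent. Here it's chosen at random.
    return {item: random.randint(0, count) for item, count in ITEMS.items()}

def play_round(values_a, values_b):
    """One simplified exchange: A proposes, B accepts or rejects."""
    proposal = propose_split()
    b_share = {i: ITEMS[i] - proposal[i] for i in ITEMS}
    b_value = sum(values_b[i] * b_share[i] for i in ITEMS)
    b_total = sum(values_b[i] * ITEMS[i] for i in ITEMS)
    # B accepts only if its share is worth at least half of what it
    # values the whole table at; otherwise nobody gets anything.
    if b_value >= b_total / 2:
        a_value = sum(values_a[i] * proposal[i] for i in ITEMS)
        return a_value, b_value
    return 0, 0  # no deal: this is the incentive to compromise

rewards = play_round(random_values(), random_values())
print("rewards (A, B):", rewards)
```

In the real experiment, rewards like these were used to fine-tune a neural dialogue model, rather than to score a random proposer.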
"We found that updating the parameters of both agents led to divergence from human language," as Facebook states in the paper.
That gibberish was exactly what spawned the headlines about how AI has learned its own language and is in the initial stages of imposing dominance upon human beings.
But in fact, Facebook's mishap actually demonstrates how underdeveloped AI still is, so suffice it to say that reports got the story backwards with headlines that stoked fear. In truth, AI is nowhere near there yet. By the looks of things, Facebook's own experiments echo the current limitations of AI: it can only do what it's programmed to do.
"The blind literalness of current machine learning systems constrains their usefulness and power," as Wired best puts it.
So long story short: AI isn't taking over our world anytime soon. The machines aren't secretly communicating right now. What happened at Facebook's lab was closer to a clue to how imperfect AI still is than to the incipient hints of total world domination by machines.