The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases look like gibberish to humans but carry meaning when interpreted by the AI “agents” themselves.
Negotiating in a new language
As Fast Co. Design reports, Facebook’s researchers recently noticed that one of the company’s new AI systems had given up on English. The advanced system is capable of negotiating with other AI agents to reach agreements on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete the negotiation. Bob started by saying “I can i i everything else,” to which Alice responded “balls have zero to me to me to me…” The rest of the conversation was formed from variations of these sentences.
While it appears to be nonsense, the repetition of tokens like “i” and “to me” reflects how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. Interpreted this way, the phrases appear more logical than a comparable English offer such as “I’ll have three and you have everything else.”
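To make the idea concrete, here is a toy Python sketch of how repetition alone can carry a quantity. It is purely illustrative; the function names and the exact phrasing are invented for this example and are a simplification of the bots’ actual shorthand.

```python
# Toy illustration (not Facebook's actual system): a shorthand in which
# repeating a token encodes a quantity, roughly as the researchers
# interpreted Bob's "i i can i i i everything else".

def encode_offer(items_for_me: int) -> str:
    """Encode 'I take this many items, you take the rest' by repetition."""
    return " ".join(["i"] * items_for_me) + " can everything else"

def decode_offer(message: str) -> int:
    """Recover the claimed quantity by counting the repeated 'i' tokens."""
    return sum(1 for token in message.split() if token == "i")

offer = encode_offer(3)
print(offer)                # "i i i can everything else"
print(decode_offer(offer))  # 3 -- the count is carried by repetition alone
```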
English lacks a “reward”
The AI apparently realised that the rich expression of English wasn’t required for the scenario. Modern AIs operate on a “reward” principle: they expect a given course of action to yield a “benefit.” In this instance, there was no reward for continuing to use English, so the agents built a more efficient solution instead.
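The sketch below illustrates that reward principle in miniature. The item names and values are hypothetical, and the negotiation itself is stubbed out; the point is only that when the score depends on the deal reached rather than the words used, a terse shorthand earns exactly as much as polished English.

```python
# Toy sketch of the "reward" idea (assumed setup, not Facebook's code):
# the reward depends only on the items an agent ends up with, so nothing
# penalises an agent for drifting away from readable English.

ITEM_VALUES = {"ball": 1, "hat": 2, "book": 3}  # hypothetical point values

def reward(items_won: dict) -> int:
    """Score only what the agent secured, not how it asked for it."""
    return sum(ITEM_VALUES[item] * count for item, count in items_won.items())

def negotiate(message: str) -> dict:
    """Stand-in for the negotiation: assume either phrasing secures the same deal."""
    return {"ball": 2, "hat": 1}

english_message = "I'll take two balls and a hat, you have everything else"
shorthand_message = "balls have zero to me to me"

print(reward(negotiate(english_message)))    # 4
print(reward(negotiate(shorthand_message)))  # 4 -- same reward, so English earns nothing extra
```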
“Agents will drift off from understandable language and invent code-words for themselves,” Facebook AI researcher Dhruv Batra said, as Fast Co. Design reports. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab co-founded by Elon Musk, an experiment succeeded in letting AI bots develop their own languages.
An AI language that translates human ones
In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating far more efficiently, including between language pairs it hasn’t been explicitly taught. The network’s success rate surprised Google’s team: its researchers found the AI had quietly developed its own internal language, tailored specifically to the task of translating sentences.
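The sketch below shows the rough shape of that idea, sometimes called an “interlingua”: sentences are mapped into a shared internal representation, so two languages never trained together can still be bridged. Google’s system learns this representation as vectors inside a neural network; the lookup tables and concept labels here are a deliberate simplification for illustration only.

```python
# Toy illustration of the shared "interlingua" behind zero-shot translation
# (a simplification, not Google's actual method or code).

# Hypothetical mini-vocabularies keyed by shared concept IDs.
TO_INTERLINGUA = {
    "en": {"hello": "GREETING", "thanks": "GRATITUDE"},
    "pt": {"olá": "GREETING", "obrigado": "GRATITUDE"},
}
FROM_INTERLINGUA = {
    "ja": {"GREETING": "こんにちは", "GRATITUDE": "ありがとう"},
}

def translate(word: str, src: str, dst: str) -> str:
    """Route src -> shared representation -> dst, with no direct src/dst table."""
    concept = TO_INTERLINGUA[src][word]
    return FROM_INTERLINGUA[dst][concept]

# Portuguese -> Japanese works even though no pt-to-ja mapping was ever written.
print(translate("obrigado", "pt", "ja"))  # ありがとう
```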
If AI-invented languages become widespread, they could pose a problem for those developing and adopting neural networks. There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.
They do make AI development more difficult, though, because humans cannot follow the ruthlessly efficient logic on which the languages are built. While they appear nonsensical, the results observed by teams such as Google’s Translate researchers indicate that they may actually be the most efficient solution to the problem at hand.