AI has a lot of hype behind it, fueled in the past by scientific research and sci-fi lore, and today by its potential future use cases. But for all its advances, AI still has limitations, particularly when it comes to mastering language.
The GPT-3 language model, developed by OpenAI, generates text that “has the potential to be practically indistinguishable from human-written sentences, paragraphs, articles, short stories, dialogue, lyrics, and more,” as this explainer from Twilio outlines.
But according to a recent interview by Undark with scientist, author, and entrepreneur Gary Marcus, “we need to take these advances with a grain of salt.” Marcus argues that AI has become “over-reliant” on deep learning, which he says has “inherent limitations.” The ideal way forward, he believes, would combine “traditional, symbol-based approaches to AI” with deep learning.
Here are some interesting highlights from the interview:
On his criticisms of GPT-3
I think it’s an interesting experiment. But I think that people are led to believe that this system actually understands human language, which it certainly does not. What it really is, is an autocomplete system that predicts next words and sentences. Just like with your phone, where you type in something and it continues. It doesn’t really understand the world around it.
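To make the autocomplete framing concrete, here is a minimal sketch of next-word prediction, assuming the open-source Hugging Face transformers library and the small, publicly available “gpt2” checkpoint (GPT-3 itself is proprietary and only reachable through OpenAI’s API). The model assigns a probability to every possible next token given a prompt; generating text is just repeatedly picking from that distribution.

```python
# Minimal sketch of next-token prediction with a small GPT-style model.
# Uses the open "gpt2" checkpoint for illustration, not GPT-3 itself.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The weather today is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits

# Probabilities over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_ids = torch.topk(next_token_probs, k=5).indices
print([tokenizer.decode(i) for i in top_ids])
```

The point of the sketch is that nothing in this loop models the world; the system only ranks which word is statistically likely to come next, which is the heart of Marcus’s “autocomplete” criticism.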
On the “gullibility gap”
It’s the gap between our understanding of what these machines do and what they actually do. We tend to over-attribute to them; we tend to think that machines are more clever than they actually are. Someday, they really will be clever, but right now they’re not.
On why he’s optimistic about AI
People are finally daring to step out of the deep-learning orthodoxy, and finally willing to consider “hybrid” models that put deep learning together with more classical approaches to AI. The more the different sides start to throw down their rhetorical arms and start working together, the better.
