Google, Microsoft, IBM, Apple, and 885 other players in the A.I. market have all been racing in the wrong direction.
Built on brute-force machine learning and statistical natural language processing (NLP), bots such as Siri, Echo, Viv, Hound, and Skype fall off a cliff the moment they receive a command that is not a close match for something the engine was trained on. This is because NLP can only approximate meaning: counting words and tracking word order, or even parsing by syntax, yields probabilities, which is guesswork at best.
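To make that limitation concrete, here is a minimal bag-of-words sketch (a standard illustration, not code from any of these products): two sentences with opposite meanings produce identical word counts, so nothing built purely on those counts can tell them apart.

```python
# Minimal sketch: counting words discards meaning entirely.
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Count word occurrences, throwing away word order."""
    return Counter(sentence.lower().split())

a = bag_of_words("man bites dog")
b = bag_of_words("dog bites man")

# The two representations are indistinguishable, even though the
# sentences mean very different things.
print(a == b)  # True
```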
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2027153,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,","session":"C"}']For all the progress that has been made in A.I., there is one hard problem that has remained fundamentally unsolved: natural language understanding (NLU).
According to John Giannandrea, a Google senior vice president, “understanding language is the holy grail of [A.I.].”
“[If machines cannot] have a meaningful conversation, it quickly goes off the rails,” said Andrew Ng, deep learning expert and chief scientist at Baidu and an associate professor at Stanford. Weren’t machine learning and deep learning going to solve all of that?
Even with deep learning, a machine can’t yet converse naturally with a human, and it may never even beat a three-year-old child at NLU. Human language is infinitely more complex than a game of Go or chess. And chatbots will keep losing traction and consumer adoption as long as the conversation isn’t natural.
Google’s Parsey McParseface and techniques such as lexical functional grammar (LFG) parse to grammar, which does not help with meaning. Noam Chomsky, the most-cited scientist alive today, said that “statistical models have been proven incapable of learning language.” But Chomskyan models have also failed at NLU, since they, too, parse to grammar; other formalisms, such as combinatory categorial grammar (CCG), suffer from the same problem.
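As a quick illustration of the parse-to-grammar limitation, the sketch below assumes the spaCy library and its small English model are installed (no toolkit is named above; spaCy stands in here as a representative syntactic parser). The parser assigns “bank” the same part of speech and a grammatical role in both sentences, but says nothing about which sense of “bank” is meant.

```python
# Sketch: a syntactic parse yields structure, not meaning.
# Setup (assumed): pip install spacy
#                  python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

for text in ("She cashed a check at the bank.",
             "She fished from the bank."):
    doc = nlp(text)
    # Each token gets a part of speech and a dependency label, but
    # nothing here distinguishes the financial bank from the riverbank.
    print([(t.text, t.pos_, t.dep_) for t in doc])
```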
Machines will be able to handle conversational speech and text only when they match every word to its correct meaning, based on the meanings of the other words in the sentence, just as a three-year-old does.
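Word sense disambiguation is the classic formulation of exactly that task. The sketch below uses the decades-old Lesk algorithm as shipped in NLTK (one standard baseline, not the approach described in this article); it picks a sense for an ambiguous word from the words around it, and its notoriously shaky accuracy hints at how hard the problem is.

```python
# Sketch: Lesk-style word sense disambiguation with NLTK.
# Setup (assumed): pip install nltk, then in Python:
#   import nltk; nltk.download("wordnet")
from nltk.wsd import lesk

# The surrounding words steer "bank" toward different WordNet senses.
money = lesk("I deposited my salary at the bank".split(), "bank")
river = lesk("We sat on the grassy bank of the river".split(), "bank")

print(money, money.definition())
print(river, river.definition())
```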
To solve NLU, computer scientists Yann LeCun and Andrew Ng agree that we need to develop new paradigms. Several companies are entering the market today to solve this critical problem of natural conversation. These include Maluuba (a company focused solely on deep learning) and our team at Pat Inc.
We feel we’ve made a radical breakthrough in NLU, and a number of notable professors agree. We have solved the A.I.-hard NLU problem by automating the Role and Reference Grammar (RRG) model and combining it with word sense disambiguation, context tracking, machine translation, and word boundary identification.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2027153,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,","session":"C"}']
The result is an NLU framework with the ability to learn more concepts over time. Applied to consumer chatbots and devices, this will raise their conversational ability to something much closer to that of a human.
Natural conversation includes the ability to understand context, to follow a speaker who asks multiple questions within one request or who interrupts and corrects themselves, to translate conversations from one language to another more accurately, and more. A toy sketch of the first of these, context tracking, appears below.
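All names in this sketch are hypothetical, and real systems maintain far richer dialogue state; the point is only that a follow-up utterance can be understood solely by remembering what came before it.

```python
# Toy sketch of context tracking across dialogue turns.
from typing import Optional

class DialogueContext:
    """Remember the most recent entity so follow-ups can refer back to it."""

    def __init__(self) -> None:
        self.last_entity: Optional[str] = None

    def interpret(self, utterance: str) -> str:
        words = utterance.rstrip("?.!").split()
        if "it" in words and self.last_entity:
            # Resolve the pronoun against the stored context.
            return utterance.replace("it", self.last_entity)
        # Naive heuristic: treat the final word as the topic.
        self.last_entity = words[-1]
        return utterance

ctx = DialogueContext()
print(ctx.interpret("What is the weather in Paris?"))  # remembers "Paris"
print(ctx.interpret("Is it expensive?"))               # "Is Paris expensive?"
```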
The possibilities for NLU to improve customer support and interactions are virtually limitless.