If you’re a fan of the X-Men comics series, you’re familiar with Cerebro, a fictional device that taps into the brain waves of humans and has the ability to identify mutants by an individual’s thoughts and experiences. Wouldn’t it be terrifying if Cerebro were real? If we could indulge in mind reading at a global scale?
While we haven’t discovered that capability — neural quantum entanglement anyone? — consider that social media posts reveal the inner workings of roughly two billion individuals. The deepest insights of a quarter of humanity are available for analysis. All we lack are effective ways to examine these metaphorical brain waves and make sense of them.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2125631,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,","session":"B"}']In many ways software has come to the rescue, particularly in surfacing individual voices. There are now tools that quickly surface social posts and conversations between consumers and businesses. These tools empower organizations to engage their customer base and understand their needs and concerns through open, authentic dialogue.
However, there’s still a lot of analysis left undone, especially in the aggregate. Social networks and their tools surface opinions largely from the loudest voices in the room. We don’t infer. Our technology collects what people explicitly say without discovering why. That’s because the reasons behind the “why” are hard to untangle: they are rarely volunteered, and uncovering them requires complicated inference or risky assumptions.
Effectively, what we’re getting are word and phrase trends, not deep understanding. Surfacing a commonly discussed topic is far from pinpointing what specific groups and demographics feel about it and why. Conversations, and the people having them, are three-dimensional, representing much more than mere words convey.
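To make that concrete, here is a minimal sketch, purely illustrative and not any vendor’s actual pipeline, of what “word and phrase trends” boil down to: counting the most frequent words and two-word phrases across a batch of posts. The sample posts and the phrase_trends helper are invented for the example.

```python
# Illustrative only: a naive "word and phrase trends" counter over social posts.
from collections import Counter
import re

def phrase_trends(posts, top_n=5):
    """Count the most common words and two-word phrases across a batch of posts."""
    counts = Counter()
    for post in posts:
        words = re.findall(r"[a-z']+", post.lower())
        counts.update(words)  # single-word trends
        counts.update(" ".join(pair) for pair in zip(words, words[1:]))  # two-word phrases
    return counts.most_common(top_n)

posts = [
    "the new phone battery is terrible",
    "battery life on the new phone is great",
    "why is the battery draining so fast",
]
print(phrase_trends(posts))  # surfaces terms like "battery" alongside stopwords, but never the why
```

A counter like this will happily tell you that “battery” is trending. It says nothing about which customers are frustrated, which are delighted, or why they feel that way.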
Subtleties connecting people, issues, and their underlying causes are missed by the best algorithms and the best practitioners. For instance, sophisticated pollsters, pundits, and analysts failed to call the recent U.S. presidential election and are now scrambling to explain it. Might the answers be hiding among billions of social posts?
What if we were able to lean on artificial intelligence to survey and draw conclusions? Imagine an AI that’s always listening for you: a digital research assistant that continuously hears and understands tens of thousands of posts per second and distills them into key synopses.
Is AI up to this task? Not today.
Our AI systems grab headlines, yet they only excel at narrow, though hard, tasks. Uber and others are empowering automobiles to perceive their environment well enough to make the next turn or dodge a pedestrian, but they’ll never learn to grow wings and fly. Google created a machine that can beat a highly skilled human player at an incredibly complex game, but that machine can’t answer questions about the game’s history or independently learn to play another game.
Artificial intelligence today is still a misnomer. The Oxford English Dictionary (via Google) defines intelligence as “the ability to acquire and apply knowledge and skills,” and AI just isn’t there yet. In the words of Tom Davenport, a Fellow of the MIT Initiative on the Digital Economy and a thought leader in this space, “deep learning is not profound learning.” Or to quote another expert, Oren Etzioni, CEO of the Allen Institute for AI, “AI is just simple math executed on enormous scale.” AI today is a way to make computers more capable, but not yet intelligent in the way we expect of humans.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2125631,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,","session":"B"}']
However, might AI someday reach the needed level of self-sufficient intelligence?
Fortunately, AI research is on the right path toward deeper understanding. Historically, a major goal was to make a machine capable of masquerading as a human, that is, to pass the Turing test. We’re now challenging researchers with harder problems, such as those posed by a newer test, the Winograd Schema Challenge.
The Winograd Schema Challenge is interesting because it reveals the state of the art, demonstrating that we aren’t as close to true AI as laypeople may think. To understand this test, let’s look at an example from this year’s O’Reilly AI Conference.
- The large ball crashed right through the table because it was made of styrofoam.
- The large ball crashed right through the table because it was made of steel.
In each sentence, what does the word “it” refer to? The answer is one that most 7-year-old children can muster, yet for machines it is fiendishly hard. Now scale that up to the variation found across every published tweet.
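To see why this is hard for software, consider one common way researchers probe such sentence pairs, sketched here under assumptions rather than as the challenge’s official evaluation: substitute each candidate referent for the pronoun and ask a pretrained language model which reading it finds more plausible. The model choice (gpt2, loaded through the Hugging Face transformers library) and the scoring heuristic below are illustrative, not a reference solution.

```python
# Sketch of a substitution-and-scoring approach to Winograd-style sentences.
# Assumption: a pretrained causal language model's preference (lower average
# loss) serves as a rough plausibility score. Illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_loss(text):
    """Average token-level cross-entropy; lower means the model finds the text more plausible."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def resolve(template, candidates):
    """Pick the candidate whose substitution for '[it]' yields the most plausible sentence."""
    return min(candidates, key=lambda c: sentence_loss(template.replace("[it]", c)))

pair = [
    "The large ball crashed right through the table because [it] was made of styrofoam.",
    "The large ball crashed right through the table because [it] was made of steel.",
]
for sentence in pair:
    print(sentence, "->", resolve(sentence, ["the ball", "the table"]))
```

Whether the model actually flips its answer between “styrofoam” and “steel” is exactly the point of the challenge: the substitution trick captures surface plausibility, not the physical reasoning a 7-year-old applies effortlessly.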
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2125631,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,","session":"B"}']
The day may come when AI approaches a human’s level of deep learning and understanding, when it can effectively make sense of complicated data at a scale human brains cannot efficiently process.
Tests like the Winograd Schema Challenge may propel AI toward a better understanding of implications and connections. Yet fairly basic language understanding is only an early stop on the journey toward profound intelligence and the ability to independently acquire and apply information.
Our benchmark has advanced only slightly from Turing’s test. There is still much work to be done.