
Why 2017 is the year of data-driven AI

AI needs more data.


There was much ado about artificial intelligence (AI) platforms in 2016. It was warranted. Major developments and offerings came out of Microsoft (Cognitive Services), Google (TensorFlow), Amazon (Rekognition, Polly, Lex), IBM (Watson), Salesforce (Einstein), and many more. AI and machine learning (ML) are the hammer that makes just about every business's data problem look like a nail.

But if 2016 was the year of the platform, 2017 will be the year of data.


Looking at AI through the lens of platforms is a bit like looking through the wrong end of a set of binoculars. In that backward view, AI looks tiny and distant. This is compounded by the fact that the quiet, small advances made daily by agile software teams — especially at smaller companies — often go unnoticed. It’s the big players and the eureka moments that get the press. But, as those are few and far between, AI often still seems a long way off. And that’s not the case. If you hold your binoculars correctly, you can see that AI is close and big. It’s here, and it’s pervasive. Soon, simply applying AI in your business won’t be a sufficient differentiator. In 2017 and beyond, the businesses with the right data will win.

It’s easier than ever to build AI

Unlike in years past, if you're developing AI today, you have multiple viable choices for infrastructure and algorithms. Two years ago, you were perhaps running workloads on EC2, if not your own servers, and you needed to build your own framework to train a model. Now we have TensorFlow, Cognitive Services, Rekognition, Polly, the IBM Watson APIs … It's an arms race, and the fundamental platform features are drawing ever closer to functional parity.
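
To make that parity concrete: with a framework like TensorFlow, standing up a trainable model takes only a few lines. Here's a minimal sketch using the Keras API on synthetic stand-in data (the data and model are illustrative, not a real workload):

```python
# A minimal sketch of how little framework code a modern platform demands.
# The data here is a synthetic stand-in, not a real training set.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 1,000 samples, 20 features, 2 classes.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

# Two dense layers are enough to define, compile, and train a classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32)
```

The framework, in other words, is no longer the hard part.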


Platforms and algorithms have advanced so much that, in the near future, the make-or-break factor will be the data. Accurate, precise, specific training data — whether that’s semantic segmentation masks, polygon and cuboid annotations, relational mapping, sample utterances and conversations, product attributes, sentiment analysis, or one of the many other types of training data that today’s AI necessitates — is what sets one application apart from the rest. That’s why so many large AI platforms are more than happy to let you use their services for free, as long as they can use your data to improve their algorithms.
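
To illustrate why "accurate" and "precise" matter, here's a minimal sketch of one common quality-control step: taking the majority vote across several human annotators and tracking how often they agree. The records and label names are invented for the example:

```python
# A sketch of training-data quality control: each example is labeled by several
# human annotators, and we keep the majority label plus an agreement score.
from collections import Counter

annotations = [
    {"item": "img_0001.jpg", "labels": ["sofa", "sofa", "loveseat"]},
    {"item": "img_0002.jpg", "labels": ["lamp", "lamp", "lamp"]},
]

def consensus(record):
    """Return (majority_label, agreement) for one multi-annotated record."""
    counts = Counter(record["labels"])
    label, votes = counts.most_common(1)[0]
    return label, votes / len(record["labels"])

for record in annotations:
    label, agreement = consensus(record)
    print(record["item"], label, f"agreement={agreement:.2f}")
```

Low-agreement items get re-labeled or discarded; that filtering is a large part of what separates good training data from noise.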

The data makes all the difference

Let’s consider some examples:

Autonomous vehicles: It’s reasonable to assume that if you’re GM or BMW, you’re existentially afraid of Uber, Tesla, and Google. The latter are miles ahead (ha) in self-driving technology, due in large part to their reams of training data — both historical and pouring in daily — via their info-gathering vehicles and technology.

With broad access to a variety of viable AI building blocks, it's the data — accurately annotated images and video among them — that will play a critical role in distinguishing one self-driving system from another. Just think of what Waze did to Navteq and others.
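
For a sense of what that annotation work looks like, here's an illustrative record for a single annotated video frame. The schema and field names are assumptions for this sketch, since every platform defines its own:

```python
# A hypothetical annotation record for one self-driving video frame,
# combining a 2D bounding box, a 3D cuboid, and a segmentation mask reference.
frame_annotation = {
    "frame": "drive_042/frame_000173.png",
    "objects": [
        {
            "category": "pedestrian",
            "bbox_2d": [412, 198, 461, 310],      # x_min, y_min, x_max, y_max in pixels
            "cuboid_3d": {
                "center_m": [5.2, -1.1, 0.9],     # meters, camera coordinates
                "size_m": [0.6, 0.5, 1.8],        # width, length, height
                "yaw_rad": 1.57,
            },
        },
        {
            "category": "car",
            "bbox_2d": [88, 240, 305, 372],
            "segmentation_mask": "drive_042/masks/frame_000173_car0.png",
        },
    ],
}
```

Multiply that by millions of frames, each needing pixel-level accuracy, and the moat becomes clear.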

Conversational AI: Amazon's Echo ecosystem is growing with impressive velocity, and competition within it is heating up. Think about the travel industry: What will make the difference between Expedia's and Booking.com's Alexa skills? They'll both run on Amazon's Alexa Voice Service. One company may have a better product team, and that makes a meaningful difference. The company with the higher quality, more specialized training data, however, will have the advantage. Domain-specific sample utterances and dialogue will enable an app that lets travelers book trips and gather information in naturally spoken language. That will be the game changer.
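
Concretely, those domain-specific samples are what an Alexa skill's interaction model is built from. Here's a sketch in the shape of an Alexa Skills Kit language model, with a hypothetical travel skill and invented intent, slot, and utterance names:

```python
# A sketch of domain-specific training utterances for a hypothetical travel skill,
# in the shape of an Alexa Skills Kit interaction model.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "trip booker",
            "intents": [
                {
                    "name": "BookHotelIntent",
                    "slots": [
                        {"name": "city", "type": "AMAZON.US_CITY"},
                        {"name": "checkin", "type": "AMAZON.DATE"},
                    ],
                    "samples": [
                        "book me a hotel in {city}",
                        "find a room in {city} for {checkin}",
                        "I need somewhere to stay in {city} starting {checkin}",
                    ],
                },
            ],
        }
    }
}
```

The skill whose samples better cover how travelers actually talk will route more requests correctly; that coverage is the training-data edge.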

Apple enjoyed a head start with Siri. Google moved fast with Now and has a tremendous installed base that enabled it to effectively catch up. Microsoft rushed to market with Cortana, and IBM is investing heavily. But today most ears are tuned to Amazon. Echo starred at CES, in large part because Amazon is so focused on creating an ecosystem before its competitors. But there’s an obvious bridge across their moat: the training data. While building an Alexa skill is remarkably easy, Amazon does not share failure data with its developers. Amazon’s natural language models improve, but ultimately developers will get frustrated if Amazon cannot find a way to help them understand why users disengage when their utterances and intents fail.
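
One common workaround, sketched below, is to route unmatched requests to a catch-all intent such as AMAZON.FallbackIntent and log whatever the skill does get to see. The handler shape here is illustrative, not any specific SDK's API:

```python
# A sketch of approximating failure data yourself: catch unmatched requests in a
# fallback intent and log every miss, so failures can inform the next model update.
import logging

logger = logging.getLogger("skill.failures")

def handle_request(intent_name, raw_slots):
    if intent_name == "AMAZON.FallbackIntent":
        # Record what we *do* get to see about the failed interaction.
        logger.warning("unhandled request, slots=%s", raw_slots)
        return "Sorry, I didn't catch that. You can ask me to book a hotel."
    return route_to_handler(intent_name, raw_slots)

def route_to_handler(intent_name, raw_slots):
    # Dispatch for recognized intents would live here.
    return f"Handling {intent_name}"

print(handle_request("AMAZON.FallbackIntent", {"query": "reserve a canoe"}))
```

It's a partial fix at best: the developer still never sees the raw utterances Amazon's models failed to map.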


Recommendation engines and visual search: Consider home goods retailers. Most people don’t know the specific names of pieces of furniture or furnishings they might be interested in or how to verbally describe them. Visual search — in which a customer searches and shops online with an image query instead of a text one — makes finding products easier. Naturally, the Wayfairs and Houzzes of the world are keen on visual search and investing in what they deem the best computer vision platform.
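
Under the hood, visual search typically reduces to nearest-neighbor lookup over image embeddings. Here's a minimal sketch in which random vectors stand in for the output of a real embedding model (in practice, the penultimate layer of a pretrained CNN):

```python
# A sketch of visual search: embed catalog images, then answer an image query
# by cosine nearest neighbor. embed() is a stand-in for a real vision model.
import numpy as np

rng = np.random.default_rng(0)

def embed(image_id):
    # Stand-in for a pretrained image-embedding model.
    return rng.standard_normal(512)

catalog = {name: embed(name) for name in ["armchair_01", "floor_lamp_07", "side_table_3"]}

def visual_search(query_vec, catalog, k=2):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(query_vec, vec) for name, vec in catalog.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(visual_search(embed("customer_photo.jpg"), catalog))
```

The retrieval machinery is commodity; the catalog images and their labels are what each retailer actually competes on.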

Likewise, recommendation engines and “more like this” algorithms that suggest products similar to those a shopper has shown interest in make finding the perfect product a breezy delight. Or at least that’s the idea. Bad data makes for weird recommendations, as we’ve all experienced. If I’m shopping for Pussycat MP3s, do I really want cat food, too?
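
A toy version makes the failure mode obvious: if "more like this" is just similarity over attribute vectors, one wrong attribute is enough to pull cat food into the results. The products and attributes below are invented for the example:

```python
# A sketch of why label quality decides recommendation quality: one mislabeled
# attribute ("pet" on a music album) is enough to surface cat food.
import numpy as np

attributes = ["music", "album", "pet", "food"]
products = {
    "pussycat_mp3_album": np.array([1, 1, 1, 0]),   # mislabeled with "pet"
    "indie_rock_album":   np.array([1, 1, 0, 0]),
    "cat_food_12pack":    np.array([0, 0, 1, 1]),
}

def more_like_this(name, k=2):
    q = products[name]
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {p: cosine(q, v) for p, v in products.items() if p != name}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(more_like_this("pussycat_mp3_album"))  # cat food makes the shortlist
```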

With widely available compute power, systems, and ready-to-use algorithms to supplement or replace homegrown frameworks, retailers’ respective setups will eventually reach comparable performance. The training data with the richest labels and annotations will be the breakthrough that nudges one in front.

Great obstacles

Early adopters who gather accurate training data to improve their models enjoy first-mover advantages. This is largely due to learning loops. Not only do first movers’ predictive models improve continuously with training, application, and correction, but so do all the subsystems that benefit from better overall performance. For example, more accurate computer vision models drive more precise navigation, which in turn drives a need for higher resolution, better sensors, which in turn enables more use cases, creating a beautiful flywheel effect.
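
That loop can be sketched in a few lines: predict, send the least confident cases to humans for correction, fold the corrections back in, and retrain. The sketch below uses scikit-learn on synthetic data as a stand-in for a production model:

```python
# A sketch of the learning loop: train, find uncertain predictions, collect
# human corrections on them, retrain, repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.standard_normal((500, 8))
true_w = rng.standard_normal(8)
y_pool = (X_pool @ true_w > 0).astype(int)

# Start with a small labeled seed set.
X_train, y_train = X_pool[:20], y_pool[:20]
model = LogisticRegression().fit(X_train, y_train)

for round_ in range(3):
    probs = model.predict_proba(X_pool)[:, 1]
    # Least confident predictions are where human correction adds the most signal.
    uncertain = np.argsort(np.abs(probs - 0.5))[:20]
    X_train = np.vstack([X_train, X_pool[uncertain]])
    y_train = np.concatenate([y_train, y_pool[uncertain]])  # stand-in for human labels
    model = LogisticRegression().fit(X_train, y_train)
    print(f"round {round_}: training set size = {len(y_train)}")
```

Each turn of the loop compounds the first mover's data advantage, which is exactly the flywheel described above.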


The proliferation of infrastructure, platforms, and know-how is quickly making basic AI functionality achievable in just about any application that involves data, text, speech, or other media. If we look through our AI binoculars properly, we will see that leading companies will move quickly beyond this new, higher common standard to apply AI in ways that are more sophisticated, specialized, and personalized. And guess what these more ambitious applications will require? You guessed it: increasingly sophisticated, specialized, personalized — and reliable — human training data. After all, a machine is only as smart as the humans who train it.
