Exclusive

Microsoft spells out its plans for Semantic Machines

Microsoft announced Sunday it acquired conversational AI startup Semantic Machines, a company whose staff includes former Siri chief scientist Larry Gillick and researchers such as Percy Liang, who helped create Google Assistant.

Few details were provided in the blog post announcing the acquisition, but today Microsoft AI and Research Group chief technology officer David Ku spoke with VentureBeat about how the acquisition will improve Cortana, help developers make interoperable voice apps, and influence products across Microsoft, as well as plans to open the company's first conversational AI center of excellence at the University of California, Berkeley.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2354171,"post_type":"exclusive","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']

How Semantic Machines will change Cortana

Founded in August 2014 in Newton, Massachusetts, Semantic Machines works in areas like speech recognition and natural language processing to develop AI that can understand more than the simple commands common today, like “Alexa, play music” or “Siri, what’s in the news?”

Semantic Machines tech is now going into a variety of Microsoft services, and once those integrations are available, Cortana should be much smarter, more nimble, and capable of carrying out more tasks with fewer words, Ku said.


In time, these systems will learn from a user’s habits.

“From an end user standpoint, we would create new skills using the Semantic Machines approach that could handle a lot more language variations, so you can, for example, issue a command, then go back and say ‘No, I meant don’t send it to my boss, send it to my boss’ boss’ and have the system adaptively learn from context so you don’t have to recreate everything from scratch, so from the user standpoint [it is] much more resilient and adaptable language, understanding, and actions,” Ku told VentureBeat in an interview.
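For illustration only, here is a minimal sketch (in Python) of what that kind of context-carrying correction could look like: the assistant keeps the pending action around, and a follow-up like “no, send it to my boss’ boss” revises a single slot instead of restarting the request. Every name here (PendingAction, resolve_recipient, apply_correction, the toy org chart) is hypothetical and is not drawn from Cortana or Semantic Machines.

```python
# Illustrative sketch only: a toy dialogue state that revises a pending action
# from a follow-up correction instead of rebuilding the whole request.
# All names and logic are hypothetical, not Cortana or Semantic Machines code.
from dataclasses import dataclass, replace

# Toy org chart used to resolve relational references like "my boss' boss".
ORG_CHART = {"me": "alice", "alice": "bob"}  # person -> that person's manager


@dataclass(frozen=True)
class PendingAction:
    intent: str     # e.g. "send_email"
    recipient: str  # resolved person the action targets
    body: str


def resolve_recipient(phrase: str) -> str:
    """Map a relational phrase to a concrete person via the org chart."""
    if phrase == "my boss":
        return ORG_CHART["me"]
    if phrase == "my boss' boss":
        return ORG_CHART[ORG_CHART["me"]]
    return phrase


def apply_correction(state: PendingAction, utterance: str) -> PendingAction:
    """Reinterpret a correction against the pending action, keeping every
    slot the user did not mention."""
    prefix = "no, send it to "
    if utterance.startswith(prefix):
        return replace(state, recipient=resolve_recipient(utterance[len(prefix):]))
    return state


if __name__ == "__main__":
    state = PendingAction("send_email", resolve_recipient("my boss"), "status update")
    state = apply_correction(state, "no, send it to my boss' boss")
    print(state)  # recipient is now "bob"; intent and body carry over from context
```

The point of the sketch is the shape of interaction Ku describes: the correction is interpreted relative to what the system already knows, so the user never has to restate the whole command.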

Users of Microsoft’s bots like Xiaoice in China and Rinna in Japan may experience similar improvements.

Recent advances from Microsoft and Semantic Machines’ work in multiturn dialogue may also help the Cortana maker keep up with evolving assistants from competitors.

The acquisition positions Microsoft to compete with evolving AI assistants from other brands, such as Samsung’s Bixby 2.0, due out later this year, and Google Assistant, which recently introduced a feature that lets users issue multiple commands in a single sentence.

Conversational AI center

A Microsoft spokesperson declined to share the financial terms of the deal but said the company acquired Semantic Machines for both its technology and its team. That gives Microsoft a presence both near Boston and in Berkeley, close to many researchers with expertise in advanced dialogue and conversational AI.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2354171,"post_type":"exclusive","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']

The center will focus on attracting research talent and leveraging the reputations of staff members like Dan Klein at UC Berkeley and Percy Liang at Stanford University.

“The intent is to build out more and more of this capacity for advanced research and product capabilities in Berkeley, tapping into the Berkeley ecosystem and Bay Area,” Ku said. “We want to build out this talent base that can start to apply to push the state of the art in conversational AI both for building products but also advancing kind of the state of the art.”

The conversational AI center of excellence will work alongside Microsoft’s other conversational AI initiatives, like the Cortana Research division, the Cortana Intelligence Institute opened in February at RMIT University in Melbourne, Australia, and the Microsoft AI and Research Group in Redmond, Washington.

How Semantic Machines will help developers make voice apps

Today, Cortana voice apps, or skills, are created using the Azure Bot Service or Microsoft Bot Framework. Once Semantic Machines tech is integrated into Microsoft products, developers will instead be able to teach voice apps how to function by example.

[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2354171,"post_type":"exclusive","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,business,","session":"B"}']

“What Semantic Machines is really talking about is a new approach where we could use machine learning to map end-to-end learning that starts from the natural language utterance all the way to the actions that are performed — the API calls, the tasks that are performed in response — and to do that in a way that allows us to use machine learning to teach the system through what we call demonstrations,” Ku said. “From the developer standpoint of having to create these skills, it would be [an approach] in which we would have much less engineering effort, because you can do all this mapping to actions and APIs learned through machine learning.”
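As a rough illustration of the “demonstrations” idea Ku describes, the toy sketch below maps a natural-language utterance straight to API calls by retrieving the most similar demonstrated example. String similarity stands in for the machine learning involved; the demonstration format, function names, and matching here are invented for illustration and are not Semantic Machines’ method or any Microsoft API.

```python
# Illustrative sketch only: map utterances to API calls by generalizing from
# demonstrated examples. Not Semantic Machines' system or a Microsoft API.
from difflib import SequenceMatcher

# Each demonstration pairs an utterance with the API call(s) a person performed for it.
DEMONSTRATIONS = [
    ("book a meeting with alice tomorrow",
     [("calendar.create_event", {"with": "alice", "day": "tomorrow"})]),
    ("email the report to my boss",
     [("mail.send", {"to": "boss", "attachment": "report"})]),
]


def predict_actions(utterance: str):
    """Return the API calls from the most similar demonstration.
    A real system would use learned models rather than string similarity."""
    best = max(DEMONSTRATIONS,
               key=lambda demo: SequenceMatcher(None, utterance, demo[0]).ratio())
    return best[1]


if __name__ == "__main__":
    print(predict_actions("please email that report to my boss"))
    # -> [("mail.send", {"to": "boss", "attachment": "report"})]
```

In this framing, a developer grows a skill by adding demonstrations rather than hand-writing intent-matching and API-wiring code, which is the reduction in engineering effort Ku points to.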

In addition to Cortana and Xiaoice, Microsoft plans to bring Semantic Machines tech to a range of products including the Azure Bot Service, Microsoft Cognitive Services, Microsoft’s AI solution for customer service, and other conversational computing technology for enterprise customers.
