Publicly traded customer relationship management (CRM) software company Salesforce is today talking up new artificial intelligence (AI) research conducted by Salesforce Research, the division the company formally announced in September following its acquisition of MetaMind.
MetaMind’s team was skilled in a type of artificial intelligence called deep learning, which involves training artificial neural networks on lots of data and then getting them to make inferences about new data. The new Salesforce Research division, headed up by MetaMind cofounder and former chief executive Richard Socher, does research in an academic context, and over time its findings can lead to Salesforce product improvements. Salesforce competitors like Microsoft also operate units that regularly publish academic research.
But today the news is about new research. Salesforce Research has developed a neural network called a dynamic coattention network (DCN), which can answer questions about text, thanks to a coattentive encoder and a dynamic decoder. “The DCN interprets documents based on specific questions, builds a conditional representation of the document for each different question asked, iteratively hypothesizes multiple answers, and weeds out initial incorrect predictions. In the end it arrives at the most accurate and intuitive answers,” Socher wrote in a Medium post.
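To give a feel for the coattention idea, here is a simplified sketch in NumPy. The variable names, dimensions, and random “encodings” below are illustrative stand-ins, not Salesforce’s implementation:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative stand-ins: a document of m words and a question of n words,
# each word already encoded as an h-dimensional vector (e.g., by an LSTM).
m, n, h = 50, 10, 16
D = np.random.randn(m, h)  # document encoding
Q = np.random.randn(n, h)  # question encoding

# Affinity between every document word and every question word.
L = D @ Q.T                      # (m, n)

# Attend in both directions at once -- hence "coattention".
A_q = softmax(L, axis=0)         # document attention for each question word
A_d = softmax(L, axis=1)         # question attention for each document word

C_q = A_q.T @ D                  # (n, h): a document summary per question word
C_d = A_d @ np.hstack([Q, C_q])  # (m, 2h): question summaries mapped back onto the document

# The paper feeds [D; C_d] to a further recurrent layer, yielding a
# representation of the document conditioned on the specific question asked.
U_input = np.hstack([D, C_d])    # (m, 3h)
```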
Question answering is an established domain and an area of ongoing research for other major technology companies, such as Facebook. Salesforce tested the dynamic coattention network’s performance on the Stanford Question Answering Dataset (SQuAD), and it came out ahead of systems from the Allen Institute for Artificial Intelligence, Microsoft Research Asia, Google, and IBM.
“The Dynamic Coattention Network is the first model to break the 80 percent F1 mark, taking machines one step closer to the human-level performance of 91.2 percent F1 on the Stanford Question Answering Dataset,” Salesforce research scientists Victor Zhong and Caiming Xiong wrote in a blog post. (The full paper is here.)
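For readers unfamiliar with the metric, F1 on SQuAD measures the token overlap between a predicted answer and the reference answer. Here is a minimal illustration in Python (simplified; the official SQuAD scorer also normalizes away punctuation and articles before comparing):

```python
from collections import Counter

def token_f1(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1, the style of metric SQuAD uses to score answers."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the dynamic coattention network", "dynamic coattention network"))  # ~0.857
```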
In addition, Salesforce Research has devised an alternative to the longstanding long short-term memory (LSTM) type of recurrent neural network (RNN) called the quasi-recurrent neural network, or QRNN. The advantage is that the bulk of the computation can run across an entire text at once, in parallel, rather than strictly one word at a time, which makes it faster, as research scientist James Bradbury explains in a blog post. The team applied the QRNN to sentiment analysis, translation, and predicting the next word in text, and overall it performed better than an LSTM RNN. (Paper here.)
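In rough terms, the QRNN replaces the LSTM’s sequential, per-word matrix multiplications with convolutions that run across the whole sequence at once, leaving only cheap elementwise “pooling” operations to run in order. A minimal NumPy sketch of that idea (shapes and initialization are illustrative, not the paper’s code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative shapes: sequence length T, input size d, hidden size h,
# and a convolution filter of width 2 (current word plus previous word).
T, d, h = 20, 8, 16
X = np.random.randn(T, d)
Wz, Wf, Wo = (np.random.randn(2 * d, h) * 0.1 for _ in range(3))

# Convolution step: computed for all timesteps at once -- the parallel part.
X_prev = np.vstack([np.zeros((1, d)), X[:-1]])
XX = np.hstack([X_prev, X])   # (T, 2d): each row sees two consecutive words
Z = np.tanh(XX @ Wz)          # candidate values
F = sigmoid(XX @ Wf)          # forget gates
O = sigmoid(XX @ Wo)          # output gates

# "fo-pooling": the only sequential part, and it is just elementwise math,
# unlike the LSTM, whose matrix multiplications must run one step at a time.
c = np.zeros(h)
H = np.zeros((T, h))
for t in range(T):
    c = F[t] * c + (1 - F[t]) * Z[t]
    H[t] = O[t] * c
```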
Salesforce Research has also built a model that can be likened to a Swiss Army knife: It can handle dependency parsing, part-of-speech tagging, semantic relatedness, syntactic chunking, and textual entailment. “These results are notable because many of the previous models were designed to handle single or a few related tasks to achieve the best results only for the limited number of the tasks,” former Salesforce Research intern Kazuma Hashimoto wrote in a blog post. (That paper is here.)
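The paper’s architecture stacks task-specific layers so that higher-level tasks read the outputs of lower-level ones. As a rough structural sketch of that design (in PyTorch, with illustrative layer sizes and only two of the tasks shown, not the paper’s actual code):

```python
import torch
import torch.nn as nn

class ManyTaskSketch(nn.Module):
    """A rough sketch of the "Swiss Army knife" idea: one shared encoder,
    with task heads stacked so higher-level tasks read lower-level outputs.
    Layer sizes and task ordering here are illustrative, not the paper's."""

    def __init__(self, vocab_size=10_000, emb=128, hidden=128, n_pos=45, n_chunk=23):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        # A lower layer handles a syntactic task (e.g., part-of-speech tagging)...
        self.pos_lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.pos_head = nn.Linear(2 * hidden, n_pos)
        # ...and a higher layer consumes its output for the next task (chunking).
        self.chunk_lstm = nn.LSTM(2 * hidden + n_pos, hidden,
                                  batch_first=True, bidirectional=True)
        self.chunk_head = nn.Linear(2 * hidden, n_chunk)

    def forward(self, tokens):
        x = self.embed(tokens)              # (batch, seq, emb)
        h_pos, _ = self.pos_lstm(x)         # shared low-level features
        pos_logits = self.pos_head(h_pos)   # POS predictions
        # Feed both the features and the POS predictions upward.
        h_chunk, _ = self.chunk_lstm(torch.cat([h_pos, pos_logits], dim=-1))
        chunk_logits = self.chunk_head(h_chunk)
        return pos_logits, chunk_logits
```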