Amazon has suddenly made a remarkable entrance into the world of open-source software for deep learning. Yesterday the e-commerce company quietly released a library called DSSTNE on GitHub under an open-source Apache license.
Deep learning involves training artificial neural networks on lots of data and then getting them to make inferences about new data. Several technology companies are doing it — heck, it even got some air time recently in the show “Silicon Valley.” And there are already several other deep learning frameworks to choose from, including Google’s TensorFlow.
Amazon is not the most active technology company in the realm of open source. Facebook or Google would be better candidates for that honor. But Amazon supplies a reason for this move in a frequently asked questions (FAQ) page included in the repo:
We are releasing DSSTNE as open source software so that the promise of deep learning can extend beyond speech and language understanding and object recognition to other areas such as search and recommendations. We hope that researchers around the world can collaborate to improve it. But more importantly, we hope that it spurs innovation in many more areas.
Amazonians are aware of the software’s limitations. In its current form, DSSTNE (pronounced “destiny”) cannot support convolutional workloads for image recognition, and its support for recurrent neural nets is limited. (Engineers will be working on that.) But the software can train using multiple graphics processing units (GPUs) at one time, unlike some other frameworks, and it’s already showing performance advantages over even the cutting-edge TensorFlow. Amazon says it provides a 2.1X speedup over TensorFlow on a g2.8xlarge GPU instance in the Amazon Web Services (AWS) public cloud. And DSSTNE is particularly performant relative to competitors when there isn’t a whole lot of training data to work with, Amazon said.
Plus, C++-based DSSTNE may also have advantages when it comes to ease of use.
“DSSTNE’s network definition language is much simpler than Caffe’s, as it would require only 33 lines of code to express the popular AlexNet image recognition model, whereas Caffe’s language requires over 300 lines of code,” Amazon wrote on the FAQ page.
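To give a sense of what that compactness looks like, here is an illustrative sketch of the kind of JSON network definition DSSTNE accepts, describing a small three-layer feed-forward network. The specific field names and values below are approximations for illustration, not copied from Amazon's documentation; consult the repo for the exact syntax.

```json
{
  "Version": 0.7,
  "Name": "ThreeLayerNet",
  "Kind": "FeedForward",
  "Layers": [
    { "Name": "Input",  "Kind": "Input",  "N": "auto", "DataSet": "input", "Sparse": true },
    { "Name": "Hidden", "Kind": "Hidden", "Type": "FullyConnected", "N": 128, "Activation": "Sigmoid" },
    { "Name": "Output", "Kind": "Output", "Type": "FullyConnected", "N": "auto", "DataSet": "output", "Activation": "Sigmoid" }
  ],
  "ErrorFunction": "ScaledMarginalCrossEntropy"
}
```

Because each layer is a single JSON object rather than a verbose block of declarations, even a deeper model can stay within a few dozen lines, which is the kind of economy Amazon's AlexNet comparison is pointing at.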
Documentation is available in the project’s GitHub repository.