Earlier this week, Google made a splash when it released its TensorFlow artificial intelligence software on GitHub under an open-source license. Google has a sizable stable of AI talent, and AI is working behind the scenes in popular products, including Gmail and Google search, so AI tools from Google are a big deal.

Today on GitHub, TensorFlow, primarily written in C++, is the top trending project of the day, the week, and the month, having accrued more than 10,000 stars in about one week.

But there are several other open-source tools to choose from on GitHub if you want to improve your app with deep learning, a type of AI that involves training artificial neural networks on large amounts of data and then getting them to make inferences about new data.
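That train-then-infer loop can be shown with a toy example. This is a hand-rolled sketch in plain Python, not code from any library discussed here: it fits a single weight by gradient descent, then uses it to predict on new input.

```python
# Toy illustration of the train-then-infer loop behind deep learning:
# fit a single weight w so that y ~= w * x, then apply it to new inputs.
# Real networks have millions of parameters, but the loop is the same shape.

def train(samples, lr=0.01, epochs=200):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def infer(w, x):
    """Use the trained parameter to make a prediction on new data."""
    return w * x

# "Training data" drawn from the rule y = 3x; training should recover w ~= 3.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(infer(w, 10)))  # close to 30
```

Every framework below automates some version of this: defining a model, computing gradients, and updating parameters.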

Here’s a rundown of some other notable deep learning libraries available today.



The long tail

  • Apache Singa. Written almost completely in C++, this one emerged as an Apache incubator project in March. The original work on Singa was done by six students and research fellows from the National University of Singapore and one professor at China’s Zhejiang University.
  • Brainstorm. A promising new library from a small team of researchers at the Swiss artificial intelligence lab Istituto Dalle Molle di Studi sull’Intelligenza Artificiale (IDSIA), Brainstorm can handle what are being called Highway Networks: very deep networks with hundreds of layers. Like Theano, it’s written in Python.
  • Chainer. Preferred Networks, a startup based in Japan, announced the release of the Python-based framework in June. Chainer’s design is based on the principle “define by run” — that is, the network is determined on the fly rather than only at the beginning, according to the documentation for the framework.
  • CNTK. A product of Microsoft Research, CNTK first became available on Microsoft’s CodePlex open-source code repository hosting site in April. It uses a graph model. “In a directed graph, each node represents an input value or a network parameter, and each edge represents a matrix operation upon its children,” Microsoft chief speech scientist Xuedong Huang wrote in a blog post about CNTK. Written in C++, the project is available under an MIT license. (Update on January 25, 2016: Microsoft today moved CNTK from CodePlex to GitHub and, in doing so, dropped the Microsoft Research License. As a result, the project is no longer limited to non-commercial use.)
  • ConvNetJS. This tool from Stanford University Ph.D. student Andrej Karpathy allows you to train neural nets right inside of your browser, using good old JavaScript. Karpathy has a great tutorial for getting started with ConvNetJS, as well as nifty browser-based demos.
  • Deeplearning4j. The name says it all — it’s deep learning for Java. This project is backed by startup Skymind, which launched in June 2014. Users of the software include Accenture, Booz Allen, Chevron, and IBM.
  • DSSTNE. Written primarily in C++ and pronounced “destiny,” DSSTNE is Amazon’s from-the-ground-up effort that can be seen as a response to Google’s TensorFlow. At release time in May 2016 it did not support convolutional neural nets, and support for recurrent neural nets was limited, but it works across multiple GPUs, and Amazon is claiming a “2.1x speedup” over TensorFlow with a g2.8xlarge GPU instance from Amazon Web Services (AWS).
  • h2o. This Java-based framework is part of a more general machine learning runtime from a startup of the same name (formerly known as 0xdata).
  • Marvin. This new entrant from Princeton University’s Vision Group is written in C++. The team offers a file for converting Caffe models into a format that works in Marvin.
  • MatConvNet. This is a MATLAB toolbox for implementing convolutional neural nets. It was first developed by professor Andrea Vedaldi and Ph.D. student Karel Lenc of the University of Oxford’s Robotics Research Group.
  • Merlin. This one is notable for being primarily written in and available for the relatively young programming language Julia. The team is initially focused on natural language processing (NLP).
  • MXNet. Primarily written in C++, MXNet was created by the people behind the CXXNet, Minerva, and Purine2 projects. It’s meant to use memory efficiently, and can even run on a smartphone, for tasks like image recognition. In November 2016 Amazon Web Services (AWS) named MXNet its preferred deep learning framework, even though Amazon had previously developed its own.
  • Neon. Startup Nervana Systems published its Neon software under an open-source license back in May. Some benchmarks suggest that Neon — which is written mostly in Python and Sass — could outperform Caffe, Torch, and Google’s TensorFlow.
  • Paddle. This is Baidu’s framework, written primarily in C++ and open-sourced in August 2016. Baidu uses it for predicting click-through rates, classifying images, optical character recognition, ranking search results, and detecting computer viruses.
  • Veles. Named after the Slavic god of the same name, Veles comes from Samsung. It’s written mostly in Python, and it can be run in an IPython notebook.
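Chainer’s “define by run” principle, mentioned above, means the computation graph is recorded while the forward pass executes rather than declared up front. Here is a minimal, library-free sketch of that idea in plain Python — the class and method names are illustrative, not Chainer’s (or any framework’s) actual API.

```python
# Minimal "define by run" sketch: the graph of operations is recorded
# while the forward computation executes, then replayed in reverse to
# compute gradients. Illustrative only; real frameworks are far richer.

class Var:
    def __init__(self, value, parents=(), backprop=None):
        self.value = value
        self.grad = 0.0
        self.parents = parents          # edges recorded at run time
        self.backprop = backprop        # how to push gradient to parents

    def __mul__(self, other):
        out = Var(self.value * other.value, parents=(self, other))
        def backprop(g):
            self.grad += g * other.value
            other.grad += g * self.value
        out.backprop = backprop
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, parents=(self, other))
        def backprop(g):
            self.grad += g
            other.grad += g
        out.backprop = backprop
        return out

    def backward(self):
        # Walk the recorded graph in reverse topological order.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v.parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v.backprop:
                v.backprop(v.grad)

# The network's structure exists only because this code ran:
x, w, b = Var(2.0), Var(3.0), Var(1.0)
y = w * x + b        # graph built on the fly: y = 3*2 + 1
y.backward()
print(y.value, w.grad, x.grad)   # 7.0 2.0 3.0
```

The same recorded-graph idea also echoes the CNTK quote earlier: nodes hold values and parameters, and each operation adds edges; define-by-run simply builds those edges during execution instead of ahead of time.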

What am I leaving out?

There are other frameworks available today — these are just the most interesting ones I’ve encountered — and more will surely emerge in the future. Let me know what’s missing, so I can update this post accordingly.
