Google today announced that it has open-sourced TensorFlow Serving, a piece of software that makes it easy to deploy machine learning models that can make inferences about new data. The software, which is available on GitHub, works natively with Google’s previously open-sourced TensorFlow deep learning framework, but it can also support other tools.
“TensorFlow Serving makes the process of taking a model into production easier and faster. It allows you to safely deploy new models and run experiments while keeping the same server architecture and APIs,” Google software engineer Noah Fiedel wrote in a blog post.
Written primarily in C++, the technology should make it a little easier for people to get off the ground when serving up machine learning models built with open source tools such as TensorFlow. And while TensorFlow Serving is flexible, its native support for TensorFlow could help boost adoption of Google's framework. As more developers start to use the TensorFlow software, Google could improve its capabilities and even uncover new talent.
Deep learning is increasingly popular, not only at web companies like Google and Facebook, but also among startups, as it can help with image recognition, speech recognition, and natural language processing. The process involves training artificial neural networks on large sets of data and then having them make inferences about new data. The TensorFlow Serving software is specifically geared toward the inference phase.
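The split between the two phases can be sketched with a toy example (plain Python with a made-up linear model and data, purely illustrative; a real deployment would serve a trained TensorFlow model rather than anything this simple):

```python
# Toy illustration of the two phases: first train a model on known data,
# then use the trained model to make inferences about new, unseen data.
# (Illustrative only -- TensorFlow Serving handles real trained models.)

def train(examples):
    """Fit y = w * x by least squares on (x, y) pairs (the training phase)."""
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    return sxy / sxx  # the learned weight w

def infer(w, x):
    """Apply the trained model to a new input (the inference/serving phase)."""
    return w * x

# Hypothetical training set.
data = [(1, 2.0), (2, 4.1), (3, 5.9)]
w = train(data)          # done once, offline
print(infer(w, 4))       # done repeatedly, on live traffic
```

A serving system like TensorFlow Serving sits around that second step: the model is trained once offline, and the server's job is to answer many `infer`-style requests quickly, including swapping in newly trained models without downtime.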
An overview of the architecture of TensorFlow Serving is here.