Facebook today is announcing that its researchers have developed server hardware for deep learning, a type of artificial intelligence the company uses across several of its applications. Facebook is publishing the hardware designs for anyone to explore through the Open Compute Project.
The servers, codenamed Big Sur, are packed with graphics processing units (GPUs), which have become the chip of choice for deep learning. The technique involves training artificial neural networks on lots of data, pictures for instance, and then getting them to make inferences about new data. Facebook is investing more and more in this field, so it makes sense for the company to design custom hardware, just as it has designed its own general-purpose servers, storage, and networking equipment. And it also makes sense to share the designs.
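For readers who want the gist of that train-then-infer loop in code, here is a minimal, hypothetical sketch, written in PyTorch purely for illustration (Facebook's own stack at the time was the Lua-based Torch). The model and data are stand-ins, not anything Facebook has described:

```python
# Minimal train-then-infer sketch of deep learning on images.
# The network, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

# Tiny convolutional network for 32x32 RGB images, 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: show the network labeled pictures and nudge its weights.
images = torch.randn(64, 3, 32, 32)   # stand-in for real photos
labels = torch.randint(0, 10, (64,))  # stand-in for real labels
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Inference: ask the trained network about data it has never seen.
new_image = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    prediction = model(new_image).argmax(dim=1)
print(prediction.item())
```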
“This is a way of saying, ‘Look, here is what we use, here is what we need. If you make hardware better than this, we’ll probably buy it from you,’” said Yann LeCun, head of the Facebook Artificial Intelligence Research lab, during a conference call about the news. Facebook hired LeCun in 2013 in a high-profile move.
Deep learning, a domain in which LeCun is highly regarded, can be used for speech recognition, image recognition, and even natural language processing, all of which Facebook does. It’s a core area for Facebook, just as it is for Google and Microsoft. Facebook has previously open-sourced some of its AI software, and now that openness extends to hardware.
Each Big Sur server can pack in as many as eight GPUs, each of which can max out at 300 watts. Facebook designed Big Sur based on Nvidia’s Tesla M40 GPU, but it can accommodate other GPUs as well.
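At those limits, a fully populated chassis can draw as much as 2.4 kilowatts for the accelerators alone (8 GPUs × 300 watts each).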
Facebook has deployed these servers at its data centers both inside and outside the U.S., LeCun told reporters on the call.
Big Sur outperforms the hardware Facebook previously used for deep learning.
“Leveraging NVIDIA’s Tesla Accelerated Computing Platform, Big Sur is twice as fast as our previous generation, which means we can train twice as fast and explore networks twice as large,” Facebook researchers Kevin Lee and Serkan Piantino wrote in a blog post. “And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two.”
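The blog post doesn't detail Facebook's training code, but the idea behind that last factor of two is standard data parallelism: replicate the model on each GPU, split every batch across the replicas, and combine the gradients. A minimal sketch of the pattern, again using PyTorch only as an illustrative stand-in for Facebook's actual stack:

```python
# Data-parallel sketch: one model replicated across available GPUs,
# each batch automatically sharded across the replicas.
import torch
import torch.nn as nn

model = nn.Linear(1024, 10)
if torch.cuda.device_count() > 1:   # e.g., the 8 GPUs in a Big Sur chassis
    model = nn.DataParallel(model)  # splits each input batch across devices
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

batch = torch.randn(256, 1024, device=device)  # one batch, sharded across GPUs
output = model(batch)                          # results gathered on the default device
print(output.shape)                            # torch.Size([256, 10])
```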
Check out the full blog post for more detail on the Big Sur server.