Google today announced advancements in deep learning, a type of artificial intelligence, for key tasks like image recognition and speech recognition.
When it comes to accurately recognizing words in speech, Google's error rate now stands at just 8 percent, down from 23 percent in 2013, Sundar Pichai, senior vice president of Android, Chrome, and Apps at Google, said at the company’s annual I/O developer conference in San Francisco.
Pichai boasted, “We have the best investments in machine learning over the past many years.” Indeed, Google has acquired several deep learning companies over the years, including DeepMind, DNNresearch, and Jetpac.
Deep learning involves ingesting lots of data to train systems called neural networks, and then feeding new data to those systems and receiving predictions in response.
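As a rough illustration of that train-then-predict loop, here is a minimal sketch using scikit-learn's MLPClassifier and a toy digits dataset; the library, dataset, and parameters are illustrative assumptions, not a description of Google's actual systems:

```python
# Minimal train-then-predict sketch with a small neural network.
# scikit-learn and the toy digits data are assumptions for illustration only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# "Ingest lots of data": load labeled examples (8x8 pixel digit images).
X, y = load_digits(return_X_y=True)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

# Train a neural network on the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# "Feed new data to those systems and receive predictions in response."
predictions = model.predict(X_new)
print(predictions[:10])
```

Production systems like Google's differ mainly in scale: far more data, far deeper networks, and specialized hardware, but the basic train-then-predict cycle is the same.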
The company’s current neural networks are now more than 30 layers deep, Pichai said.
Google uses deep learning across many types of services, including object recognition in YouTube videos and even optimization of its vast data centers.
Meanwhile, Baidu, Facebook, and Microsoft are also beefing up their deep learning capabilities. Earlier-stage companies like Flipboard, Pinterest, and Snapchat have also been doing research in the area — but none have the computing power that Google does. So Google’s achievements in real production apps are a pretty big deal.