This summer, Google unveiled artificial intelligence software that learned to recognize cats, human faces, and other objects by training on YouTube videos. That technology is now being used to improve Google’s products, such as speech recognition for Google Voice.
Google’s neural network, which processes data in a way loosely modeled on the brain, simulates groups of connected neurons that communicate with one another. As it absorbs data, the network gets better at processing it and at recognizing relationships within it. That process is what researchers call learning.
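To make that idea concrete, here is a minimal, hypothetical Python sketch (using NumPy, an assumption for illustration; this is not Google’s system) of a tiny two-layer network that improves at a toy task the more it processes the data:

```python
# A minimal sketch of a neural network "learning": simulated, connected
# neurons adjust their connection strengths as they repeatedly absorb
# data, getting better at recognizing a relationship (here, XOR).
# This is an illustrative toy, not Google's production system.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: four inputs and the relationship (XOR) we want the network to find.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of simulated neurons with random starting connection weights.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer transforms the data it receives.
    h = sigmoid(X @ W1)      # hidden-layer activity
    out = sigmoid(h @ W2)    # the network's current guess

    # Error between the guesses and the true relationship.
    err = out - y

    # Backpropagation: nudge the weights to reduce the error.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))  # after training, close to [0, 1, 1, 0]
```

After a few thousand passes over the data, the network’s guesses converge on the right answers; the same principle, scaled up enormously, underlies the systems described here.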
Neural networks have been used for decades in face detection and chess-playing software. But Google has far more computing power than anyone else, thanks to the vast data centers it built to process search requests. Google is now using neural networks to recognize speech better. That’s increasingly important for Android, the mobile operating system that competes with Apple’s iOS. Vincent Vanhoucke, leader of Google’s speech recognition efforts, told Technology Review that speech recognition results have improved 20 to 25 percent. Other Google products could benefit too.
Google researchers say they’re not building a biological brain yet. But maybe one of these days….