Neural networks showed a lot of promise in artificial intelligence research in the 1980s and 1990s. Decades later, the revitalized category of “deep learning” neural networks is finally making huge progress.
That’s the analysis from Jeff Dean, senior fellow at Google, speaking at the Nvidia GPU Technology Conference in San Jose, California today. Dean described how neural networks, aided by better algorithms, have gotten much better at making sense of images and other pattern-recognition problems.
“Deep neural networks are very effective for a wide range of tasks,” Dean said.
Neural networks are loosely modeled on the neurons in the brain. They take advantage of parallelism, perform automated analysis, and can get better through reinforcement learning. And they’re being used to solve some of the biggest computing problems of the day, like figuring out if that’s a cat or a dog you’re looking at on the Internet, Dean joked.
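Dean didn’t go through the math on stage, but the basic building block is simple: an artificial neuron takes a weighted sum of its inputs and passes it through a nonlinear function. Here is a minimal sketch in Python with NumPy; the numbers are made up purely for illustration.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a nonlinearity."""
    return np.tanh(np.dot(weights, inputs) + bias)

# Made-up numbers purely for illustration: three input signals, one neuron.
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.1, -0.6])   # learned connection strengths
b = 0.2                          # bias

print(neuron(x, w, b))  # a single activation between -1 and 1
```

A deep network stacks millions of these units into layers, and “learning” means nudging the weights until the outputs are useful.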
Since Google does a lot of YouTube video recognition, Dean quipped, “Of course we have a cat neuron.” The cat detector, the joke goes, is a giant data center.
In all seriousness, the progress is phenomenal, said Jen-Hsun Huang, chief executive of Nvidia, in his keynote speech yesterday. He cares about it because neural networks take advantage of the graphics processing unit (GPU) in a computer. GPUs are very good at doing a lot of things at the same time, and over the past decade, Nvidia has built software, such as its CUDA platform, to make it easier for programmers to take advantage of those parallel processing units.
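The fit between the two is easy to see in code. The heavy lifting in a neural network is dense linear algebra: applying a whole layer to a whole batch of examples is one big matrix multiply, and every output can be computed independently. The NumPy sketch below stands in for the GPU kernels Nvidia’s libraries actually dispatch; the sizes are arbitrary.

```python
import numpy as np

# A full layer over a full batch is a single matrix multiply -- exactly the
# massively parallel arithmetic a GPU's thousands of cores are built for.
batch = np.random.randn(256, 1024)    # 256 examples, 1,024 features each
weights = np.random.randn(1024, 512)  # a layer of 512 neurons

activations = np.maximum(0, batch @ weights)  # each of 256*512 outputs is independent
print(activations.shape)  # (256, 512)
```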
“The improvements are happening rapidly,” Dean said.
In image classification in particular, neural nets are making huge progress, Dean said. In the annual ImageNet competition, the best neural net classified images with a 25.7 percent error rate in 2011. That fell to 16.4 percent in 2012, 11.7 percent in 2013, and 6.7 percent in 2014. Baidu published a paper claiming a 6.0 percent error rate in January, Microsoft published a 4.9 percent error rate in February, and Google itself published a paper with a 4.8 percent error rate on March 2.
“This is indicative of the kind of progress we are making,” Dean said.
Neural nets are also useful for plenty of tasks that have nothing to do with image recognition.
Deep neural nets can solve analogies, such as “Rome is to Italy as Berlin is to … Germany.”
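Dean didn’t spell out the mechanics, but the trick behind analogy-solving, familiar from the word2vec line of research, is that words become vectors and relationships become vector offsets: subtract “Rome” from “Italy,” add “Berlin,” and the nearest word vector should be “Germany.” The tiny hand-made vectors below are purely illustrative; real systems learn hundreds of dimensions from billions of words.

```python
import numpy as np

# Toy, hand-crafted word vectors purely for illustration.
vecs = {
    "Rome":    np.array([1.0, 0.9, 0.1]),
    "Italy":   np.array([1.0, 0.1, 0.1]),
    "Berlin":  np.array([0.1, 0.9, 1.0]),
    "Germany": np.array([0.1, 0.1, 1.0]),
    "Paris":   np.array([0.5, 0.9, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Rome is to Italy as Berlin is to ?"  ->  Italy - Rome + Berlin
query = vecs["Italy"] - vecs["Rome"] + vecs["Berlin"]
print(max((w for w in vecs if w != "Berlin"), key=lambda w: cosine(query, vecs[w])))
# -> Germany
```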
Google also uses the technology for translating words. Deep networks can also do reinforcement learning, where a system learns by getting feedback on how well it executed a task. Google’s machines learned how to play old Atari 2600 games this way. At first, the machines did a terrible job. But with reinforcement learning, they eventually became masters of games like Pong and Breakout.
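Google’s actual Atari player used a deep network reading raw screen pixels, but the feedback loop Dean described can be sketched with plain tabular Q-learning on a toy problem. Everything below, including the corridor environment and the parameter values, is an illustrative assumption, not the real system.

```python
import random

# Tabular Q-learning on a toy corridor: start at 0, reach position 4 for a reward.
# The Atari system replaced this table with a deep network, but the
# learn-from-feedback loop is the same.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0  # feedback on performance
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy heads straight for the goal.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])  # [1, 1, 1, 1]
```

At first the agent wanders at random, just as Dean described the machines doing a terrible job; the reward signal gradually propagates back through the table until the winning moves dominate.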