The self-driving car is a huge computational problem. It’s so tough that traditional computer vision techniques can’t tackle it. So Nvidia has turned to an artificial intelligence technique, dubbed deep learning, to train brain-like computers capable of “learning” how to identify hazards on a road and safely direct a self-driving car.

The technology requires an enormous amount of pattern recognition and neural network computing power. And that means it needs Nvidia’s graphics processing units, or GPUs, to handle the load. With enough GPUs in the cloud or in a car supercomputer, the deep-learning task is more practical, said Nvidia CEO Jen-Hsun Huang in a press conference at the 2016 International CES (the big tech trade show in Las Vegas this week).

He said Nvidia’s strategy is to create GPUs to enable deep learning, to create a deep-learning platform for others to use, and to create an end-to-end deep-learning network that learns and solves real-world problems.

Huang said that one of the keys will involve deep-learning networks that over time get better at recognizing objects, such as pedestrians in hazardous driving situations. So part of the company’s approach to solving the problem is to set up a deep-learning platform, dubbed Jetson, that will be able to learn how to recognize driving imagery.

Part of this processing will happen in the car, using Nvidia’s new car supercomputer, the Nvidia PX 2. But it will also depend on a cloud-based deep-learning network. Huang said that GPUs improve the rate of learning in a deep-learning computation by 20x to 40x, which means this kind of platform will be useful well beyond the self-driving car problem.
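To make that comparison concrete, here is a minimal, illustrative timing sketch (not an Nvidia benchmark) that pits a CPU against a CUDA GPU on the large matrix multiplications that dominate deep-learning training. It assumes PyTorch is installed; the measured speedup will vary with the hardware and the workload.

```python
# Illustrative sketch only: a rough CPU-vs-GPU timing of the matrix math that
# dominates deep-learning training. The 20x-40x figure Huang cited is Nvidia's
# claim; actual speedups depend on the model, the GPU, and the framework.
import time
import torch

def time_matmul(device: str, size: int = 4096, repeats: int = 10) -> float:
    """Time repeated large matrix multiplications on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)  # warm-up run so one-time setup costs are not counted
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

cpu_time = time_matmul("cpu")
if torch.cuda.is_available():
    gpu_time = time_matmul("cuda")
    print(f"CPU: {cpu_time:.2f}s  GPU: {gpu_time:.2f}s  speedup: {cpu_time / gpu_time:.1f}x")
else:
    print(f"CPU: {cpu_time:.2f}s (no CUDA GPU available for comparison)")
```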

“The number of deep-learning applications we are seeing is utterly quite staggering,” Huang said.

Nvidia will have one deep-learning platform for uses ranging from PCs to self-driving cars.

“We will solve this problem from end to end,” Huang said.

Nvidia can deploy the neural network, collect data from a car that taps the technology, and then go back and teach the network what it got right and what it got wrong. Nvidia is training the network on known image collections, including one with 1.2 million images in it. In that way, the deep-learning network gets smarter over time.
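As a rough illustration of that retrain-and-correct loop, the sketch below runs a generic supervised training pass over a folder of labeled road imagery that includes corrected examples collected from deployed cars. The model, folder name, and hyperparameters are stand-in assumptions, not details of Nvidia’s actual pipeline.

```python
# Minimal sketch of the retraining loop described above, not Nvidia's pipeline:
# images the deployed network got wrong are relabeled, folded back into the
# training set, and the network is trained again on the corrected data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of labeled road imagery, one subfolder per object class,
# including corrected examples gathered from cars running the deployed network.
dataset = datasets.ImageFolder("labeled_driving_images/", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

model = models.resnet18(num_classes=len(dataset.classes))  # stand-in network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for epoch in range(5):  # each pass over the corrected data refines the network
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```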

The current Nvidia deep neural network, dubbed Nvidia DriveNet, has the equivalent of 37 million neurons, or brain-like cells, and it takes 40 billion operations to run an input through the network once. Each time the network is retrained, it gets better: in July it recognized objects correctly about 39 percent of the time, and it now runs at 88 percent accuracy. Nvidia has trained it on 120 million objects to date.
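For a sense of what figures like “37 million neurons” and “40 billion operations” measure, here is a back-of-the-envelope sketch that counts parameters and multiply-accumulate operations for one forward pass of a network. DriveNet itself is not public, so a standard ResNet-50 stands in purely for illustration.

```python
# Back-of-the-envelope sketch for the two figures quoted above: a network's
# parameter count and the multiply-accumulates in one forward pass. ResNet-50
# is a stand-in; DriveNet's architecture is not public.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50()
params = sum(p.numel() for p in model.parameters())

macs = 0

def count_macs(module, inputs, output):
    """Accumulate multiply-accumulates for conv and linear layers."""
    global macs
    if isinstance(module, nn.Conv2d):
        # Each output element costs (in_channels / groups) * kH * kW MACs.
        kernel_ops = (module.in_channels // module.groups
                      * module.kernel_size[0] * module.kernel_size[1])
        macs += output.numel() * kernel_ops
    elif isinstance(module, nn.Linear):
        macs += output.numel() * module.in_features

hooks = [m.register_forward_hook(count_macs)
         for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # one forward pass on a single image

print(f"parameters: {params / 1e6:.1f} million")
print(f"multiply-accumulates per forward pass: {macs / 1e9:.1f} billion")
```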

Ford is one of several companies using deep-learning technology, Huang said.

Nvidia’s newly unveiled PX 2 supercomputer will collect data and send it to the Nvidia CX, which runs the car infotainment system, including the dashboard, so the driver can see it in real time.

“Nvidia is smart to go after a platform approach,” said Patrick Moorhead, analyst at Moor Insights & Strategy, in a statement. “They have the technology to make it happen, but getting developers and customers to do a lot of the work is even more important. Just look at Apple and Google and how they have leveraged ecosystems. If Nvidia can convince carmakers and car electronics makers that their vision of cars is valid, they have an advantage on everyone right now.”
