Nvidia, the publicly traded maker of graphics processing units (GPUs), has been focusing its business increasingly on artificial intelligence (A.I.) after selling considerable quantities of GPUs for that type of computing work to big companies like Facebook and Google. Those GPUs sit in servers, rather than in the desktops, laptops, and mobile devices where Nvidia puts its GPUs for gaming, image processing, and other workloads.
But the use of Nvidia’s GPUs for A.I., and specifically deep learning — an approach that involves training artificial neural networks on large amounts of data, such as images, and then getting the networks to make inferences about new data — has gained particular traction in the technology industry. Now Nvidia wants to see government agencies adopt and expand their use of deep learning, which today typically relies on GPUs, particularly during the training phase.
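To make the training-then-inference split concrete, here is a deliberately tiny sketch: a single artificial neuron trained by gradient descent in plain Python. It is illustrative only — production deep learning stacks many such layers and, as noted above, typically runs on GPUs — and all names and data here are invented for the example.

```python
import math

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    """Training phase: adjust weights until the neuron fits the labeled data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of the log-loss w.r.t. the pre-activation
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def infer(w, b, x):
    """Inference phase: score a new, unseen input with the learned weights."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Toy training set: points above the diagonal y = x are labeled 1, below are 0.
data = [(0.1, 0.9), (0.2, 0.7), (0.8, 0.2), (0.9, 0.4)]
labels = [1, 1, 0, 0]
w, b = train(data, labels)

# Inference on new data the model has never seen.
print(infer(w, b, (0.1, 0.8)) > 0.5)  # above the diagonal
print(infer(w, b, (0.9, 0.1)) > 0.5)  # below the diagonal
```

The same shape — an expensive training loop followed by cheap inference calls — is what scales up to the image-analysis workloads discussed below, with GPUs accelerating the training loop.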
“One of the reasons why I’m going to Washington is I want to talk to a lot of government customers and find out what they’re most interested in and what they want to find out about,” Nvidia chief scientist Bill Dally told VentureBeat in an interview. He is scheduled to give a keynote address at Nvidia’s GPU Technology Conference in Washington this week.
“[Government agencies] certainly process lots of image data of various kinds — reconnaissance data, whether it’s from aircraft, cameras on buildings, or soldiers in the field. It’s always been the case that people collect far more images than they can act on,” Dally said. “Even domestic commercial security applications tend to be used for postmortems — nobody is actually paying attention while a crime is being committed, so you download the footage, go back and pay attention, and see who did it.”
With deep learning, systems get better over time as they receive more information. Nvidia is hoping that government agencies start to grasp that the technology can outperform more traditional machine learning methods.
The corporate event comes a couple of weeks after the White House issued a report on advances in A.I., along with a strategic plan (PDF) that, among other things, suggests that the U.S. “prioritize investments in the next generation of A.I. that will drive discovery and insight and enable the United States to remain a world leader in A.I.”
The move comes as Intel takes steps to build awareness of — and eventually gain market share in — deep learning. For one thing, Intel acquired deep learning hardware and software developer Nervana for more than $350 million. Meanwhile, Microsoft has looked to field-programmable gate arrays (FPGAs) to accelerate certain workloads on its Azure public cloud servers. Google has developed tensor processing units (TPUs) for A.I., and it has also been researching the application of quantum computing to A.I.
Dally — who left his position as head of the computer science department at Stanford University to join Nvidia in 2009 — is skeptical that these alternative kinds of infrastructure will prove very useful for A.I. at scale. Quantum computing is not used widely in production, Google has not provided extensive information about its TPUs, and FPGAs can be inefficient for a variety of workloads, Dally said. As for Intel? “We’re far enough ahead that we don’t have to worry about them chasing us; they’re many years behind,” he said.
Four-year effort
The company’s A.I. push is immediately evident if you tune in to any Nvidia event these days, but Nvidia, which was founded in 1993, has only been heavily investing in the area for the past four years, Dally said.
Around 2011, Nvidia employees took the first steps, in collaboration with Stanford computer science professor Andrew Ng, to move the then-nascent Google Brain deep learning system from 16,000 CPUs onto 48 GPUs, making it faster and more cost-efficient, Dally said. “Out of that exercise,” he said, “we basically realized, ‘Gee, deep learning is going to be a huge application for GPUs. We should start asking the question: What can we do to start making our GPUs better?’”
Meanwhile, at the University of Toronto, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton won the 2012 ImageNet object recognition competition using a deep neural network called AlexNet that they had trained with two GPUs.
As Dally remembers it, the work with Ng and Google likely involved experiments with Nvidia’s Fermi and Kepler generations of GPUs. Following that, Nvidia was only able to make small changes to the subsequent Maxwell architecture, “but we actually made a lot of changes to Pascal, specifically to make it better at deep learning,” Dally said. A GPU based on Pascal, like the Tesla P100, can train 10 times faster than a Maxwell-based GPU, Nvidia has shown. And Pascal GPUs can be connected to one another with Nvidia’s proprietary NVLink interconnect, making it possible for data to be transmitted at 160 gigabytes per second, more than 10 times the bandwidth of PCI-Express, Dally said.
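As a back-of-envelope check on that comparison, the figures below are our assumptions, not from the interview: Nvidia’s published aggregate NVLink bandwidth for the P100 is 160 GB/s, and a PCIe 3.0 x16 link delivers roughly 15.75 GB/s of usable bandwidth.

```python
# Rough sanity check of the interconnect comparison above.
# Both figures are assumptions for illustration:
#   160 GB/s  -- Nvidia's published aggregate NVLink bandwidth for the P100
#   15.75 GB/s -- approximate usable bandwidth of a PCIe 3.0 x16 link
nvlink_gb_per_s = 160.0
pcie3_x16_gb_per_s = 15.75

ratio = nvlink_gb_per_s / pcie3_x16_gb_per_s
print(f"NVLink is about {ratio:.1f}x PCIe 3.0 x16")
```

The ratio lands just above 10, consistent with the “more than 10 times” figure quoted above.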
Today he’s dreaming bigger — bigger than even the DGX-1 box, which packs eight P100s connected over NVLink. “My vision is that we’ll be able to create a much larger system,” Dally said. “We’re doing sort of 10-ish today. I think in the near-ish future it will be many tens and hundreds and, ultimately, thousands of GPUs connected together with very high-bandwidth links that look like a single large GPU.”
Looking toward D.C.
Now, with a team of around 110 researchers, Dally is looking to get the technology more widely used in government. Agencies and contractors can rent GPU-backed instances in public clouds like Amazon Web Services, Microsoft Azure, and IBM SoftLayer, or they can purchase GPUs for deployment in their own data centers. Having multiple deployment options is important, because application needs and budget constraints vary.
Either way, Dally believes GPU-powered deep learning can provide better analysis of media coverage, transcripts of conversations, or streams of video from surveillance cameras — whatever policymakers are most interested in.
“For all of those intelligence-gathering applications, I think deep learning is really going to be a huge part of what they do going forward,” he said. “I hope they’ll do it with Nvidia GPUs.”