Question: Yesterday we saw an AI-embedded refrigerator. What do you think is going to be the form factor for AI in the home? Will it be part of individual appliances like that?

Huang: I believe that your home computer will likely be connected to the largest screen in your house, which is usually your TV. For some people, the only screen in their house may be in an interesting place. But there are so many things you want to see and control. You want to see information. It could be baby monitors. It could be security cameras around the house. It could be communicating with a family member through video chat.

Above: Nvidia CEO Jen-Hsun Huang at CES 2017.

Image Credit: Dean Takahashi

Question: One thing we haven’t talked about is your game streaming platform, GeForce Now. I’ve tried Grid. It was very effective on many different platforms. How are you going to roll that out worldwide? Are you building your own data centers, or partnering with other people? How do you plan to keep latency low?

Huang: First of all, the answer is yes and yes. We can partner with a lot of different people. Building out data centers today is much easier than it used to be. We have GPUs and cloud services all over the world now. Amazon has GPUs. Microsoft has GPUs. Google has GPUs. We can host our software on top of our own GPUs inside those data centers. We can co-locate and build specific types of data centers, very highly tuned GPU data centers. There’s a lot of different things we can do. Cloud data centers are everywhere in the world now. It’s a commodity.


Question: How significant is the partnership you announced yesterday with suppliers like Bosch, in order for you to grow your AI car business?

Above: Nvidia.

Image Credit: Dean Takahashi

Huang: Super important. The supply chain of the automotive industry is very specific. The car OEMs provide the vision, the architecture, and some of the engineering, but a lot of the engineering is done in tier ones, as you know. We’re now partnered with the world’s largest in Bosch, and one of the world’s top five in ZF. That’s pretty unheard-of. If you think about all of the other platforms and who’s supporting them, they’re not in the top five. The Nvidia platform has gained a lot of confidence from the tier ones and the OEMs.

Question: Back on GeForce Now, that’s working on Macs now as well. I’m curious about your relationship with Apple, how closely you had to work with them on that, and what else you might be working on with Apple.

Huang: Our relationship with Apple is great, but this is an open platform. It’s just a web service. I don’t really have anything else to say about Apple.

Question: Despite the appeal of Cuda for programming, some people have suggested that neural networks and other workloads, in applications like training, could move to custom chips, whether ASICs, semi-custom parts, application-specific standard parts, or FPGAs. Do you have a thought about how those workloads could move away from GPUs?

Huang: First of all, a GPU is a custom chip. Cuda, we’ve been evolving it very rapidly so that it can get better and better at different workloads. Pascal was really the first GPU where we put a lot of energy into completely changing the architecture for deep learning. You’re going to see us do much more than that.

The way to think about it is, our GPU is just a custom chip. I believe that general-purpose processing is not a good idea for workloads like deep learning. That’s why we evolved our GPU and evolved Cuda to implement custom capabilities necessary for deep learning.

Question: I wanted to ask about your company culture, how you run your company to choose and focus on the problems you solve and organize your employees to foresee those problems. What would you say is Nvidia’s personality?

Huang: A lot of people have described us as maybe the world’s largest startup. I do think we’re very much a startup in the company’s personality. You want to allow the company to be able to dream, to think about the future. In order to do that, to try things, you have to be able to experiment. When you try things, you fail, and if you feel that your company and your friends and all of your colleagues are going to punish you for that, you’ll avoid experimentation.

We don’t do that. We happen to enjoy trying new ideas. If they don’t work out, we learn from it and we move on. The culture is—I don’t know that it’s any more magical than that. We’re just a whole lot of people who are trying to make a contribution. We tend to be good about selecting work that only we can do. We don’t go and select work that other people are doing, just because we think it’s a big market. We only select work that we think we can do and make a unique contribution to the world. If you allow your company to do that, it’s going to find great things to do.

Above: The Nvidia headquarters is full of triangles, the basic building blocks of 3D graphics.

Image Credit: Dean Takahashi

Question: In your thinking, what are the most important advantages the GPU has compared to the CPU, especially in self-driving and AI applications?

Huang: A CPU and a GPU are two different things. They’re both needed inside a computer. It’s like salt and pepper. A CPU was designed for instruction processing. A GPU was designed for data processing. The CPU is very agile, but the GPU can handle very large workloads very fast. The GPU is almost like a jet plane, whereas the CPU is maybe like a fighter plane. One is very agile, but the other has very high throughput. If I want to move a lot of workload, I want a big plane with a big engine. That’s kind of what a GPU is. The two processors are very different. It depends on what job you want to do. One is a truck, the other is a motorcycle. Very agile here, very high throughput there.
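Huang’s truck-and-motorcycle analogy maps to latency-oriented versus throughput-oriented processors. As a rough sketch (a toy cost model with made-up numbers, not a benchmark of any real hardware), you can see why each processor wins a different kind of job: the agile one starts fast but handles few items per step, while the wide one pays a startup cost and then processes a huge batch per step.

```python
# Toy cost model of Huang's analogy (illustrative numbers only):
# the "agile" processor starts each job quickly but is narrow;
# the "throughput" processor starts slowly but is very wide.

CPU_LATENCY, CPU_WIDTH = 1, 4        # fast start, 4 items per step
GPU_LATENCY, GPU_WIDTH = 10, 1024    # slow start, 1024 items per step

def steps_needed(n_items, latency, width):
    """Total 'time' to finish n_items: startup cost plus batched passes."""
    passes = -(-n_items // width)    # ceiling division
    return latency + passes

small_job, big_job = 8, 100_000

# The agile (CPU-like) processor wins the small, latency-sensitive job...
assert steps_needed(small_job, CPU_LATENCY, CPU_WIDTH) < \
       steps_needed(small_job, GPU_LATENCY, GPU_WIDTH)

# ...while the wide (GPU-like) processor wins the big, data-heavy job.
assert steps_needed(big_job, GPU_LATENCY, GPU_WIDTH) < \
       steps_needed(big_job, CPU_LATENCY, CPU_WIDTH)
```

In this toy model the small job finishes in 3 steps on the narrow processor versus 11 on the wide one, while the big job flips: 108 steps wide versus 25,001 narrow. That crossover is the "salt and pepper" point: you want both.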

Question: For AI computing, the cloud has huge capability relative to edge devices. In the future of this computing model, do you see either of those sides shrinking?

Huang: Actually, I think the edge side may go up, while the cloud, of course, is growing very fast. The reason is that we can now put small neural networks in the edge, so that edge devices can be very intelligent. By making the edge device intelligent, with AI, we can have very fast response times. You can interact with your robot and the latency is very short. On the other hand, it reduces the amount of bandwidth necessary for the cloud.

We need to reduce the amount of traffic to the cloud. Today we have billions of devices. In the future we’ll have trillions of smart devices, and they can’t all be uploading video to the cloud for recognition. You want to do recognition locally and upload only metadata to the cloud.
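The bandwidth argument can be made concrete with a toy sketch (hypothetical function names, not any real Nvidia API): if the edge device runs recognition locally, the payload it sends upstream is a few dozen bytes of metadata rather than megabytes of raw video.

```python
import json

# Illustrative sketch of edge-side recognition (hypothetical names).
# An uncompressed 1080p RGB frame is about 6 MB; the metadata
# describing what was recognized is only a few dozen bytes.

FRAME_BYTES = 1920 * 1080 * 3  # raw RGB frame size in bytes

def recognize_locally(frame_id):
    """Stand-in for an on-device neural network: returns labels, not pixels."""
    return {"frame": frame_id, "labels": ["person", "bicycle"]}

def payload_for_cloud(frame_id):
    """Upload only metadata, as Huang describes, instead of the video itself."""
    return json.dumps(recognize_locally(frame_id)).encode()

payload = payload_for_cloud(42)
print(len(payload), "bytes of metadata vs", FRAME_BYTES, "bytes of raw frame")
```

The ratio here is on the order of 100,000 to 1 per frame, which is why trillions of devices uploading raw video to the cloud does not scale, but uploading recognition results does.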

Question: I think of all the decades of failure behind AI, followed by the last few years of success. Using your analogy, was AI just waiting for this truck to arrive, in order to advance?

Huang: Part of it is destiny. Part of it is serendipity. The destiny part is this. We created a processor that’s incredibly good at data processing and high-throughput computing. On the other hand, the deep neural network approach is very computationally brute-force. At some level, the algorithm is simple and elegant, but it’s only effective if you train it with an enormous amount of data. It needs an enormous computational engine to be effective.

When these two came together, I think that’s serendipity. But the elegance of deep learning is that it’s so rich in capability. It just has that one handicap, the need for a lot of computing behind it. That’s why I’ve always felt that deep learning and GPU is destiny meeting some amount of serendipity.

What’s cool about deep learning is that the model is very transportable. Once you understand it, once you’re able to use it, you’ve turned artificial intelligence from an art form into an engineering form. That’s why the number of companies using deep learning is just exploding. It’s a capability you can put your hands around now and really apply. It’s a little bit like when, 40 years ago, it became possible to design your own chips. Companies that started designing chips flourished.

Deep learning, finally you have this tool and this algorithm and this computing platform that allows you to train your own artificial intelligence network. As a result, those companies have flourished as well.
