Nvidia CEO Jen-Hsun Huang dives deep into gaming, VR, autonomous cars, and Shield TV

Jen-Hsun Huang, CEO of Nvidia, delivers the opening keynote address at CES 2017.

Image Credit: Nvidia

You could say that Nvidia is at its high-water mark. The company’s stock price has tripled in the past year. It has transformed itself from a maker of graphics chips to an artificial intelligence company. And CEO Jen-Hsun Huang gave the opening keynote speech at CES 2017, the big tech trade show in Las Vegas this week.

After that keynote, Huang sat down with a small group of press and answered questions for an hour. Huang talked about how the serendipitous and destined combination of programmable graphics processing units (GPUs) and deep learning neural networks enabled breakthroughs in AI that are leading to further breakthroughs in autonomous cars and voice controls. He also dove deep into PC gaming, virtual reality, Nvidia’s partnership with Nintendo on the Switch console, self-driving car partnerships with Mercedes-Benz and Audi, and Shield TV.

I participated in the group Q&A with Huang. Here’s an edited transcript of the press event.

Above: Gary Shapiro of CTA on stage with Jen-Hsun Huang of Nvidia at CES 2017.

Image Credit: Dean Takahashi

Jen-Hsun Huang: We had a keynote yesterday, and I talked about three things. The first thing I talked about is that PC gaming and GeForce are thriving. It’s thriving for a lot of reasons. PC gaming is the only gaming platform that’s global. It’s global by design. It’s global by economics. It’s global because it’s based on the PC. It’s based on an essential tool for humanity. People want to buy a lot of things, but people need to buy a PC.

I also made the point that almost every human is a gamer. I believe that my parents’ generation, none of them were gamers, but my children’s children’s generation, everyone will be a gamer. And yet there are only several hundred million gamers in the world today, which suggests that gaming still has the opportunity to grow by a factor of 10. Gaming is thriving. It’s a global business. Everyone is going to be a gamer.

Production value of games is increasing fast. To play a Call of Duty game at a reasonable level today, compared to five years ago, you need 10 times the computational horsepower. All of a sudden 4K is here. HDR is here. VR is here. The technology that drives the industry, the production value, is increasing incredibly fast.
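
As a rough illustration of why those resolution jumps are so demanding, here is a back-of-the-envelope sketch; the arithmetic is ours, not an Nvidia figure:

```python
# Back-of-the-envelope: pixel throughput at 1080p vs. 4K, both at 60 Hz.
# Illustrative only; real GPU load also depends on shading cost per
# pixel, HDR precision, and frame-rate targets.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    return width * height * fps

p1080 = pixels_per_second(1920, 1080, 60)
p4k = pixels_per_second(3840, 2160, 60)

print(f"1080p60: {p1080 / 1e6:.0f} Mpixels/s")   # ~124 Mpixels/s
print(f"4K60:    {p4k / 1e6:.0f} Mpixels/s")     # ~498 Mpixels/s
print(f"4K pushes {p4k / p1080:.0f}x the pixels of 1080p")
```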

Gaming is driven no longer just by the fact that it’s fun to play. Gaming is now a sport. You guys have known this for a while. You also know that the League of Legends finals was viewed by more people than the NBA finals. I made the claim that, long term—who knows how long the long term is, but esports could be the world’s largest gaming genre. It could be bigger than soccer, bigger than football, bigger than…swimming, I don’t know. [laughter] It could be quite large. It’s already incredibly large. 100 million mobile gamers. 325 million people watching other people play.

Gaming is also social. It’s a way of hanging out. Don’t forget that when three of your friends are gamers, when they’re playing Overwatch and you’re not, it’s hard to hang out with them. It’s no different from a game of pickup basketball or anything else that we do. It’s a social network. The more of your friends are gamers, the more gamers you get to know. It’s a positive feedback system.

We also have seen, in just the last few years—it’s Twitch that started it, but it’s not only Twitch. One of the fastest-growing segments of YouTube is video game programming. You want to learn how to get to the next level. You want to see somebody’s spectacular feat. Maybe you’ve used video games as a platform for art. You do something really incredible, like capture a short story. Video games, as a way to share your victories, share your moments, share your art—it’s become a really fast-growing medium.

All of these factors, these dynamics, are best on PC. That’s the reason why the PC industry has grown, why the PC gaming market has grown so fast, and it’s now the largest gaming platform. We see this dynamic continuing. We’re quite excited about it.

I talked about AI in the home. Ultimately the home needs a home computer. Your home computer is no longer your PC. Your personal computer belongs to you. Everybody has their own personal computer. There’s no concept of a “home computer” anymore. But I believe there needs to be a home computer. It’s likely that your home computer is your entertainment computer. We’ve always felt that Shield is an entertainment computer. It brings a modern way of enjoying content to the home. But over time it becomes more and more powerful. Eventually it controls and connects to the whole house.

I talked about autonomous driving. We gave an update on our Autopilot platform – our processor, our operating system, all the necessary AIs. There’s a misconception that maybe there’s one AI causing the car to drive, but it’s not like that. AI, in the future, is going to be a whole lot of AIs. It’s a new way of developing software. Deep learning is a new way of developing software. A whole lot of software modules and capabilities are inside the car, and they’re all going to be infused with AI.

You still have to break down the functionality of the car. You still have to break down the computing platform into its modular parts and develop it in pieces. But we expect there will be a lot of different AIs. I talked about perception AIs. I talked about driving AIs. I talked about reasoning AIs – where am I? where is everyone else? – and I made a prediction about all that.
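A minimal sketch of what that “many AIs on one platform” decomposition could look like follows; the module names and interfaces are illustrative, not Nvidia’s actual Drive software API:

```python
# Toy decomposition of a car into cooperating AI modules, echoing the
# breakdown Huang describes: a perception AI, a reasoning AI ("where am
# I? where is everyone else?"), and a driving AI, all sharing one world
# model. Every name and interface here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class WorldModel:
    obstacles: list = field(default_factory=list)
    ego_pose: tuple = (0.0, 0.0)

class PerceptionAI:
    """Stand-in for the detection networks run on camera frames."""
    def update(self, frame: list, world: WorldModel) -> None:
        world.obstacles = [obj for obj in frame if obj in ("car", "pedestrian")]

class ReasoningAI:
    """Stand-in for localization: where am I, where is everyone else?"""
    def update(self, odometry: tuple, world: WorldModel) -> None:
        world.ego_pose = odometry

class DrivingAI:
    """Stand-in for trajectory planning over the shared world model."""
    def plan(self, world: WorldModel) -> str:
        return "brake" if world.obstacles else "cruise"

world = WorldModel()
PerceptionAI().update(["car", "tree"], world)
ReasoningAI().update((12.5, 3.1), world)
print(DrivingAI().plan(world))  # -> "brake"
```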

I also said that the AI is no longer going to be just about driving you on autopilot. Even when it’s not driving you, it’ll be looking out for you. You’ll have a copilot. At all times, you’ll have a copilot fully engaged, fully alert, looking at your surroundings. It has surround perception at all times. Not only that, it’s also connected to its perception of what you’re doing. If there are things happening outside the car that are inconsistent with your attention level, it’ll remind you of that. Even when it’s not confident to drive, it should be fully confident to look out for you. There’s always going to be a copilot AI running.
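
Stripped to its essence, the copilot logic compares where a hazard is with where the driver is looking. A toy sketch, with an invented attention threshold:

```python
# Toy copilot check: surround perception reports a hazard bearing,
# driver monitoring reports a gaze bearing, and the copilot warns when
# the hazard falls outside the driver's cone of attention. The cone
# width and the bearings are invented for illustration.

def angular_gap(hazard_deg: float, gaze_deg: float) -> float:
    """Smallest absolute angle between hazard and gaze directions."""
    d = abs(hazard_deg - gaze_deg) % 360.0
    return min(d, 360.0 - d)

def copilot_alert(hazard_deg: float, gaze_deg: float,
                  attention_cone_deg: float = 40.0) -> bool:
    return angular_gap(hazard_deg, gaze_deg) > attention_cone_deg / 2

# A cyclist at the driver's right (90 degrees) while the gaze is straight ahead:
print(copilot_alert(90.0, 0.0))  # True  -> warn the driver
print(copilot_alert(10.0, 0.0))  # False -> already within view
```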

I talked about these three basic ideas. Gaming. GeForce is thriving. In fact—is it okay if I took one of our lines from the blog and shared it? That’s okay, right? I just don’t know what press etiquette is. [laughter]

Above: Jen-Hsun Huang, CEO of Nvidia, at CES 2017.

Image Credit: Dean Takahashi

Question: Is there such a thing?

Huang: I can say whatever I want, right? There’s a piece of news—GeForce is really thriving. In fact, at CES, there are going to be 30 new gaming laptops launching this year. That’s a lot. 30 new GeForce gaming laptops, from every single OEM in the world, powered by our brand new GPU, the GeForce GTX 1050 Ti. These laptops are thin. They’re fast. You get essentially something that’s better than a PlayStation 4 in a tiny laptop computer. Your thin, beautiful little laptop gives you the ability to play PS4-quality games. Everything just works.

We’re also announcing amazing new gaming monitors: the world’s first 4K G-Sync HDR models. If you haven’t had a chance to see them, you must. No lag. No tearing. Full HDR vibrancy on 4K monitors. One is from ASUS and one is from Acer. Those two are going to be fantastic gaming monitors.

GeForce is vibrant. We’ll bring AI to the house, and we’ll turn your car into an AI. It’s either going to be driving you or looking out for you. All right, that’s it.

Question: The AI copilot—I expected something like that before an autonomous car. Can you talk about the timing of that?

Huang: It’s really hard. It’s way harder than an autonomous car. I showed four capabilities. All four capabilities didn’t exist until recently. One singular AI is able to watch where your eyes are looking and where your head is pointing, and to read your lips. We’re doing face perception in real time. Something you and I do very easily, very naturally, is really hard for a computer to do. This camera is going to sit inside your car and monitor everything. It’s watching you, watching your passengers.

Question: So it takes more compute power to do that than–

Huang: The full Xavier. I know. It makes sense. These networks are really big. They’re really deep. Gaze tracking is easy to do poorly, but it’s really hard to do well. Lip reading wasn’t even possible until recently. I mentioned the folks at Oxford we worked with, who inspired this work. It’s very hard.

Above: Jen-Hsun Huang, CEO of Nvidia, at CES 2017.

Image Credit: Dean Takahashi

Question: Does it mean that you’ve dropped voice recognition?

Huang: We’ll do both. But sometimes maybe the environment is too noisy. Maybe you have your windows down. Maybe it’s a convertible. We can still read your lips. It makes sense, right? This car is now an AI, and it has to monitor the driver. But it’s also monitoring all of the environment. Where’s the motorcycle? Where’s the bicycle? Are there pedestrians? Is there a kid playing in the street? Did a ball just roll in front of the car? All this stuff is happening in real time. Copiloting, as it turns out, is really hard, even though it doesn’t drive.

Question: Given Intel and Qualcomm’s offerings, this market seems to be getting very crowded now. What do you think is your competitive advantage? What are the defining factors that separate winners from losers in the autonomous car computing market?

Huang: First of all, it’s a hard problem. It’s a visual computing problem and an AI computing problem. Those two things, we’re incredibly good at them. We’ve dedicated a lot of time to mastering this field. It’s the type of work that we selected, because it’s the type of work that we would be good at. We started working on AI cars, self-driving cars, long before it was going to be a successful market. For the first nine years, while I was working on this, it was largely a non-market. Zero market. We chose it, though, because the work is important. The work is hard, but it’s something I believed we could be good at.

Now it’s going to be a very large market. There are a lot of entrants, as you say. But the fact of the matter is, right now there are very few solutions in the marketplace. Drive PX is not designed to be a demo. It’s designed for production. We’re the only production platform in the world today, running in the Model S. It just started shipping this last month. My sense is that we’re going to be ahead of the world in shipping production Level 4 self-driving cars, probably by about three years.

Question: Talking about Nvidia transforming into an AI company is interesting. Does that mean you leave some work in gaming and graphics behind when you’re prioritizing and doing resource allocation? Are you putting a lot more of your R&D and engineering people to work on AI, as opposed to your traditional business?

Huang: We have a lot of people working on AI. That’s surely the case. When I think about the work that we do, it all has one thing in common: it’s all based on GPU computing. We don’t select the work we do based on whether we think the market is going to be big or not. You guys have known me for a long time. We select the work we do based on the three things I said to this gentleman just now. Is the work important to do? Is it hard to do? Is it work we’re uniquely good at doing? We select work that is consistent with our core competencies, what we’re supposed to focus on.

Almost everything we do is based on GPU computing. We work on four different areas, as you guys know: gaming, virtual reality, AI data centers, and self-driving cars. We only really do those four things. We don’t do anything else. It just turns out that these four things happen to be really cool. They’re really impactful. It’s taken a long time for it to become, if you will, “real.” The reason for that is because it’s hard. But they’re all based on one fundamental thing, GPU computing.

They share a couple of capabilities. One is related to visual and one is related to intelligence, artificial intelligence. Maybe someday somebody will discover that imagination, which is computer graphics, and intelligence, which is AI, deep learning—maybe they’re cousins in some way. The computation of these two problems is highly related. Our ability to imagine and our ability to solve problems could be very similar. I don’t have a philosophical linkage between the two, but the two problems are very similar.

Question: You chose to invest in this market several years ago, and now it happens to be something very important. Did you foresee that shift happening?

Huang: We always work on things that are important. The first question is, “Is this an important problem?” Autonomous vehicles is an important problem. Robotics is an important problem, and a very hard problem. It’s a problem that a company like Nvidia would be very good at solving, because the solution involves visual computing and artificial intelligence. It made sense for us to work on it, even if it took a long time.

The way to think about that is, yes, we absolutely foresaw it coming. That’s why we believed it was important. But it took quite a long time. 10 years is a fairly long time to work on something. But if you enjoy working on something, 10 years comes and goes.

Above: Jen-Hsun Huang, CEO of Nvidia, at CES 2017.

Image Credit: Dean Takahashi

Question: What do you think about reports of accidents in tests of self-driving cars? The Tesla incident was the most prominent.

Huang: It’s really unfortunate. There’s no question that the technology has much to advance. That’s why we’re dedicating so much R&D to it. The problem of self-driving cars is an AI problem, and a really hard one. I just don’t think that the work is done yet. That’s why we’re working so hard at it. It’s obviously a very important problem to solve.

Question: Related to that, one of the challenges that the market is facing is that you have a bunch of companies, yourself included, touting autonomous driving as here and now, yet also saying that it’s not here and now. It gets confusing to reconcile when it’s really here – not one or two cars, or even 40 cars, but millions of cars. Tesla is confusing things even more as to how they define what they’re doing. From your position, how should people think about that? How do we reconcile the present versus the future, what we can do and what we can’t?

Huang: First of all, all of you can control this situation quite well. Just don’t write about it. [laughter] Obviously, the reason why we talk about it, the reason why people are interested, is because transportation is such a big deal. It’s the fabric of society. The internet moves information, but transportation moves things. We need things to live. It’s obviously very interesting. Of course, automobiles also connect with a romantic side of us. We love cars. It’s fun to write about.

Now, I think you have a serious question in there related to how we know how far along this is. I actually don’t feel that most people are confused about the capabilities of their car. Just because they’ve read about this in the news yesterday, they don’t go home and say, “Car, drive.” They know that their cars don’t drive themselves. I drive a Model S. I can tell you that it helps me every single day. It improves my safety.

Question: I guess the question is assisted driving versus autonomous. It seems like everyone is focused on autonomous when assisted is more practical and more useful.

Huang: Maybe what you’re saying is what I’m saying as well, which is—I believe a car, an autonomous vehicle, the first thing it’s going to do is plan a route. This car already has a computer inside, connected to the internet. You say, “I want to go here,” and it’ll plan its route, just like a plane does, just like we all do. Out of that route that it’s planned, parts of it, or all of it, it might be able to do that autonomously. If it can do that confidently, then it’ll do it and do it well.

If parts of that route can’t be done autonomously, it’ll tell you. There are lots of different ways to tell you. We’re saying that even when it’s not driving for you, it should be looking out for you. As a result, this AI car concept is a larger concept than autonomous vehicles. That’s why I didn’t say “autonomous vehicles.” I call it an AI car. I believe this AI car will have two basic functionalities. One, driving for you. Two, looking out for you. That idea, I believe, we can put on the road tomorrow morning and it’ll be a betterment for society. But we do have to finish the technology.
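
One way to picture that route-first framing is a planned route whose legs are each marked by the car’s confidence, so the car drives the legs it is sure about and watches over you on the rest. The legs and numbers below are invented:

```python
# Toy version of "plan the route, drive the parts you're confident in":
# route legs and confidence values are made up for illustration.

route = [
    {"leg": "driveway to on-ramp", "confidence": 0.62},
    {"leg": "highway, 48 miles", "confidence": 0.99},
    {"leg": "off-ramp to office", "confidence": 0.71},
]

AUTONOMY_THRESHOLD = 0.95  # drive autonomously only when fully confident

for leg in route:
    if leg["confidence"] >= AUTONOMY_THRESHOLD:
        print(f"{leg['leg']}: the car drives")
    else:
        print(f"{leg['leg']}: you drive, and the copilot looks out for you")
```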

Above: Nvidia has partnered with Audi on AI cars.

Image Credit: Dean Takahashi

Question: You’ve announced a broad partnership in the auto industry. What kind of roles does Nvidia want to play? Just as a hardware supplier, or do you have ambitions to play a bigger part in the ecosystem?

Huang: We’re just trying to solve the problem. Our plan is not nearly as grand as what you may be thinking. We believe that there are a lot of cars in the world, a lot of car makers, a lot of car services. There are trucks, shuttles, vans, buses, automobiles of all different types. There is no one-size-fits-all solution for all of those, because they all have different problems and capabilities.

In the case of a shuttle, it’s geo-fenced. The flexibility of the service doesn’t have to be infinite. You can have a lot more mapping data. In the case of an individually owned car, that car has to go anywhere. You should be able to drive your Mercedes to downtown Bangalore as easily as Mountain View or Shanghai. The capabilities of that car have to be different, and so are its limitations, because the challenges are different.

I would say that, number one, there’s no one solution for everything. However, the computing platform for AI can be consistent. Just as every computer is different, the computing platform underneath – the processor, the operating system, the AIs – can be very similar. Our strategy is to create the computing platform. We call it the Nvidia AI Car Platform. This platform would be used by tier ones when they work with OEMs. It’ll be used by OEMs. It’ll be used by car companies, shuttle companies, so on and so forth.

Question: You showed us, yesterday, the Shield and how it’s connected to the Google Assistant. Why do you need the Google Assistant when you could do your own thing?

Huang: It turns out that Google Assistant is quite an endeavor. There are two pieces of Google Assistant, or let me say three pieces, that are quite hard to do. One of them is speech recognition and synthesis — automatic speech recognition and text-to-speech. On top of that is the layer called natural language understanding. That’s what I said versus what I meant. If I just said, “Open it,” I could have been talking about opening anything. But what I meant was likely related to what I was talking about just previously. The natural language understanding part of AI is really complicated. That’s what Google Assistant is doing. The back end of that is a search engine. Google, as you know, is quite good at search. It’s not an inconsequential amount of capability.
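
Those three pieces, wired in sequence, might look like the toy pipeline below. Every stage is a trivial stand-in; this is not Google’s implementation:

```python
# Toy assistant pipeline in the order Huang lists: speech recognition,
# natural language understanding ("what I said versus what I meant"),
# then a search back end. All three stages are stand-ins.

def speech_to_text(audio: str) -> str:
    return audio  # stand-in for automatic speech recognition

def understand(utterance: str, context: list) -> str:
    # "Open it" only makes sense in light of what was said previously.
    words = utterance.split()
    if "it" in words and context:
        words[words.index("it")] = context[-1]
    return " ".join(words)

def search_backend(query: str) -> str:
    return f"results for: {query}"  # stand-in for the search engine

context = ["the garage door"]
text = speech_to_text("open it")
print(search_backend(understand(text, context)))
# -> results for: open the garage door
```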

When you’re using Google Assistant, you get used to the capability of that assistant. Once we learn how to use a particular assistant, that assistant has capabilities, strengths and weaknesses, and personality. Over time it’s easy for people to use that capability instead of learning four or five different assistants. I get used to working with the people I work with based on our common understanding of each other. Your assistant’s going to be the same way.

Above: Jen-Hsun Huang, CEO of Nvidia, at CES 2017.

Image Credit: Dean Takahashi

Question: You just mentioned that everything will be an AI problem, and that AI problems are very hard to solve. The biggest companies in the industry are all looking closely at this market. What about smaller companies? How can they keep up and contribute to the market in the future?

Huang: This is a great time for startups. We’re working with 1,500 startups today. Never before in the history of our company have we worked with so many startups.

It’s not completely accurate to say that every problem is an AI problem. It turns out that many tough problems we’ve wanted to solve for a long time are AI problems. We hadn’t been able to solve them because the perception part, the world observation part, the pattern recognition part of the problem, is so hard. We couldn’t solve that part until deep learning came along. Recognizing the information. What am I seeing right now? What’s happening right now? That piece of information is easy for a person, but it’s hard for a computer.

Finally, we’ve been able to solve that problem with deep learning. Once that happens, the output is metadata. It’s computer data. Now that computer data exists, we know exactly how to use it. We know how to apply a computer to it. What we’re really seeing is that AI is solving some problems that we’ve never been able to solve before.

With respect to startups, the thing you’re starting to see is that these AI platforms are being put into the cloud. These perception layers–once the AI is trained, it’s an API. It’s a voice recognition API or an image recognition API or a voice synthesis API. These APIs are sitting in the cloud. We can connect them to our own applications and write new applications. Startups can now use all of these cloud services. It could be Watson. It could be Microsoft Cognitive Services.
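
Consuming one of those cloud perception APIs looks roughly like the sketch below; the endpoint URL and response shape are hypothetical, since Watson and Microsoft Cognitive Services each define their own:

```python
# Sketch of a startup calling a trained perception model as a cloud API:
# send the raw input, get structured metadata back. The URL and JSON
# shape are invented; consult a real service's docs for its actual API.

import requests

def recognize_image(image_bytes: bytes) -> dict:
    resp = requests.post(
        "https://api.example.com/v1/image-recognition",  # hypothetical
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"labels": ["dog", "frisbee"], ...}

# Usage (assumes a local file and a live endpoint):
# with open("photo.jpg", "rb") as f:
#     print(recognize_image(f.read()))
```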

Question: If I’m a startup and my core value is the data that I own, though—if I give away that data, how do you deal with that challenge?

Huang: I know that people say data is the new oil, something like that? It turns out that we all have our own life experiences. That’s what matters. It’s not true that all of these cloud services have all the data in the world. Nvidia designs chips. We have a lot of data about our chips. It’s inside our company. There’s somebody who’s a fisherman, and they own a lot of data about the temperature of the streams where they live. They own a lot of data about that area’s microclimate. That data doesn’t belong to Amazon. Maybe you have a vineyard in France with its terroir and its own special microclimate. That data only exists right there. It’s not available through Google. It’s your data.

That data can be put to good use, finally. The way to think about it is, it’s not true that everybody’s data is going to belong to these cloud services. It’s just not true. We all have our own data. What you’re going to see is, because of AI, these micro-businesses are going to surge. Maybe it’s a brewery. We see people brewing beer with AI now. They have a lot of data about how they brew beer that’s not available on Amazon. It doesn’t belong to Google. It belongs to them. But they can use that information with an AI engine to discover some new insight.

This is a good time for startups. It’s not the opposite.

Question: Looking at the Shield and Spot, do you see any of the interactions there? Are you able to improve those products based on people’s interactions with them, or does it all go straight to Google?

Huang: We’re not intending to collect any data. We’re going to use Shield’s local processing and all of its sensing ability, because there are multiple Spots in the room. We’re going to put Spots everywhere. When the Spots are plugged into the wall – you don’t have to charge them – the system is going to triangulate the sources of sound, and do that quite well. We’re going to take that voice data, that voice signal, and do the processing on Shield. The processing is going to be a deep learning network that does voice recognition and synthesis. It’ll be able to do that quite quickly. Then we’ll send that information to the cloud.

The acoustic waves stay on Shield. The acoustic data stays on Shield. But the actual words, what’s spoken, are sent on to Google’s cloud.
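
That division of labor (raw audio stays on the device; only recognized words travel) can be sketched as follows; the function names are illustrative:

```python
# Sketch of the Shield/Spot split Huang describes: the microphone
# samples never leave the device; a local deep network produces text,
# and only that text is uploaded. Names and values are illustrative.

def local_speech_recognition(waveform: list) -> str:
    """Stand-in for the recognition network running on Shield itself."""
    return "turn on the living room lights"

def send_to_cloud(text: str) -> None:
    print(f"uploading text only ({len(text)} chars), never audio: {text!r}")

waveform = [0.01, -0.02, 0.03]  # acoustic data, kept on Shield
send_to_cloud(local_speech_recognition(waveform))
```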

Question: Since you expanded the Shield TV with all these new features, is there a boundary with the Nintendo Switch? Is there anything you’re not going to do, like more tablet work? Is there some way that the Switch fits in with the Shield TV ecosystem?

Huang: Nintendo Switch is a game console. It’s very Nintendo. That entire experience is going to be very Nintendo. The beauty of that company, the craft of that company, the philosophy of that company—they’re myopically, singularly focused on making sure that the gaming experience is amazing, surprising, and safe for young people, for children. Their dedication to their craft, that singular dedication, is quite admirable. When you guys all see Switch, I believe people are going to be blown away, quite frankly. It’s really delightful. But it has nothing to do with AI.

Above: Nvidia Shield TV set-top box.

Image Credit: Dean Takahashi

Question: You explained the process of data collection with Spot. Can you give us your take on this kind of connected environment as far as protecting users’ data from hacking and other privacy issues?

Huang: I don’t know that my take on cybersecurity is any more novel than anyone else’s. Hacking isn’t enhanced because you’re talking to a device. Cybersecurity still has to be good. We still have to encrypt all of our transmissions. We have to be rigorous with passwords and do all these other things. We still do that. This is no different from a Bluetooth speaker. All the transmissions are encrypted. It doesn’t change the situation. Cybersecurity is just as important.

Question: But the problem seems to be getting bigger and bigger with the number of connected devices in the smart home. Do you think the industry is good enough at protecting all of that?

Huang: That’s the challenge of a connected world. I don’t know that I can give the answer you need.

Question: I’ve seen you speaking many times over the years, always wearing the same jacket. [laughter] How do you feel about where you are, after Nvidia has come so far and is growing so fast?

Huang: It’s very clear that we’re entering a new era of computing. When we all came into the industry, it was during a time when mainframes and minicomputers and client-server were declining in popularity. Personal computers were increasingly successful. The PC industry came. The internet came. Mobile came. For some reason, these phases always last about 10 or 15 years. Over the course of the last 10 or 15 years, technology has galvanized to a point where it’s now possible to do AI.

AI, as you know, is a dream that everyone’s had about the potential of computers for a very long time. For the first time, because of deep learning and because of GPUs making deep learning practical, we’re seeing a new tool that has reignited the AI revolution. That’s where we are. We’re in the middle of that. We’re in the middle of this new computing era, where the work we’ve been doing for 25 years is more important than ever. It’s a privilege to be at the center of the AI revolution. We’re moving and investing as much as we possibly can.

Question: Are the AIs in cars going to be connecting to other parts of the world? Is bringing in Google Assistant maybe an opening for them to compete in the car space? Do you plan to build AI for other parts of our lives?

Huang: If you saw my diagram, on the upper right side it showed these AI assistants. These assistants could be IBM Watson, Microsoft Cognitive Services, Cortana, or Google Assistant. There’s a whole bunch of AI assistants that the car will be connected to. The natural language understanding has to be running in the car, because the car has to interact with you quickly. You’re driving. You don’t have much patience. You can’t ask the car to do something and then wait for it to spend some time thinking. You want natural language understanding to interact with you very quickly.

However, the assistant itself, the AI of information, will be in the cloud. You should be able to decide for yourself, as a customer – do I want to use this assistant or that one? That should be a choice every car-maker offers.
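
A sketch of that split, with fast natural language understanding in the car and a swappable assistant in the cloud; all names here are invented:

```python
# Toy version of Huang's split: NLU runs locally so the car answers in
# milliseconds, while the assistant back end is a pluggable cloud
# service chosen by the customer or the carmaker. Everything is a stub.

from typing import Callable

def in_car_nlu(utterance: str) -> dict:
    """Runs in the car: must respond without a cloud round-trip."""
    destination = utterance.replace("take me to", "").strip()
    return {"intent": "navigate", "destination": destination}

def assistant_one(request: dict) -> str:
    return f"[assistant one] routing you to {request['destination']}"

def assistant_two(request: dict) -> str:
    return f"[assistant two] best route to {request['destination']} found"

def handle(utterance: str, assistant: Callable[[dict], str]) -> str:
    return assistant(in_car_nlu(utterance))

# The same car, two different cloud assistants:
print(handle("take me to the airport", assistant_one))
print(handle("take me to the airport", assistant_two))
```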

Above: You can preorder Nvidia Shield TV for $200 for the 16GB version or $300 for 500GB.

Image Credit: Nvidia

Question: Given where your partnerships are, do you see Germany as leading in autonomous driving technology? Do the U.S., Japan, and China have a chance to compete?

Huang: China and Japan are moving very fast. We just haven’t announced anything. As you know, Japan—if there’s a country that could benefit from AI, it’s Japan. AI is the core technology for the future of robotics. Robotic manufacturing is still very much centered in Japan and Germany. They’re the centers of the automotive industry. Those two industries both benefit greatly from AI. I have a lot of confidence that Japan and China and the rest of the world will have plenty of news to share in the coming year with respect to AI.

Question: You guys are doing well with your GPUs and the self-driving car market. You’re in the new Nintendo console. Why did you feel it necessary to come out with the new Nvidia Shield, a new home entertainment device, and address the consumer market that way?

Huang: We worked on Shield because we felt that the home computer market, the home computing platform, could be revolutionized. There was a time when people thought that your personal computer was your home PC. But it just happened to be a personal computer that was sitting at home. I believe that’s no longer the case. When all of us leave our houses, all of our computers leave with us. Our house is now empty. There are no computers inside. I’ve always felt that the house, like the car, will need a computer. That computer will do all kinds of interesting things that are specific to being at home. There’s enjoying content. There’s communication. Yesterday we talked about AI, so it can control a smart home. It can communicate and engage with you very naturally. We’ve always felt that computer needed to be built.

The question is, in what way would it be built? I thought that Android, connected to the cloud, was a perfect way of doing that. The computing model was great. In the long term, the AI component is so important that I thought the Shield was a good way to do that. That was essentially the idea behind Shield. It’s becoming more and more true.

Question: Yesterday we saw an AI-embedded refrigerator. What do you think is going to be the form factor for AI in the home? Will it be part of individual appliances like that?

Huang: I believe that your home computer will likely be connected to the largest screen in your house, which is usually your TV. For some people, the only screen in their house may be in some interesting places. But there are so many things you want to see and control. You want to see information. It could be baby monitors. It could be security cameras around the house. It could be communicating with a family member through video chat.

Above: Nvidia CEO Jen-Hsun Huang at CES 2017.

Image Credit: Dean Takahashi

Question: One thing we haven’t talked about is your game streaming platform, GeForce Now. I’ve tried Grid. It was very effective on many different platforms. How are you going to roll that out worldwide? Are you building your own data centers, or partnering with other people? How do you plan to keep latency low?

Huang: First of all, the answer is yes and yes. We can partner with a lot of different people. Building out data centers today is much easier than it used to be. We have GPUs and cloud services all over the world now. Amazon has GPUs. Microsoft has GPUs. Google has GPUs. We can host our software on top of our own GPUs inside those data centers. We can co-locate and build specific types of data centers, very highly tuned GPU data centers. There’s a lot of different things we can do. Cloud data centers are everywhere in the world now. It’s a commodity.

Question: How significant is the partnership you announced yesterday with suppliers like Bosch, in order for you to grow your AI car business?

Above: Nvidia.

Image Credit: Dean Takahashi

Huang: Super important. The supply chain of the automotive industry is very specific. The car OEMs provide the vision, the architecture, and some of the engineering, but a lot of the engineering is done in tier ones, as you know. We’re now partnered with the world’s largest tier one, Bosch, and one of the world’s top five, ZF. That’s pretty unheard-of. If you think about all of the other platforms and who’s supporting them, they’re not in the top five. The Nvidia platform has gained a lot of confidence from the tier ones and the OEMs.

Question: Back on GeForce Now, that’s working on Macs now as well. I’m curious about your relationship with Apple, how closely you had to work with them on that, and what else you might be working on with Apple.

Huang: Our relationship with Apple is great, but this is an open platform. It’s just a web service. I don’t really have anything else to say about Apple.

Question: Despite the appeal of Cuda for programming, some people have suggested that neural networks and other workloads, in applications like training, could use custom chips, whether ASICs, semi-custom parts, application-specific standard parts, or FPGAs. Do you have a thought about how workloads could move off GPUs?

Huang: First of all, a GPU is a custom chip. Cuda, we’ve been evolving it very rapidly so that it can get better and better at different workloads. Pascal was really the first GPU where we put a lot of energy into completely changing the architecture for deep learning. You’re going to see us do much more than that.

The way to think about it is, our GPU is just a custom chip. I believe that general-purpose processing is not a good idea for workloads like deep learning. That’s why we evolved our GPU and evolved Cuda to implement custom capabilities necessary for deep learning.

Question: I wanted to ask about your company culture, how you run your company to choose and focus on the problems you solve and organize your employees to foresee those problems. What would you say is Nvidia’s personality?

Huang: A lot of people have described us as maybe the world’s largest startup. I do think we’re very much a startup in the company’s personality. You want to allow the company to be able to dream, to think about the future. In order to do that, to try things, you have to be able to experiment. When you try things, you fail, and if you feel that your company and your friends and all of your colleagues are going to punish you for that, you’ll avoid experimentation.

We don’t do that. We happen to enjoy trying new ideas. If they don’t work out, we learn from it and we move on. The culture is—I don’t know that it’s any more magical than that. We’re just a whole lot of people who are trying to make a contribution. We tend to be good about selecting work that only we can do. We don’t go and select work that other people are doing, just because we think it’s a big market. We only select work that we think we can do and make a unique contribution to the world. If you allow your company to do that, it’s going to find great things to do.

Above: The Nvidia headquarters is full of triangles, the basic building blocks of 3D graphics.

Image Credit: Dean Takahashi

Question: In your thinking, what are the most important advantages the GPU has compared to the CPU, especially in self-driving and AI applications?

Huang: A CPU and a GPU are two different things. They’re both needed inside a computer. It’s like salt and pepper. A CPU was designed for instruction processing. A GPU was designed for data processing. The CPU is very agile, but the GPU can handle very large workloads very fast. The CPU is almost like a fighter plane, whereas the GPU is like a big jet: one is very agile, but the other has very high throughput. If I want to move a lot of workload, I want a big plane with a big engine. That’s kind of what a GPU is. The two processors are very different. It depends on what job you want to do. The CPU is a motorcycle, the GPU is a truck: very agile here, very high throughput there.

Question: For AI computing, the cloud has huge capability relative to edge devices. In the future of this computing model, do you see either of those sides shrinking?

Huang: Actually, I think the edge side may go up, while the cloud, of course, is growing very fast. The reason is that we can now put small networks, artificial neural networks, at the edge, so that edge devices can be very intelligent. By making the edge device intelligent, with AI, we can have very fast response times. You can interact with your robot and the latency is very short. On the other hand, it reduces the amount of bandwidth necessary for the cloud.

We need to reduce the amount of traffic to the cloud. Today we have billions of devices. In the future we’ll have trillions of smart devices, and they can’t all be uploading video to the cloud for recognition. You want to do recognition locally and upload only metadata to the cloud.
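
The bandwidth argument is easy to make concrete with rough numbers (illustrative round figures, not measurements):

```python
# Rough comparison of uploading a raw video frame versus uploading only
# the recognition metadata an edge device would produce. Sizes are
# illustrative; the detection result itself is a made-up example.

import json

def edge_recognize(frame: bytes) -> dict:
    """Stand-in for a small recognition network running on the device."""
    return {"ts": 1483833600, "objects": [{"label": "person", "conf": 0.97}]}

frame = bytes(1920 * 1080 * 3)  # one uncompressed 1080p frame, ~6.2 MB
metadata = json.dumps(edge_recognize(frame)).encode()

print(f"video frame: {len(frame):,} bytes")
print(f"metadata:    {len(metadata):,} bytes "
      f"(about {len(frame) // len(metadata):,}x smaller)")
```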

Question: I think of all the decades of failure behind AI, followed by the last few years of success. Using your analogy, was AI just waiting for this truck to arrive, in order to advance?

Huang: Part of it is destiny. Part of it is serendipity. The destiny part is this. We created a processor that’s incredibly good at data processing and high-throughput computing. On the other hand, the deep neural network approach is very computationally brute-force. At some level, the algorithm is simple and elegant, but it’s only effective if you train it with an enormous amount of data. It needs an enormous computational engine to be effective.

When these two came together, I think that’s serendipity. But the elegance of deep learning is that it’s so rich in capability. It just has that one handicap, the need for a lot of computing behind it. That’s why I’ve always felt that deep learning and GPU is destiny meeting some amount of serendipity.

What’s cool about deep learning is that the model is very transportable. Once you understand it, once you’re able to use it, you’ve turned artificial intelligence from an art form into an engineering form. That’s why the number of companies using deep learning is just exploding. It’s a capability you can put your hands around now and really apply. It’s a little bit like when, 40 years ago, it became possible to design your own chips. Companies that started designing chips flourished.

With deep learning, finally you have this tool, this algorithm, and this computing platform that allow you to train your own artificial intelligence network. As a result, those companies have flourished as well.