Why Intel is excited about self-driving cars

Intel is working with BMW to make self-driving cars a reality.

Image Credit: Intel

There was a time when computer chip makers had no interest in car electronics. But times have changed as self-driving cars and the artificial intelligence used to pilot them have taken off. Now those cars are set to consume a huge amount of computing power and generate a ton of data to be processed in the cloud's internet-connected data centers.

And that’s why Intel is excited about self-driving cars. It is pushing forward with technologies such as AI, computer vision, 5G wireless connectivity, and infotainment centers in vehicles. At CES 2017, the big tech trade show in Las Vegas last week, Intel said it had developed 5G infrastructure chips that will bring connectivity to self-driving cars, and it also said it was working with BMW on cars of the future. And it bought a stake in the mapping firm Here.

Doug Davis is senior vice president and general manager of the automated driving group at Intel. I caught up with Davis at CES 2017 to discuss Intel’s announcements, including its new Intel Go brand for cars.

Here’s an edited transcript of our interview.

Above: Doug Davis is head of the automated driving group at Intel.

Image Credit: Dean Takahashi

VentureBeat: Autonomous cars are big this year. Do you have any news at the show on that front?

Doug Davis: We announced a new brand called Intel Go. It’s intended to encompass what we’re doing from an autonomous driving standpoint. Within that, this week we announced a development platform that spans from our Intel Atom product family all the way to the Intel Xeon product family, aimed at 5G development. It’s FPGA-based (field-programmable gate array), so it allows developers to get started now on 5G for automotive applications. There’s also a software development kit that spans from the car to the cloud, to get developers up and running on those kinds of applications.

The other thing we announced this week that’s directly attributable to what we’re doing in the autonomous space is our 5G modem. It’s the first modem device available for 5G, and it will be available in the second half of 2017.

VB: Every autonomous car is going to need one, basically?

Davis: I wouldn’t say you can’t do an autonomous vehicle without 5G. A lot of the development on cars that are already out on the road is done on 4G LTE technology. But as we look at mainstream deployment, if you’ve been listening to all the different OEMs, they’re all gravitating to about 2021 as the time frame they’re shooting for. If you look at the timing of 5G, it’s very similar.

As we get autonomous vehicles on the road, we’ll go from being drivers to passengers. As passengers we’ll have a lot of spare time. We can do work, do some shopping, watch our favorite series from whatever service we like to watch. That 5G bandwidth will be able to support the end-to-end artificial intelligence that’s necessary for these vehicles. It’ll be able to supply us with broadband connectivity for the things we want to do in the car. But it’ll also have the latency, the speed, to be able to support what’s called “V2X”: vehicle to vehicle, vehicle to infrastructure, vehicle to pedestrian. 5G will support that as well.

As we look at when these mainstream fleets of autonomous vehicles begin to get out on the road, and at the timing for 5G, we think it’ll be important to bring the two together. In the meantime we’ll see development happening on 4G technologies, obviously.

VB: On autonomous cars, Jen-Hsun Huang says Nvidia has a three-year head start.

Davis: But if you think about it, we have our 5G development platform now. You can put that in a car and begin doing development work. They’ll both have a long development cycle, but their timing is very similar.

VB: They got into a cycle of announcing car supercomputer chips. A couple of them are out there now already. For Intel, what’s the road map? Have you put any word out as to which kind of processor will be your main processor for these applications?

Davis: I can say, “Yes, and…” When we think about autonomous driving, it’s not just one chip. An autonomous vehicle will become an artificial intelligence implementation. To be able to put these cars on the road, especially when you think about putting large fleets on the road or making them consumer vehicles, you need to be able to build a model of the vehicle’s behavior. You’ll do that in the data center. That becomes your trained model, built using massive amounts of data. That trained model will then be deployed into the car, the inference that sits in the vehicle. That inference then defines the behavior of the car. But the car will also be collecting lots of data and sending it back to the data center.
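To make that split concrete, here is a minimal, purely illustrative Python sketch of the pattern Davis describes: a tiny "behavior model" is trained on a batch of fleet data (the data center side), and the resulting weights are then used for per-frame inference (the in-vehicle side). The model, data, and names are toy stand-ins, not Intel's actual stack.

```python
# Toy sketch of the data-center-training / in-car-inference split.
# Everything here is illustrative; it is not Intel's pipeline.
import numpy as np

def train(features: np.ndarray, labels: np.ndarray,
          epochs: int = 200, lr: float = 0.1) -> np.ndarray:
    """Data center side: fit a tiny logistic-regression 'behavior model'."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))          # sigmoid
        w -= lr * features.T @ (preds - labels) / len(labels)
    return w  # the "trained model" that gets deployed to the vehicle

def infer(w: np.ndarray, sensor_frame: np.ndarray) -> bool:
    """Vehicle side: decide an action (e.g., brake) from one sensor frame."""
    return 1.0 / (1.0 + np.exp(-sensor_frame @ w)) > 0.5

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))              # stand-in for logged fleet data
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # stand-in label: "obstacle ahead"
model = train(X, y)                         # happens in the data center
print(infer(model, rng.normal(size=4)))     # happens on the in-car compute
```

In the real system the car would also log its sensor data and ship it back upstream, closing the loop Davis mentions, so the data-center model keeps improving over time.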

When we think about this, the data center, the network infrastructure, and the vehicle are all important elements of delivering the solution. The connectivity part of it is very important. Training in the data center will happen on Xeon, Xeon Phi, and FPGAs. We’ll have accelerator devices to accelerate certain algorithms via what we’re doing with Nervana. If you think about what’s happening in the network infrastructure, they’re moving to software-defined networks, with certain functions that get virtualized in the network. We’re delivering Atom through Xeon technologies to support that.

Above: Intel believes that AI will be a huge part of car computing in the future.

Image Credit: Intel

In the car — everybody wants to talk about fully autonomous vehicles, but there will be an evolution from driver assist through L3 all the way up to L4 and L5. We have products that support that capability. From a compute standpoint, it ranges from Atom for driver assist functions now up to Xeon for fully autonomous. But FPGAs play an important role in that. We have FPGA technology that scales accordingly. We’ve also talked a lot about the communications and connectivity part that will sit in the vehicle.

When we think about these things, it’s much more than just a chip. It’s the ability to deliver an end-to-end artificial intelligence implementation. When you think about all those things I just described, nobody else in the industry has that breadth of assets.

VB: You have the car-makers and their suppliers, and all the startups in the space. Who do you work with? Have you set up all those relationships?

Davis: We described the range of companies. There are the OEMs building cars. There are tier one suppliers providing them, typically, with the compute elements that go into the car. Intel would be considered a tier two. We supply products and technologies to the tier ones. That’s the role we play in the industry.

We’re also very mindful and respectful of the OEMs we work with. For those that are at a stage where they’re ready to talk about who they’re working with — of course, we’re thrilled to make those kinds of announcements. We’ve talked about the work we’re doing with BMW and Baidu. Delphi recently acknowledged they’re working with us on the solution they’re building for the OEMs. But today, if you think about all the different autonomous vehicles in development, there are more than 200 based on Intel technology now. We have engagements with many OEMs, many tier ones, but we’re not naming those companies unless they’re at a stage where they’re ready to talk about it.

VB: I was chatting with ARM a bit. They were saying that the ecosystem is changing a lot. In the microcontroller era, everybody had some very specific function they were doing. The suppliers would handle that — chips for the brakes or whatever. It would stay in that section. But now it’s more general-purpose computing and software that spreads across the whole car. No single supplier can be so narrow anymore.

Davis: Obviously our focus, up until recently, has been around infotainment. We’ve done quite a bit for the past several years around driver assist and autonomous technologies, but the infotainment space is an interesting one. We’ve been advocating a technology road map that says you can start to consolidate numerous functions on one of our microprocessors.

We have entertainment and navigation in that center stack, the screen that sits in the middle of the dash. Then the instrument cluster is becoming a display. Rear-view mirrors are becoming a display. The outside mirrors will ultimately become a display. Rear-seat entertainment includes displays. We’ve been advocating for quite some time — why can’t you have all these systems running on a single microprocessor with an operating system that provides a level of robustness and reliability that’s necessary, and then virtualize those workloads on top of that microprocessor?

Above: The Intel Go platform will span from the car to the cloud.

Image Credit: Intel

If you get a chance to go by the Delphi booth, they have a great demo in there using our Intel Atom 3900 processor, the Apollo Lake. They have a cockpit, if you will, where they’re running the infotainment system and the instrument cluster on one Apollo Lake processor. They can reboot the infotainment system while the instrument cluster remains fully functional. If the system has a significant problem, it can still sustain the necessary level of performance. You don’t lose the instrument cluster.

It’s a great demo. It shows exactly what you described in this space. They’re even talking about the ability to drive seven different displays simultaneously with one microprocessor. They have a cool cockpit where they’ve taken the instrument cluster and the infotainment system, and then this little clock display down here. There are multiple displays stacked together. The one on top is fully transparent, so the instrument cluster is three-dimensional. Your infotainment platform is three-dimensional. The little clock is three-dimensional. Each of them is two screens sandwiched together. These gauges really look like physical objects. It’s one of the coolest things I’ve seen. Sorry, but I’m a little excited about it. [laughs]
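The isolation idea behind that demo can be sketched in a few lines. Below is a hedged, purely illustrative Python toy that uses OS processes in place of the hypervisor-level virtualization the real system relies on: the "instrument cluster" keeps running while the "infotainment" process is killed and restarted. The workload names are invented for illustration.

```python
# Illustrative only: OS processes standing in for virtualized car workloads.
# Restarting the infotainment process never touches the instrument cluster.
import multiprocessing as mp
import time

def instrument_cluster():
    while True:                        # safety-critical workload: never stops
        print("cluster: speed/rpm gauges refreshed")
        time.sleep(1)

def infotainment():
    while True:                        # non-critical workload: may be rebooted
        print("infotainment: media UI running")
        time.sleep(1)

if __name__ == "__main__":
    cluster = mp.Process(target=instrument_cluster, daemon=True)
    media = mp.Process(target=infotainment, daemon=True)
    cluster.start()
    media.start()
    time.sleep(3)
    media.terminate()                  # "reboot" the infotainment stack...
    media.join()
    media = mp.Process(target=infotainment, daemon=True)
    media.start()                      # ...while the cluster never missed a tick
    time.sleep(3)
```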

VB: You had your BMW announcement here.

Davis: Wednesday morning, Intel, BMW, and Mobileye held a press event at BMW’s booth. We talked about the progress we’ve made over the last six months. We announced the partnership back on July 1, and this was a report on the progress since then. We’re making great progress on the development of the platform we’ve described, and on this whole notion of how we can work together with other OEMs and tier ones. We said at the beginning that we wanted it to be a state-of-the-art platform that could be used by others in the industry as well. The work that we’re doing isn’t just limited to BMW. We’re continuing that development to enable that capability.

We talked a bit about the work that’s going on. We’ve defined this to a fine level of detail: there are 19 different work streams that define all the effort to make that happen. We also talked about the number of cars being built and put on the road in 2017. We’ll have as many as 40 cars on the road being used to do this development work. If you think about the number of autonomous vehicles out there now, that’s a pretty big number. We’re making good progress.

Above: Intel Go

Image Credit: Intel

We announced the stake and the work we’re doing with Here. There are really two objectives there. The first is to work together with them on the data center technologies and capabilities they’re developing to create, deliver, and update high-definition maps, and then on the way in which those maps are deployed to vehicles on the road. From an Intel perspective, there’s a lot of technology around the data center and how that gets optimized, and the way in which these maps can be delivered through network service providers.

In the data center, Here will maintain these high-definition maps of, essentially, the planet. But when you’re in an autonomous vehicle, you probably don’t need a map of the whole planet in your car if you’re only driving around Vegas. Those cars will get updates based on their location. If you think about driving from Vegas to Phoenix, you’ll get updates of those maps as your trip progresses. The way in which those maps get deployed through the networks will be important. We’ll work together with them to figure out how you build and optimize and deliver these maps, as well as the way in which the car downloads and uses them. The other part is working together with OEMs as far as how they implement the maps within their cars, integrating that into their autonomous vehicles.
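To make the tile-by-tile idea concrete, here is a small hypothetical Python sketch of location-based map updates: the car caches only the HD-map tiles around its current position and swaps tiles in and out as the trip progresses. The tile scheme and the fetch_tile() helper are invented for illustration and are not Here's actual API.

```python
# Hedged sketch of location-based HD-map updates; names are hypothetical.
from typing import Dict, Tuple

TILE_DEG = 0.5  # illustrative tile size in degrees of latitude/longitude

def tile_for(lat: float, lon: float) -> Tuple[int, int]:
    """Map a GPS fix to the grid tile that contains it."""
    return int(lat // TILE_DEG), int(lon // TILE_DEG)

def fetch_tile(tile: Tuple[int, int]) -> bytes:
    """Stand-in for a network download of one HD-map tile."""
    return f"hd-map data for tile {tile}".encode()

cache: Dict[Tuple[int, int], bytes] = {}

def update_maps(lat: float, lon: float, radius: int = 1) -> None:
    """Keep the tile under the car plus its neighbors cached; evict the rest."""
    cx, cy = tile_for(lat, lon)
    wanted = {(cx + dx, cy + dy)
              for dx in range(-radius, radius + 1)
              for dy in range(-radius, radius + 1)}
    for t in wanted - cache.keys():
        cache[t] = fetch_tile(t)          # download tiles entering range
    for t in set(cache) - wanted:
        del cache[t]                      # drop tiles left behind

# Simulated Vegas-to-Phoenix drive: the cache follows the car's position.
for lat, lon in [(36.1, -115.1), (35.2, -114.0), (33.4, -112.1)]:
    update_maps(lat, lon)
    print(len(cache), "tiles cached near", (lat, lon))
```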

Above: Interior of Panasonic’s concept car for autonomous driving.

Image Credit: Dean Takahashi

VB: Do you have a sense of how big this whole auto ecosystem is, with so many startups involved? Nvidia was saying that just in AI alone, they’re working with 1,500 startups, and it seems like a lot of those are in cars.

Davis: When we step back and think about AI, it will span a lot of different applications: health care, manufacturing, drug development, all kinds of things. We’re building an artificial intelligence capability as a company, an end-to-end artificial intelligence capability. The car just happens to be a very complex, comprehensive implementation of artificial intelligence that will be very visible, because we interact with cars on a daily basis.

I don’t know about exact numbers. I haven’t heard a number like that. But it’s clear in the industry that it’s a hot area. A lot of startups, a lot of development. I think there will be a lot of technology investment in that space for a while. Part of it — I always say these things happen when they become economically viable. We’ve all talked about AI for a very long time, but we’re reaching a point where you have the compute and storage capacity in the data center to cost-effectively go do this broad range of AI implementations. You have the ability to deploy those cost-effectively through the networks. Thanks to Moore’s Law, we have the ability to put enough compute and storage in a device to deploy these kinds of solutions.

AI has been around a long time, but it’s a confluence of those things, as well as all the software we need to fuel it. It’s kind of like the internet of things. I like to say that the internet of things is an overnight sensation that’s 20 years in the making. It’s because the compute, the connectivity, the cost of sensors, all those things have finally reached a point where it’s economically viable to implement them.

Above: The Panasonic driverless concept car.

Image Credit: Dean Takahashi
