
Intel bets that perceptual computing will save the PC (interview)

Mooly Eden leads Intel's perceptual computing project.

Image Credit: Dean Takahashi

LAS VEGAS — Intel showed off a lot of cool technology last week at the 2014 International CES. CEO Brian Krzanich gave a keynote speech, while Mooly Eden, the senior vice president of perceptual computing and president of Intel Israel, held a press conference to show how far Intel has come with perceptual computing, or using gestures and image recognition to control a computer.

I sat down with Eden a day after his presentation and quizzed him about the RealSense 3D camera, which can recognize gestures and finger movements. Intel plans to build an inexpensive version of the camera into laptops and other computing devices starting in the second half of 2014. Intel has a lot at stake in the project, as it hopes this will inject new life into the PC market.


Eden said that Intel is making big investments in both technology and content to make perceptual computing real. He also told us that Microsoft isn’t the enemy despite Intel’s support for dual-boot Windows and Android computers as well as its support for Valve’s Linux-based Steam Machines.

Here’s an edited transcript of our interview with Eden at CES.


Above: Intel’s RealSense 3D depth camera board up close

Image Credit: Dean Takahashi

VentureBeat: You debuted the RealSense 3D depth camera at your press conference. Was that difficult for Intel to create?

Eden: This is the first time we went public with the camera. No, a piece of cake. [Laughs] If you compare this to this, you can imagine — any of these things are proprietary, like the laser. It was very complicated to develop. We were trying to break some barriers. We were trying to defy the laws of physics, the laws of optics. We still have a lot of things to prove and move forward.

There’s good news and bad news. The bad news is that it’s very complicated. The good news is that it’s very complicated, so it won’t be easy [for competitors] to close the gap.

VentureBeat: It sounds like putting 2D and 3D together was another challenge.

Eden: No, that’s not a problem. If you’re trying to look at 2D and 3D, this is an infrared camera, and this is standard RGB. RGB is a piece of cake. We know how to take a picture. The problem is when you have that picture and what you call the point cloud. Then you need to take the 2D camera, which is totally different, and put it over the 3D and make sure it matches exactly. That’s what we call texture mapping. It’s not simple. If something heats up or changes, or if this isn’t quite rigid enough to keep it in calibration — we have a lot of challenges.
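
The texture mapping Eden describes comes down to back-projecting every depth pixel into 3D, moving it into the RGB camera’s coordinate frame, and reprojecting it to find the matching color, which is why any drift in calibration immediately shows up as misaligned texture. Here is a minimal sketch of that pipeline, with made-up intrinsics, an identity rotation, and a roughly 2.5-centimeter baseline standing in for the real calibration, which is not public here.

```python
import numpy as np

# Hypothetical calibration values: illustrative stand-ins, not RealSense's actual parameters.
K_DEPTH = np.array([[580.0, 0.0, 320.0],
                    [0.0, 580.0, 240.0],
                    [0.0, 0.0, 1.0]])      # depth-camera intrinsics
K_RGB = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])        # RGB-camera intrinsics
R = np.eye(3)                              # rotation from depth frame to RGB frame (assumed aligned)
T = np.array([0.025, 0.0, 0.0])            # ~2.5 cm baseline between the two sensors

def texture_map(depth_m, rgb):
    """Find an RGB color for every valid depth pixel (naive texture mapping)."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = np.where(depth_m.ravel() > 0, depth_m.ravel().astype(float), np.nan)

    # Back-project depth pixels into 3D points in the depth camera's frame.
    x = (us.ravel() - K_DEPTH[0, 2]) * z / K_DEPTH[0, 0]
    y = (vs.ravel() - K_DEPTH[1, 2]) * z / K_DEPTH[1, 1]
    points = np.stack([x, y, z], axis=1)

    # Rigidly transform into the RGB camera's frame, then project with its intrinsics.
    points_rgb = points @ R.T + T
    u_rgb = points_rgb[:, 0] / points_rgb[:, 2] * K_RGB[0, 0] + K_RGB[0, 2]
    v_rgb = points_rgb[:, 1] / points_rgb[:, 2] * K_RGB[1, 1] + K_RGB[1, 2]

    # Sample colors where the projection lands inside the RGB image; invalid pixels stay black.
    colors = np.zeros((h * w, 3), dtype=rgb.dtype)
    inside = (u_rgb >= 0) & (u_rgb < rgb.shape[1]) & (v_rgb >= 0) & (v_rgb < rgb.shape[0])
    colors[inside] = rgb[v_rgb[inside].astype(int), u_rgb[inside].astype(int)]
    return colors.reshape(h, w, 3)
```

If heat or flexing shifts the rotation or baseline even slightly, every color lands on the wrong 3D point, which is the calibration fragility Eden is describing.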

Above: Intel’s gesture recognition works up close to the screen.

Image Credit: Dean Takahashi

VentureBeat: Is the vision for perceptual computing similar to Microsoft’s Kinect?


Eden: No, totally different. Kinect was a great solution. I salute them for being the first to develop it. But that was more long-range. We’re doing a much closer-range camera, capturing fine detail, doing finger tracking. We have the full depth, compared to other solutions, in order to track your hands and be able to do very fine segmentation. We can extract your face out of the background.

I’d like to see more complementary efforts from Microsoft. We’re going to try to co-develop and cooperate and see what we can do with Skype and things like that. We have different plans for usage.
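
What makes the close-range segmentation Eden mentions tractable is that depth answers "how far away is this pixel" directly, so cutting a face or a hand out of the background can start as a simple range threshold. A toy sketch, assuming a depth map already aligned to the color image; the one-meter cutoff is an arbitrary illustrative number, not an Intel figure:

```python
import numpy as np

def extract_foreground(rgb, depth_m, max_range_m=1.0):
    """Keep only pixels closer than max_range_m: a crude cut-the-person-out mask.

    rgb:     H x W x 3 color image, already aligned to the depth frame
    depth_m: H x W depth in meters (0 where the sensor returned nothing)
    """
    mask = (depth_m > 0) & (depth_m < max_range_m)   # near-field pixels only
    cutout = np.zeros_like(rgb)
    cutout[mask] = rgb[mask]
    return cutout, mask

def composite(rgb, depth_m, new_background, max_range_m=1.0):
    """Green screen without the green screen: drop the near-field subject onto a new backdrop."""
    _, mask = extract_foreground(rgb, depth_m, max_range_m)
    out = new_background.copy()
    out[mask] = rgb[mask]
    return out
```

A production version has to clean up the mask edges, which is exactly the rough outline visible in the green-screen demos discussed below.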

VentureBeat: It seems like you have a big job ahead as far as getting applications on it.

Eden: Definitely. That’s the biggest challenge. We announced a lot of collaboration [at CES]. We have the right budget to invest and make it happen. We’ve already released the SDK. Once we start ramping up to a bigger installed base, we expect the developer community to use the SDK and the whole thing will feed itself. But it’ll need a little jump-start.


VentureBeat: When I look at some of the demos, it seems like they’re somewhat rough. You can see the outlines of the person in the green-screen effect where you put them into a new background.

Eden: It’s still not production, no. Some of the examples we’re showing use very complicated algorithms. Some of them are much easier than others. Some of them we’re still working on. Not all of them are done. But we still have time. We need to refine things. It’ll improve as we go forward.

Above: Intel produced the Hoplites gesture game demo.

Image Credit: Dean Takahashi

VentureBeat: When you’re able to launch this, do you think you’ll have a set of applications ready?

Eden: Definitely. We can’t launch without applications. How do you sell hardware if you can’t show software, if you can’t show the user experience? What are you buying? We need a set of compelling applications so people will want to use it, and then we can say, “By the way, we’ll have more and more as we go forward.”


VentureBeat: What are some of the other things coming along, like eye-tracking?

Eden: We didn’t comment on eye tracking. It’s a very interesting technology. We’re looking into it. But officially, we haven’t said anything. It could be part of the overall Perceptual Computing effort, because if I know where you’re looking, that’s important information. I can deliver data based on your interest. But for the time being, we haven’t announced anything regarding eye tracking.

VentureBeat: Among the things that Brian Krzanich showed in his keynote speech, were there some perceptual computing projects, such as the gaming thing?

Eden: The scavenger, the guy that’s going all over like this? That was done by our team, yeah. We were using the system in order to demonstrate the capabilities of what we think can be done. It’s a different usage, because it’s not in your face. You’re looking at the world.


Above: Intel set a whale aloft above the crowd at its keynote at the 2014 International CES.

Image Credit: Dylan Tweney/VentureBeat

VentureBeat: Was the Leviathan whale demo part of your project as well?

Eden: We’re part of it. What you saw there is augmented reality. It’s done by Interlab, at one of the universities, and our team in Israel. The idea was, I take the tablet and I look at you. Through the tablet, I see the real world. Now I see augmented things running that you don’t see as part of the picture on the screen. What we couldn’t demonstrate is the integration of 2D. When you go around, you see the world from a different place.
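
Mechanically, that kind of overlay comes down to tracking the tablet camera’s pose and projecting the virtual creature’s world position into the live frame on every frame. A bare-bones sketch with made-up intrinsics and a pose supplied by some outside tracker; this is only an illustration, not the Leviathan demo’s actual pipeline:

```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])            # hypothetical tablet-camera intrinsics

def project_to_screen(point_world, r_cam, t_cam):
    """Project a 3D world point into pixel coordinates for the current camera pose."""
    p_cam = r_cam @ point_world + t_cam    # world frame -> camera frame
    if p_cam[2] <= 0:                      # behind the camera: nothing to draw
        return None
    u, v, w = K @ p_cam
    return int(u / w), int(v / w)

def render_overlay(frame, creature_world_pos, r_cam, t_cam, sprite):
    """Paste a rendered sprite of the virtual creature wherever it projects on screen."""
    uv = project_to_screen(creature_world_pos, r_cam, t_cam)
    if uv is None:
        return frame
    u, v = uv
    if u < 0 or v < 0 or u >= frame.shape[1] or v >= frame.shape[0]:
        return frame                       # projected off-screen this frame
    h = min(sprite.shape[0], frame.shape[0] - v)
    w = min(sprite.shape[1], frame.shape[1] - u)
    frame[v:v + h, u:u + w] = sprite[:h, :w]
    return frame
```

Because the pose changes as you walk around, the creature stays anchored in the room rather than glued to the screen, which is what Eden means by seeing the world from a different place.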

VentureBeat: Everyone was looking for the whale in the real world.

Eden: Try to imagine your kids playing a game, where suddenly you bring the whole universe to life. Think about education. I’m trying to educate people about the whale, and suddenly I just show it to them. Or a dinosaur. You can go exploring by yourself. If you want to know what it looks like inside, you could even get closer and just move inside it.


The idea was just to whet your appetite and show some of the opportunities. Eventually, we’re going to release the SDK and harness all this great innovation around us.

Above: Mooly Eden relaxes after leading an Intel press event at CES.

Image Credit: Dean Takahashi

VentureBeat: What sort of path looks the most promising for the best applications? If I look back on history, hardcore gamers love their Xbox 360 and Xbox One games, but they don’t like Kinect games that much.

Eden: That’s a fair question. At the beginning, we’ll see more casual gaming. The hardcore gamer with a Core i7 processor and two graphics cards, they’re not much more than a small percentage of the market, and that’s not the first audience we’re going to move toward. But one of the biggest CEOs in the gaming industry has said to me, “If I can monitor your face while you’re playing a game and change the flow of the game based on your emotions, that’s very interesting.” When you look at perceptual computing, it’s not just what you can tell the computer through your gestures. It’s what it can learn about you.

If you’re asking me about shooting games, hardcore games, maybe not yet. But if we’re talking about using this technology to make games more immersive, yes, there’s definitely much more to see. What are you trying to do in a game? You’re trying to create an augmented reality, to take the real world and make it better. The more immersed you are, the better the game is. I’m not sure we’ll replace the mouse, but the more three-dimensional we are, the more we’ll be able to do.
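
The emotion-driven pacing that Eden’s CEO acquaintance describes is easy to sketch in outline: estimate the player’s expression from the camera feed, then nudge the game’s flow in response. Everything below is hypothetical; the classifier is a stand-in, and none of these names are Intel or RealSense APIs:

```python
def classify_emotion(face_frame):
    """Stand-in for a real expression classifier running on the RGB and depth stream."""
    raise NotImplementedError("hypothetical: plug an actual model in here")

def adapt_difficulty(current_level, emotion):
    """Nudge the game's pacing based on how the player seems to feel."""
    if emotion == "frustrated":
        return max(1, current_level - 1)   # ease off before the player quits
    if emotion == "bored":
        return current_level + 1           # ramp up to re-engage
    return current_level                   # engaged: leave the flow alone
```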

Above: Intel’s RealSense camera can see your fingers.

Image Credit: Dean Takahashi/Intel

VentureBeat: At the same time, I think gamers also want something different now. Maybe they’re not satisfied with just one more generation of consoles and a standard game controller.

Eden: I don’t know all the answers, but I feel like bringing in additional senses will make games more interactive. When you speak to me and we’re looking at each other — in perceptual computing we call it multi-modality.

Today, when you’re playing with a computer, I think it’s safe to say that it’s engaging two things. You have sound created by the gameplay, shooting or whatever, and you can see what’s happening. But what if the game could measure your heartbeat and use that? We can bring additional dimensions to games. We’ll definitely work within the ecosystem. To bring out just one title costs tens of millions of dollars. We need to show what we can do with the SDK and work with people to make it happen.

VentureBeat: If I were Microsoft watching everything Intel shows at its press conference, I would ask if these guys are my enemy or if they are still my partner. You had Android onstage. You had Steam onstage. You’re doing your own perceptual thing.

Eden: First of all, we’re not enemies. If you look at the overall ecosystem, each company is trying to optimize its own things. Microsoft is doing things with our competitors. They’re working with ARM and such, which is the right decision for them. Intel is working with Microsoft, and working with Android as well. I’m not sure if we’re all happy, but at the end of the day there’s a huge amount of space for collaboration.

They have a great operating system. They’re still a great company. They’re a collaborator in some spaces, and in some spaces they’re working with someone else. I respect their decision, in the same way that I think they respect ours.

Above: iJustine showed off RealSense working with Microsoft’s Skype.

Image Credit: Dean Takahashi

VentureBeat: How are you deciding what you have to do yourselves, as opposed to something like Oculus, with their own headset?

Eden: It’s different. What they’re doing is nice. I had a chance to play with it, and I think it’s great for gaming. But it’s not what we’re doing. When I speak about collaboration and simple gaming—I don’t want my five- or six-year-old to sit with this on his head all the time. It’s complementary, but it’s a different market, a different usage. I believe there’s space in the market for more than one company.

VentureBeat: Within all of this, where do you see the biggest computing challenge?

Eden: There are two computing challenges. One is to close the gap between our computing capability and the human brain. It’s going to happen. It’s scary, but it’s going to happen. I can tell you what the futurist Ray Kurzweil said. I believe he said that we’ll be able to get to the computing power of a mouse by 2016, the computing power of a human brain by 2020, and the computing power of humanity by 2050.

Of course, we have a lot of redundancy in what we know as a race. Maybe 80 percent of our knowledge is common. And he might be wrong as far as the timing — it could be 2025 or 2035. But technology is moving forward. The challenge for computing is only in how the architecture is going to change. So that’s one challenge. The other challenge is the man-machine interface. I want a computer device that will work with me in the same way I can work with you. That’s my utopia. Will it happen? Definitely. The question is just when.

The one thing that’s a little bit scary, although this is just one man’s opinion — this is not a technical revolution. It’s a social revolution. It will change us. If you think we can have machines that will think like humans — you know Isaac Asimov’s stories about robots. If you believe that by 2050, 2100, a machine could have the capacity of the human race, some futurists claim there won’t even be a homo sapiens anymore. We won’t be the homo sapiens we’ve known for the last several thousand years. One way or the other, this won’t be something I know. It will be something totally different.

Above: Eden makes a point at CES.

Image Credit: Dean Takahashi

VentureBeat: You started perceptual computing before Brian Krzanich was CEO, right?

Eden: Yes. It took us more than six months to do this. [Laughs]

VentureBeat: Has he helped change your direction or your focus?

Eden: No. We can just focus on what we’re doing even more. In order to advance, you need resources. But we’re still exploring. There are a lot of questions we need to answer. You learn as you move. This is a very risky venture, but it’s been fun.

VentureBeat: When you had the brain chart in your presentation, it almost seemed like you had the computing part done. Then you put the sensors around it, and then you had the brain. It’s sort of like thinking that the computer is the brain, right?

Eden: It’s a good question. During my presentation, if I had done too deep a dive, I probably would have lost 90 percent of the audience. Overall, we need more computing power. It’s no secret that there’s a big difference between a microprocessor and the neurons in the brain. They say that one neuron can be connected to as many as 10,000 other neurons. One transistor can be connected to 10 transistors.

When I say that we’re going to close the gap with the brain, that implies we’ll need much more performance, and we’ll need to change many things about architecture. We’ll need much more acceleration. More than 50 percent of the brain goes toward simply deciphering what your eyes see. So what I was trying to allude to was the need for that additional performance. Even if you tried to simulate it differently — and I believe Microsoft is trying to do that — the compute power necessary is huge. There’s not enough. Today, our brain uses only approximately 20 watts.
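
Taking the figures Eden cites at face value gives a rough sense of the gap: the per-element connectivity differs by three orders of magnitude, and the brain does all of its work on roughly 20 watts. The neuron count below is an added, commonly cited estimate rather than something Eden said, and all of this is order-of-magnitude arithmetic only:

```python
neurons = 86e9                  # added assumption: a commonly cited estimate, not Eden's figure
synapses_per_neuron = 10_000    # "one neuron can be connected to as many as 10,000 other neurons"
fanout_per_transistor = 10      # "one transistor can be connected to 10 transistors"
brain_power_watts = 20          # "our brain uses only approximately 20 watts"

print(f"connections to emulate: ~{neurons * synapses_per_neuron:.0e}")      # on the order of 10^15
print(f"connectivity gap per element: {synapses_per_neuron // fanout_per_transistor}x")
print(f"power budget to match: {brain_power_watts} W")
```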

Above: Eden explains perceptual computing at CES 2014.

Image Credit: Dean Takahashi

VentureBeat: Maybe, before we get there, getting all the sensors in place is still a big project.

Eden: It’s a big project, but now I believe it’s almost inevitable. You can argue that the jury is out and we still need to prove it. But the fact is that people want a natural man-machine interface. You’ve seen it in books. You’ve seen it in movies. That futuristic, science-fiction stuff, that’s what people want. It’s been a dream for many years. But we’re on the verge of taking the fiction away from the science.

This couldn’t have been done three years ago. Intel couldn’t have done it three years ago. We may be delayed by a year or two, but I’m confident that within a few years we’ll see this work.

By the way, you asked me how I know this is going to be successful. When you’re working with the consumer — why do you buy something today? Because you love it. Not because you need it. You already have what you need. How do you know you love something? When you smile, when you say “wow.” Here’s a picture of you when you were playing this thing. [Laughs] We did a lot of professional testing, and the jury is still out. But if you ask me about the adoption of stuff like this, if it leads to those kinds of smiles, we have a great opportunity.

 
