Above: Some augmented reality applications should clearly never be created.

Image Credit: Dean Takahashi

GB: It’s like design exhaustion.

Ellsworth: Right. It seems obvious, at least to me, that the push to replace iPhones and tablets with other devices is the pressure that will force these technologies to mature. Otherwise there wouldn’t be pressure to drive investment or force prices down so these devices can exist.

A lot of our work in designing our product is going out and working with vendors and selling them on our dream so we can hit a price point that everyone in the world can afford. Some of the sensors are expensive because they aren’t being used in commodity products yet. Once they’re in commodity products, there will be tons of money for the industry, for whoever lets go of the higher-end markets they’re used to and ramps up production.

Beliaeff: When you watch the trends in engineering talent, it’s interesting. Engineers really are the straw that stirs the drink for this sort of innovation. On the hardware side it’s building in the efficiencies, but on the software side right now it’s really computer vision. When you look at all the investment going on right now, with everyone sponging up every computer vision engineer alive, if this level of investment continues and we start seeing more commercial successes in AR, there’s going to be a shortage of computer vision guys.

We already have people working on toys — these guys used to work on displays for military helicopters. We have multiple PhDs doing stuff. We have guys who’ve contributed to EU computer vision white papers. The talent out there is amazing, but there’s not a lot of it. At some point it’ll either be a driver of consolidation, putting all those smart guys in a bigger room together, or it’ll be a stall-out point.

Ellsworth: On top of that, machine learning is going to be huge in AR. Just with all the volumetrics and the sensors we can put on someone with a wearable device, we can start to understand more about their life and help them. We can predict and learn their patterns, unlocking all of this benefit that couldn’t be there without smarter machines.

GB: It seems like we’ve had this fortunate path. GPU computing came along. It made the big data crunching that’s required for neural networks and pattern recognition possible. Then you get AI and computer vision. Those things almost become something you can drop into any product. You can train a neural network and it’s not going to take 10 years anymore. You’ve had some good progress on the required technologies for what you guys want to do. What else still has to happen, though, to make AR more pervasive and more affordable?

Beliaeff: Jeri touched on this a bit earlier. The components you use need to become available at a mass level. Stereoscopic cameras are an example. We’re seeing glimmers of those in Tango and other things. That’s helpful. The processing chips, getting those bought in mass quantities so they become available at a device level. You touched on getting people to start having AR libraries integrated in their OS, graphics libraries and physics libraries and sound libraries. It needs to be part of the package.

Ellsworth: You were talking about AI and how much benefit it can provide. One of the lessons from my time at Valve Software, when we were trying to figure out how to make games more fun, is that we found all kinds of little things. You could read people’s skin resistance. When you fed that back into the game, it would be just that little bit more fun for the end user. It’s hard to name a product that only makes a game two or three percent more fun, but if you can take a device and pack a bunch of those two or three percents in there, you get real value for the end user. To make AR more viable over the longer term, we’re going to have to find all those little bits, smash them all together in a tight package, and make it seamless for the end user.
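
As an illustration of the biofeedback loop Ellsworth describes, here is a minimal sketch. The read_gsr() sensor call and the difficulty mapping are hypothetical stand-ins, since Valve’s actual instrumentation isn’t public.

```python
# Minimal sketch of a biofeedback loop: a skin-resistance (GSR) reading
# nudges a game parameter each frame. read_gsr() is a hypothetical
# stand-in for whatever API a real sensor would expose.

import random
import time

def read_gsr() -> float:
    """Placeholder sensor read; returns skin conductance in microsiemens."""
    return random.uniform(1.0, 10.0)

def arousal_from_gsr(gsr: float, baseline: float) -> float:
    """Map conductance relative to a resting baseline onto a 0..1 score."""
    return max(0.0, min(1.0, (gsr - baseline) / baseline))

def spawn_multiplier(arousal: float) -> float:
    """Ease off enemy spawns when the player is stressed, ramp up when calm."""
    return 1.0 - 0.5 * arousal  # between 0.5 (stressed) and 1.0 (calm)

baseline = read_gsr()
for _ in range(5):  # stand-in for the per-frame game loop
    a = arousal_from_gsr(read_gsr(), baseline)
    print(f"arousal={a:.2f} spawn_multiplier={spawn_multiplier(a):.2f}")
    time.sleep(0.1)
```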

GB: How do you look at the timeline of what’s going to be possible? When do you want to get products into the market? How much runway do you need as far as funding? Jeri, you guys basically transformed your company in the past year.

Ellsworth: It’s been an interesting ride. We’ve been at it a little more than three years now. Rick and I left Valve. We had to work on a modest product at the time, because it was just the two of us and a bunch of cats in his house. We made a PC peripheral. Now that we’re working with Playground Global and we have more investment, we can lay out some of our bigger dreams for AR. We’re looking at how we can make it seamless, make it 60 seconds to play. Hit the power button, flip the board open, and have your experience.

It’s been great for us. It’s given us the runway to make the product what we want and what we believe the user is going to want. We’ve been able to come up with a strategy with at least three stages: get the customer using the experience on a daily basis, get them using it on the table, and then take the full experience out into the world.

Above: ODG’s booth at CES 2017.

Image Credit: Dean Takahashi

GB: You’ve brought some interesting people onto your team — Peter Dille from PlayStation, Darrell Rodriguez from LucasArts.

Ellsworth: It’s pretty obvious that games are going in this direction. There’s a huge content gap right now. You have hardcore gaming VR experiences on high-end PCs. You have phones for your snackable content. But everything in between, where you can bring your whole family together — it’s really exciting that we can start to make those experiences and tap into that market.

GB: Nick, how are you guys marshalling the resources in your company to focus on AR games?

Beliaeff: We’re big believers in AR. Looking at how to bridge that gap between physical play and digital play, it’s very organic. From a kid’s perspective, they don’t see the “wow” in the technology that we do, because they’re growing up with it. They just expect it to work. But it does work.

For us, a lot of the play patterns we’ve built are layered on top of the core experience you have with an RC toy. Then we add video game depth to it, as well as the social aspect. We all know that’s the glue that keeps people playing anything. Doesn’t matter if it’s a board game or a video game or a toy. If you can do it socially, that’ll keep you into it longer. That’s a lot of where we’re going: multiplayer play. With the car, not only can you play with people in the room and have multiple cars going against the AI cars, but you can do a race, save your ghost, and upload it for a friend across the country to race against. That competition really opens it up.
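
A minimal sketch of how a racing “ghost” like the one Beliaeff describes can be recorded and replayed: log timestamped poses each frame, serialize them, and interpolate on playback. The names and the JSON format here are illustrative assumptions, not the product’s actual data format.

```python
# Sketch of a ghost-racing mechanic: record a car's pose over time,
# save it to a file a friend can download, and interpolate between
# samples on playback. All names here are illustrative.

import json
from bisect import bisect_right
from dataclasses import dataclass, asdict

@dataclass
class PoseSample:
    t: float        # seconds since race start
    x: float        # distance along the track centerline
    y: float        # lateral offset from the centerline
    heading: float  # radians

def save_ghost(samples: list[PoseSample], path: str) -> None:
    with open(path, "w") as f:
        json.dump([asdict(s) for s in samples], f)

def load_ghost(path: str) -> list[PoseSample]:
    with open(path) as f:
        return [PoseSample(**d) for d in json.load(f)]

def ghost_pose_at(samples: list[PoseSample], t: float) -> PoseSample:
    """Linearly interpolate between recorded samples for smooth playback."""
    times = [s.t for s in samples]
    i = bisect_right(times, t)
    if i == 0:
        return samples[0]
    if i >= len(samples):
        return samples[-1]
    a, b = samples[i - 1], samples[i]
    u = (t - a.t) / (b.t - a.t)
    return PoseSample(t, a.x + u * (b.x - a.x), a.y + u * (b.y - a.y),
                      a.heading + u * (b.heading - a.heading))

# Demo: two samples a second apart, query the midpoint.
ghost = [PoseSample(0.0, 0.0, 0.0, 0.0), PoseSample(1.0, 10.0, 0.0, 0.0)]
print(ghost_pose_at(ghost, 0.5))  # x == 5.0 halfway through
```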

We’re taking a bit of a different route, because we use devices our customers already own. We use smartphones and tablets. We don’t care if it’s Android or iOS. There’s enough computing power and fidelity in the cameras that we can get a rich experience out of it. We’re going toward more multiplayer experiences, more immersive experiences, and using things that don’t require a massive investment on the consumer’s part.

GB: What’s going to be the technology that reduces AR glasses to the size of the glasses I have now, do you think? What leads to these things Mark Zuckerberg and Larry Ellison are talking about, AR devices that fit on your head like any pair of ordinary glasses?

Ellsworth: It’s going to be a blend of glasses and non-glasses experiences. If you project out five years, optics will get better. Sensors will get better. Compute and batteries will be small enough to fit in glasses and let us have great experiences. But I imagine a day where I’m on a bus and I replace my phone with my Cast glasses. I call my friends by tapping on my hand. Or I replace my iPad by drawing a square on the back of the seat. Getting to work, it’ll replace one of my compute devices there.

But when I get home, I want to take the glasses off and have a similar type of experience throughout my home. I’m deeply interested in the internet of things. On its own an internet-enabled thermostat isn’t terribly interesting, but if I can interact with it the same way I interact with my glasses — if I can wear my glasses and see my thermostat across the world and turn it down with a gesture — that’s actually what we’re going to see. It’ll be a blend of different display techs, different glasses, different sensors and ways for us to control and interact with all the devices in our home.
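
To make the glasses-to-IoT idea concrete, here is a minimal sketch of a gesture becoming a thermostat command. The gesture names, topic string, and print-based transport are hypothetical placeholders for whatever recognizer and messaging layer a real system would use.

```python
# Hypothetical sketch: a hand gesture recognized by AR glasses becomes
# a command for a connected thermostat. Gesture names, the topic string,
# and the transport are all illustrative; no real product API is implied.

import json

def recognize_gesture(frame) -> str | None:
    """Stand-in for the glasses' gesture recognizer."""
    return "swipe_down"  # pretend the hand tracker saw a downward swipe

def publish(topic: str, payload: dict) -> None:
    """Stand-in for a real transport such as MQTT; here we just print."""
    print(f"{topic} <- {json.dumps(payload)}")

GESTURE_COMMANDS = {
    "swipe_down": {"setpoint_delta_c": -1.0},  # turn the thermostat down 1 degree
    "swipe_up": {"setpoint_delta_c": 1.0},     # turn it up 1 degree
}

gesture = recognize_gesture(frame=None)
if gesture in GESTURE_COMMANDS:
    publish("home/livingroom/thermostat/set", GESTURE_COMMANDS[gesture])
```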

Beliaeff: Whether it’s VR or what I’ve seen released of AR so far, it’s definitely been function over form. It gets the job done, but if you put it on, you don’t look cool doing it.

GB: Google Glass.

Beliaeff: Not exactly what you want. What’s going to drive adoption is when the platform, whatever’s going on with the hardware, is stable enough that you stop worrying about function and can start getting some engineers worried about the aesthetics of it. When you put them on, it’s not obvious that you’re wearing an AR device. It’s when you’re wearing glasses that look like glasses, when it doesn’t look like you’re spying on someone. Once you have that natural, organic integration, that’s when adoption will start going through the roof.

Ellsworth: I think it might be halfway in between. It might be when the end user gets comfortable with these glasses that look slightly different. I look at Google Glass and it was really interesting. It was pretty stylish for what it did. But it was so early. Our natural reaction is to buck anything so different and new. If you look at Snapchat glasses, I’m pretty excited about how that’s moving customers forward. It doesn’t seem like they’re getting the same negative reaction to that.

Beliaeff: But Snapchat’s glasses help make the point. They actually look like normal glasses. You don’t look like you’re putting on a construct.

Ellsworth: Yeah. The end user is going to move partway, and then technology is going to move the rest of the way over the next five or 10 years. Eventually we’ll all be walking around with AR devices.

Above: Pokemon Go.

Image Credit: GamesBeat

GB: We’ve been talking about the ultimate nirvana of augmented reality, but what level of graphical fidelity is going to be good enough to blend with the real world? It seems like one of the lessons of Pokemon Go is that you don’t really need great graphics to make a very successful AR experience.

Beliaeff: The brand authenticity — you can critique Pokemon Go however you want, but if you watch the cartoons or play the games and see how the universe is presented, and then you play Pokemon Go, it’s so brand authentic. I’m walking down the street and there’s a Pokemon. That’s exactly what happens on the show. It was brilliant at that level. It did a great job of showing that.

From a graphics standpoint, I don’t necessarily know if it’s the fidelity or if it’s the motion-to-photon latency. It’s getting rid of the carsickness when you’re using it. When you use HoloLens, you can get that little bit of a dissociative experience. It’s having a really wide field of view when you’re using the device. You want to make sure that when you take the device off, there’s no hangover.
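
For readers unfamiliar with the term Beliaeff reaches for: motion-to-photon latency is conventionally the delay between a user’s movement and the display updating in response, and figures around 20 ms are commonly cited as the comfort threshold. That number is industry shorthand, not something from this interview. A minimal formulation:

```latex
t_{\mathrm{motion\text{-}to\text{-}photon}}
  = t_{\text{photons leave display}} - t_{\text{head motion}},
\qquad
t_{\mathrm{motion\text{-}to\text{-}photon}} \lesssim 20\,\mathrm{ms}
\quad \text{(commonly cited comfort target)}
```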