Underkoffler: There really isn’t. Down here, in the livestream bin, are all the sources that are presently connected. When we started that whole bit of the dialogue, I simply instantiated the live version there. In a way, there’s an appealing literalness to all of this. We can plug in AI. We can plug in machine vision and recognition, machine learning algorithms.
VB: The camera is seeing that as an image, though? It’s not trying to figure out what you’re saying.
Underkoffler: Right. But the opportunity always exists to simply plug in additional peripherals, if you will, which is part of what our technical architecture makes easy and appealing. We just built a prototype a month ago using a popular off-the-shelf voice recognition system where you could control the whole work space just by talking to it.
The multi-modal piece is important, because it gives you the opportunity to use whatever tool you want to control the space, the thing that's most natural or most urgent for you. I want to talk to the room, point at the room, annotate or run it with an iPad or smartphone. You use whatever utensil you want.
VB: How did you get here from Minority Report?
Underkoffler: By about 2003, 2004, in the wake of the film, I was getting a lot of phone calls from big companies asking if the stuff I designed, the way it’s shown in the film, was real. If it was real, could they buy it? If it wasn’t real, could I build it and make it work? It didn’t take many of those before I realized that now was the moment to push this stuff into commercial reality.
We founded the company and started building out—it was literally building out Minority Report. Our very first product was a system that tracked gloves, just like the ones in the film. It allowed the gloves to drive and navigate around a huge universe of pixels. Early customers like Boeing and GE and Saudi Aramco purchased the technology and engaged us to develop very specific applications on top of it that would let their designers and engineers and analysts fly through massive data spaces.
Part of the recognition here is that, with our recent fascination with AI and powerful backend technologies, we're more and more implicitly saying the human is out of the loop. Our proposition is that the most powerful computer in the room is still in the skulls of the people who occupy the room. Therefore, the humans should be able to be more in the loop. By building user interfaces like the ones we prototyped in Minority Report, by offering them to people, by letting people control the digital stuff that makes up the world, you get the humans in the loop. You get the smart computers in the loop. You enable people to make decisions and pursue synthetic work that isn't possible any other way.
For Saudi Aramco and Boeing and GE, this was revelatory. Those teams had the correct suspicion that what they needed all along was not more compute power or a bigger database. They needed better UI. What happened next was we took a look at all this very domain-specific stuff we’d been building for big companies and realized that there was a through line. There was one common thread, which was that all of these widely disparate systems allowed multiple people in those rooms pursuing those tasks to work at the same time. It’s not one spreadsheet up, take that down to look at the PowerPoint, take that down to look at the social media analytics. You put them all up at the same time and bring in other feeds and other people.
The idea of Mezzanine crystallized around that. It’s a generic version of that. All you do is make it so everyone can get their stuff into the human visual space, heads-up rather than heads-down, and that solves the first major chunk of what’s missing in modern work.
VB: What kind of timeline did this take place on?
Underkoffler: We incorporated in 2006. I built the first working prototypes of Oblong’s Minority Report system, called G-Speak, in December of 2004. By 2009, 2010, we’d seen enough to start designing Mezzanine. It first went live in late 2012. We’re four years into the product and we’re excited about how it’s matured, how broad the adoption and the set of use cases are. It’s in use on six continents currently.
VB: How many systems are in place?
Underkoffler: Hundreds.
VB: What is the pricing like?
Underkoffler: At the moment, the average sale price is about half of what you’d pay for high-end telepresence, but with 10 times the actual functional value. We sell with or without different hardware. Like I say, sometimes customers have already invested in a big VTC infrastructure, which is great. Then we come in and augment it. We make the VTC feed, as you’ve seen with Claire, just one of the visual components in the dialogue.
But again, the point is always that we can have the full telepresence experience, which looks like this, and at certain moments in the workflow that might be critical. Then, at other times, Claire just needs to see us and know we haven't left the room. We're still working on the same thing she is. At that point we're pursuing the real work.
VB: It’s taken a while, obviously. What’s been the hard part?
Underkoffler: Making it beautiful, making it perfect, making it powerful without seeming complex. The hard part is design. It always is. You want it to be super fluid and responsive, doing what you expect it to do. A spatial operating environment where you’re taking advantage of the architecture all the way around you, to be flexible for all different kinds of architecture, all of that’s the hard part on our end.
On the other side of the table, the hard part is we're rolling off 10 years where the whole story was that you don't even need a whole computer. You just need a smartphone. The version of the future we see is not either/or, but both. The power of this is that it's portable. The liability is that it's not enough pixels, not enough UI. If we have to solve a city-scale problem, we're not going to do it on this tiny screen. We're going to do it in a space like this, because there's enough display and interaction to support the workflow, the level of complexity that the work entails.
It’s taken a while, as well, for the mindset of even the enterprise customers that we work with to turn the ship, in a way. To say, “Okay, it’s true. We’re not getting enough done like this, where everyone is heads-down.” If we’re going to remain relevant in the 21st century, the tools have to be this big. There’s a very interesting, and very primitive, scale argument. How big is your problem? That maps, in fact, to actual world scale.
VB: How far and wide do you think Minority Report inspired things here?
Underkoffler: All of the technologies that were implied by the film?
VB: Mostly the gesture computing I’ve seen, in particular.
Underkoffler: There’s a great virtuous feedback loop at work there. Minority Report was kind of a Cambrian explosion of ideas like that. We packed so much into the film. But it arrived at just the right time, before the touch-based smartphone burst onto the market. Part of the world's willingness to switch from what we called phones before that, the small keyboard flip-phones and so on, to the more general-purpose experience of the smartphone can be attributed to Minority Report, to a depiction of new kinds of UI that made it seem accessible, not scary, part of everyday life. It goes around and around. Then we put new ideas back into films and they come out in the form of technology and products.
VB: I feel like Westworld is another one of those points, but I’m not sure what it’s going to lead to. It’s being talked about so much that there has to be something in there.
Underkoffler: I think so. In one sense the ideas are cerebral more than visual, which is great. I hope what Westworld leads to is a renewal of interest in consciousness, the science of cognition and consciousness, which is fascinating stuff. Understanding how the wetware itself works. Westworld is definitely unearthing a lot of that.
To pick back up on Minority Report, as we were working on it, as I was designing the gestural system and that whole UI, I was consciously aware that there was an opportunity to go back to Philip K. Dick and do a thing that had happened in Blade Runner. In Blade Runner you remember the holographic navigation computer that Harrison Ford uses to find the snake scale, the woman in the mirror. It’s really appealing and gritty and grimy, part of this dense texture of already aged tech that fills up that film.
But for me as a nerd and a designer and a technologist, those moments in science fiction are a little frustrating. I want to understand how it works. What would it be like to use that? He's already drunk in that scene, barking out these weird numerical commands that don't have any obvious correlation to what's going on. I knew we could show the world a UI that's actually legible. From frame one you know what John Anderton is doing. Viscerally, you know how he's moving stuff around, what effect it has, and what it would feel like to introduce that into your own life.
It was a great opportunity. The feedback, as you’ve been saying, is really immense. The echo that has continued down the last decade because of that is remarkable.
The other piece I wanted to show was a collaborative user interface. Of course Tom Cruise is at the center of those scenes, but if you go back and watch again, there’s a small team, this group of experts who’ve assembled in this specialized environment to solve a really hard time-critical problem. Someone is going to die if they don’t put the clues together in six minutes. They shuttle data back and forth, stuff flying across the room. That was a unique view of a user interface, at least in fiction, that allowed people to work together.
We’ve built literally that. In a sense this is that Minority Report experience. We have lots of pixels all over the room. We can be in the room together and work with everything that all of us bring here.