Minority Report science advisor builds the most awesome conference room

Tom Cruise in Minority Report inspired lots of tech companies.

Image Credit: 20th Century Fox

John Underkoffler was the science advisor for the landmark 2002 film Minority Report, and he designed the gesture-controlled user interface that Tom Cruise's character uses in the film to solve crimes.

In 2006, Underkoffler started Oblong Industries to build the next generation of computing interfaces, and in 2012, the company began selling commercial versions of the Minority Report interface. These are, famously, gesture-based systems where you can use a wand to make things happen on a big monitor.

But the interfaces do a lot more than that. They are spatial, networked, multi-user, multi-screen, multi-device computing environments. Architects can use them to zoom in on a drawing on the wall, allowing everybody in the room or those watching via video conference to see what's being discussed.

I watched Oblong’s Mezzanine in action at one of the company’s clients, the architectural firm Gensler, which among other things designed the new Nvidia headquarters building in Silicon Valley. It was by far the coolest work room I’ve been in. I picked up conferencing windows and moved them around the screens in the room as if they were Lego pieces.

Oblong has sold hundreds of these systems to Fortune 500 companies, and it has raised $65 million to bring these computing interfaces to the masses. I sat down with Underkoffler at Gensler in San Francisco to discuss his futuristic interface, as well as the accelerating inspiration cycle of science fiction, technology, and video games. That cycle is the theme of our upcoming GamesBeat Summit 2017 event this spring.

Here’s an edited transcript of our conversation.

Above: John Underkoffler, CEO of Oblong, envisioned the gesture controls in Minority Report.

Image Credit: Dean Takahashi

John Underkoffler: Claire is in a room very much like this one. Three screens at the front, some displays on the side. Part of what you’re about to see is that the visual collaborative experience we’ve built, this architectural computer you’re sitting in called Mezzanine, is actually shared. There is shared control in the room. Rather than being a one-person computer, like every other computer in our lives, it’s a multi-person computer. Anyone in the room can simultaneously, democratically inject content and move it around.

The pixels are owned by everyone. These are the people’s pixels. That’s true not just for us in this room, but for all the rooms we connect to. Anything we can do, Claire can do equally. She can grab control and move things around, contribute to the hyper-visual conversation if you will. The point here is to give you a sense of what we’re doing.

I’ll grab Claire there with the spatial wand, the conceptual legacy of the work I did on Minority Report with gestural computing, and we can move through a bunch of content like this. We can use the true spatial nature of Oblong’s software to push the entire set of slides, the content, back and scroll through this way. We can grab any individual piece and move it around the room – really around the room.

VentureBeat: You guys designed the wand?

Underkoffler: Yeah, the spatial pointing wand. It’s next door to the Minority Report gloves, which we’ve also built and deployed for more domain-specific situations. The glove-based gestural work is more sophisticated, more precise in some sense, but it’s also less general. There’s a bigger vocabulary. It’s easy, in a generic computing collaboration context like this, for anyone to pick up the wand and start moving things around the room.

If you are game to type one-handed for a second, I’ll give you the wand. If you just point in the middle of that image, find the cursor there, click and hold, and now you can start swinging it around. If you push or pull you can resize the image. You can do both of those things at the same time. When you have true six degrees of freedom spatial tracking, you can do things you couldn’t do with any other UI, like simultaneously move and resize.
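To make the geometry concrete, here is a minimal sketch of how a six-degrees-of-freedom pointing ray can drive move and resize at once. This is illustrative Python only, not Oblong's g-speak API; the function, the plane-intersection mapping, and the pull-to-enlarge convention are all assumptions for the sake of the example.

```python
import numpy as np

# Illustrative sketch only -- not Oblong's g-speak API. Maps a 6DOF wand
# pose to a cursor position on a wall display plus a scale factor, so a
# grabbed window can be moved and resized in one continuous motion.

def wand_to_window(wand_pos, wand_dir, grab_dist,
                   screen_origin, screen_right, screen_up):
    """Intersect the wand's pointing ray with the screen plane.

    wand_pos / wand_dir: wand position (meters) and unit pointing direction.
    grab_dist: wand-to-screen distance recorded at the moment of the click.
    Returns (u, v) coordinates on the screen plane and a scale factor.
    """
    normal = np.cross(screen_right, screen_up)       # screen plane normal
    denom = np.dot(wand_dir, normal)
    if abs(denom) < 1e-6:
        return None                                  # ray parallel to screen
    t = np.dot(screen_origin - wand_pos, normal) / denom
    if t <= 0:
        return None                                  # pointing away from screen
    hit = wand_pos + t * wand_dir                    # 3D intersection point
    u = np.dot(hit - screen_origin, screen_right)    # horizontal offset (m)
    v = np.dot(hit - screen_origin, screen_up)       # vertical offset (m)
    scale = t / grab_dist                            # pull back: >1 (enlarge)
    return u, v, scale                               # push in:   <1 (shrink)
```

Because the cursor position and the wand-to-screen distance fall out of the same tracked pose, moving and scaling become one continuous gesture rather than two modes, which is the point Underkoffler is demonstrating.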

This truly is a collaborative computer, which means that anyone can reach in, even while you’re working, and work alongside you. If you let go for a second, there’s Claire. She’s just grabbed the whole VTC feed and she’s moving it around. Gone is the artificial digital construct that only one person is ever doing something at a time. Which would be like a bunch of folks standing around on stage while one blowhard actor is just talking. We’re replacing that with a dialogue. Dialogue can finally happen in, rather than despite, a digital context.

VB: This works in conference rooms, then?

Underkoffler: It works in any setting where people need to come together and get some work done. The Fortune 1000 and Forbes Global 3000 companies that we predominantly sell to occupy almost any vertical you can think of, whether it's oil and gas or commercial infrastructure or architecture like Gensler. Commercial real estate. Hardcore manufacturing. Social media. Name a vertical, a human discipline, and we're serving it.

The intent of the system itself is universal. People always need to work together. People are inherently visual creatures. If we can take work, take the stuff we care about, and deploy it in this hyper-visual space, you can get new kinds of things done.

https://www.youtube.com/watch?v=J3aPEx1v1hE

VB: Hyper-visual?

Underkoffler: It’s how it feels to me. It should be as visual as the rest of the world. When you walk around the world, you’re not just seeing a singular rectangle a couple of feet from your face. You have the full richness and complexity of the world around you.

Even if you imagine human work spaces before the digital era—take an example like Gensler here, a commercial architecture and interior design space. Everyone knows what that style of work is. If the one o'clock session is about the new Nvidia building, we come into a room with physical models. We walk around them and look at them from different points of view. You've brought some landscape design stuff. You unroll it on the table. We're using the physical space to our advantage. It's the memory palace idea all over again, but it's very literal.

For the longest time – essentially for their whole history – computers and the digital experience have not subscribed to that super-powerful mode of working and thinking spatially. Mezzanine gives the world a computer that's spatial. It lets us work in a digital form the way that we've always worked spatially.

Everyone knows the experience of walking up to a physical corkboard, grabbing an image, and untacking it from one place to move it over next to something else. That simple gesture, the move from one place to another, the fact that two ideas sit next to each other, contains information. It makes a new idea. We’ve just made that experience very literal for the first time in a digital context.

Although the result is, in a sense, a new technology and a new product, it’s not new for human beings, because everyone knows how to do that already. That’s an advantage for us and for our customers. Everyone knows how to use this room because everyone is already an expert at using physical space.

VB: What kind of platform is it? Is it sitting on top of Windows, or is it its own operating system?

Underkoffler: At the moment it’s a whole bunch of specialized software sitting on top of a stripped-down Linux. It runs on a relatively powerful but still commodity hardware platform, with a bit of specialized hardware for doing spatial tracking. That’s one of our unique offerings.

https://www.youtube.com/watch?v=PJqbivkm0Ms

VB: Are the cameras more generic, or are they–

Underkoffler: Completely. Right now that’s a Cisco camera with a Cisco VTC. We’re equally at home with Polycom and other manufacturers. We come in and wrap around that infrastructure. A lot of our customers have already made a big investment in Cisco telepresence or Polycom videoconferencing. We’re not saying there’s anything wrong with that. We’re saying you need to balance the face-to-face human communication with the rest of the work – the documents and applications, the data, the stuff we care about. Although it’s nice to see people’s faces from time to time, especially at the beginning and end of a meeting, most of the time what we want is to dig in and get to the real work, the digital stuff, in whatever form that takes. From there you start injecting more and more live content, whatever that may be.

One of the experiences is browser-based. There’s a little tiny app you can download onto your Android or iOS platform, smartphone or tablet. A big part of our philosophy is that we want people to bring the tools they’re already most comfortable with as the way they interact with this experience. Anything I can do with the wand, I can also do with the browser. It’s very WYSIWYG. You drag stuff around.

If you like, you can take out your phone. The phone makes you a super-powerful contributor and user of the system as well. Anything you know how to do already in a smartphone context is recapitulated and amplified, so you’re controlling the entire room. You can grab that and move it around the space, dump it over there. You can upload content with the add button at the bottom.
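Oblong hasn't published Mezzanine's wire protocol, but the phone-as-controller idea can be sketched as a small command vocabulary sent to the room. Everything below is hypothetical – the message fields, the window.move verb, the normalized coordinates – and only illustrates how devices of any screen size can issue the same commands a wand gesture would.

```python
import json
import time

def make_move_command(client_id, window_id, screen, x_norm, y_norm):
    """Build a hypothetical 'move window' command for the room controller.

    x_norm / y_norm are normalized [0, 1] coordinates on the target screen,
    so phones, tablets, and browsers of any resolution issue identical
    commands -- the same ones a wand gesture would produce.
    """
    return json.dumps({
        "type": "window.move",
        "client": client_id,      # which participant issued the command
        "window": window_id,      # which shared window to move
        "target": {"screen": screen, "x": x_norm, "y": y_norm},
        "ts": time.time(),        # timestamp for ordering concurrent edits
    })

# Example: a phone drags the videoconference feed to the left wall screen.
print(make_move_command("phone-7", "vtc-feed", "left-wall", 0.25, 0.5))
```

The timestamp matters because, as in the demo, several participants can move the same pixels at once; a shared room needs some rule for ordering concurrent edits.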

That moment right there is indicative of what makes this way of working so powerful. If we were locked into a traditional PowerPoint meeting, there’d be no space, no way that anyone could inject a new idea, inject new content. Whereas here, in under three seconds, if we needed this bit of analog pixels stuck up there—you did the same thing simultaneously.

VB: So phones are the way to get a lot of analog stuff into the screens?

Underkoffler: Yeah. And we can plug additional live video feeds in. One thing that happens there is that we’re—again, we’re very excited about analog pixels. We’re not fully digital obsessives. We can do live side-by-side whiteboarding, even though we’re embedded in this more generic, more powerful digital context.

Then the pixels start to become recombinant. Let’s cut out that little bit and we can augment Claire’s idea with our crummy idea here. Then we can make one composite idea that’s both brilliant and crummy, just like that. That now becomes part of the meeting record. Everything we do is captured down here in the portfolio. Claire, on a tablet on that end, if she were inclined to correct our mistakes, could reach in and annotate on top of that.

In a way, what we’ve discovered is that the real value in computation is always localized for us humans in the pixels. Whatever else is happening behind the scenes, no matter how powerful, at the end of the day the information there is transduced through the pixels. By supercharging the pixels, by making them completely fluid and interoperable whatever the source may be – a PDF, the live feed from my laptop, the whiteboard, whatever – by making all the pixels interoperable we’ve exposed that inherent value. We make it accessible to everyone. Claire just used a tablet to annotate on top of the thing we’ve been working on.

Above: Mezzanine lets you visualize complex projects at a glance.

Image Credit: Oblong

VB: Is there some kind of recognition software that’s taking over and saying, “I recognize that’s being written on a whiteboard. Now I can turn that into pixels”?

Underkoffler: There really isn’t. Down here, in the livestream bin, are all the sources that are presently connected. When we started that whole bit of the dialogue, I simply instantiated the live version there. In a way, there’s an appealing literalness to all of this. We can plug in AI. We can plug in machine vision and recognition, machine learning algorithms.

VB: The camera is seeing that as an image, though? It’s not trying to figure out what you’re saying.

Underkoffler: Right. But the opportunity always exists to simply plug in additional peripherals, if you will, which is part of what our technical architecture makes easy and appealing. We just built a prototype a month ago using a popular off-the-shelf voice recognition system where you could control the whole work space just by talking to it.
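As a rough illustration of that kind of prototype, the snippet below wires an off-the-shelf recognizer to a few room commands. It uses the open-source Python SpeechRecognition package, which is not necessarily the system Oblong used, and the command phrases and room actions are invented.

```python
import speech_recognition as sr  # pip install SpeechRecognition (needs PyAudio)

# Rough illustration of a voice-controlled workspace -- not the system
# Oblong built. The command phrases and room actions here are made up.
COMMANDS = {
    "next slide": lambda: print("room: advancing deck"),
    "show the whiteboard": lambda: print("room: bringing whiteboard feed forward"),
    "clear the walls": lambda: print("room: dismissing all windows"),
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to the room
    print("Listening...")
    audio = recognizer.listen(source)

try:
    heard = recognizer.recognize_google(audio).lower()
    COMMANDS.get(heard, lambda: print(f"room: ignoring '{heard}'"))()
except sr.UnknownValueError:
    print("room: couldn't parse that")
```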

The multi-modal piece is important, because it gives you the opportunity to use the tool you want to control a space. The thing that’s most natural or most urgent for you—I want to talk to the room, point at the room, annotate or run with an iPad or smartphone. You use whatever utensil you want.

VB: How did you get here from Minority Report?

Above: Tom Cruise in Minority Report uses John Underkoffler’s computing interface.

Image Credit: 20th Century Fox

Underkoffler: By about 2003, 2004, in the wake of the film, I was getting a lot of phone calls from big companies asking if the stuff I designed, the way it’s shown in the film, was real. If it was real, could they buy it? If it wasn’t real, could I build it and make it work? It didn’t take many of those before I realized that now was the moment to push this stuff into commercial reality.

We founded the company and started building out—it was literally building out Minority Report. Our very first product was a system that tracked gloves, just like the ones in the film. It allowed the gloves to drive and navigate around a huge universe of pixels. Early customers like Boeing and GE and Saudi Aramco purchased the technology and engaged us to develop very specific applications on top of it that would let their designers and engineers and analysts fly through massive data spaces.

Part of the recognition here is that, with our recent fascination with AI and powerful backend technologies, we're more and more implicitly saying the human is out of the loop. Our proposition is that the most powerful computer in the room is still in the skulls of the people who occupy the room. Therefore, the humans should be able to be more in the loop. By building user interfaces like the ones we prototyped in Minority Report, by offering them to people, by letting people control the digital stuff that makes up the world, you get the humans in the loop. You get the smart computers in the loop. You enable people to make decisions and pursue synthetic work that isn't possible any other way.

For Saudi Aramco and Boeing and GE, this was revelatory. Those teams had the correct suspicion that what they needed all along was not more compute power or a bigger database. They needed better UI. What happened next was we took a look at all this very domain-specific stuff we’d been building for big companies and realized that there was a through line. There was one common thread, which was that all of these widely disparate systems allowed multiple people in those rooms pursuing those tasks to work at the same time. It’s not one spreadsheet up, take that down to look at the PowerPoint, take that down to look at the social media analytics. You put them all up at the same time and bring in other feeds and other people.

The idea of Mezzanine crystallized around that. It’s a generic version of that. All you do is make it so everyone can get their stuff into the human visual space, heads-up rather than heads-down, and that solves the first major chunk of what’s missing in modern work.

Above: You can use smartphones or laptops to control things in Mezzanine.

Image Credit: Oblong

VB: What kind of timeline did this take place on?

Underkoffler: We incorporated in 2006. I built the first working prototypes of Oblong’s Minority Report system, called G-Speak, in December of 2004. By 2009, 2010, we’d seen enough to start designing Mezzanine. It first went live in late 2012. We’re four years into the product and we’re excited about how it’s matured, how broad the adoption and the set of use cases are. It’s in use on six continents currently.

VB: How many systems are in place?

Underkoffler: Hundreds.

VB: What is the pricing like?

Underkoffler: At the moment, the average sale price is about half of what you’d pay for high-end telepresence, but with 10 times the actual functional value. We sell with or without different hardware. Like I say, sometimes customers have already invested in a big VTC infrastructure, which is great. Then we come in and augment it. We make the VTC feed, as you’ve seen with Claire, just one of the visual components in the dialogue.

But again, the point is always that it’s—we can have the full telepresence experience, which looks like this, and at certain moments in the workflow might be critical. Then, at other times, Claire just needs to see us and know we haven’t left the room. We’re still working on the same thing she is. At that point we’re pursuing the real work.

VB: It’s taken a while, obviously. What’s been the hard part?

Underkoffler: Making it beautiful, making it perfect, making it powerful without seeming complex. The hard part is design. It always is. You want it to be super fluid and responsive, doing what you expect it to do. A spatial operating environment where you’re taking advantage of the architecture all the way around you, to be flexible for all different kinds of architecture, all of that’s the hard part on our end.

On the other side of the table, the hard part is that we're rolling off 10 years where the whole story was that you don't even need a whole computer. You just need a smartphone. The version of the future we see is not either/or, but both. The power of this is that it's portable. The liability is that it's not enough pixels, not enough UI. If we have to solve a city-scale problem, we're not going to do it on this tiny screen. We're going to do it in a space like this, because there's enough display and interaction to support the workflow, the level of complexity that the work entails.

It’s taken a while, as well, for the mindset of even the enterprise customers that we work with to turn the ship, in a way. To say, “Okay, it’s true. We’re not getting enough done like this, where everyone is heads-down.” If we’re going to remain relevant in the 21st century, the tools have to be this big. There’s a very interesting, and very primitive, scale argument. How big is your problem? That maps, in fact, to actual world scale.

VB: How far and wide do you think Minority Report inspired things here?

Underkoffler: All of the technologies that were implied by the film?

VB: Mostly the gesture computing I’ve seen, in particular.

Underkoffler: There’s a great virtuous feedback loop that’s at work there. Minority Report was kind of a Cambrian explosion of ideas like that. We packed so much into the film. But it arrived at just the right time, before the touch-based smartphone burst on the market. We can attribute part of the willingness of the world to switch from—what did we call phones before that? Small keyboard flip-phones and stuff like that? And then the more general-purpose experience of the smartphone. We can attribute part of that to Minority Report, to a depiction of new kinds of UI that made it seem accessible, not scary, part of everyday life. It goes around and around. Then we put new ideas back into films and they come out in the form of technology and products.

Above: Evan Rachel Wood as Dolores and Ed Harris as the Gunslinger in Westworld.

Image Credit: HBO

VB: I feel like Westworld is another one of those points, but I’m not sure what it’s going to lead to. It’s being talked about so much that there has to be something in there.

Underkoffler: I think so. In one sense the ideas are cerebral more than visual, which is great. I hope what Westworld leads to is a renewal of interest in consciousness, the science of cognition and consciousness, which is fascinating stuff. Understanding how the wetware itself works. Westworld is definitely unearthing a lot of that.

To pick back up on Minority Report, as we were working on it, as I was designing the gestural system and that whole UI, I was consciously aware that there was an opportunity to go back to Philip K. Dick and do a thing that had happened in Blade Runner. In Blade Runner you remember the holographic navigation computer that Harrison Ford uses to find the snake scale, the woman in the mirror. It’s really appealing and gritty and grimy, part of this dense texture of already aged tech that fills up that film.

But for me as a nerd and a designer and a technologist, those moments in science fiction are a little frustrating. I want to understand how it works. What would it be like to use that? He's already drunk in that scene, barking out these weird numerical commands, and it doesn't have any correlation to what's going on. I knew we could show the world a UI where it's actually legible. From frame one you know what John Anderton is doing. Viscerally, you know how he's moving stuff around and what effect it has and what it would feel like to introduce that in your own life.

It was a great opportunity. The feedback, as you’ve been saying, is really immense. The echo that has continued down the last decade because of that is remarkable.

The other piece I wanted to show was a collaborative user interface. Of course Tom Cruise is at the center of those scenes, but if you go back and watch again, there’s a small team, this group of experts who’ve assembled in this specialized environment to solve a really hard time-critical problem. Someone is going to die if they don’t put the clues together in six minutes. They shuttle data back and forth, stuff flying across the room. That was a unique view of a user interface, at least in fiction, that allowed people to work together.

We’ve built literally that. In a sense this is that Minority Report experience. We have lots of pixels all over the room. We can be in the room together and work with everything that all of us bring here.

VB: Do you think you’re close to being there? Or do you think you’re going to be doing improvements on this for years? Are there things in the film that you still can’t quite do?

Underkoffler: I think it’s fair to say we’ve already exceeded everything we put in the film and that I’d imagined behind the scenes. But our road map at Oblong, what we have, will occupy the next 10 years. We have enough new stuff – not incremental improvements, but genuinely new things – to keep us busy for a decade.

Above: Telling Siri to send money through PayPal.

Image Credit: PayPal

VB: I don’t know what I would guess. Putting computer vision to work, or voice commands?

Underkoffler: There’s a huge set of projects around integration with other modalities like what you’re discussing. Our view, our philosophy, is very clear. We never want that stuff to replace the human as the most important participant. But if machine vision can find business cards in the room, or documents we lay down on the table, and automatically import them into the digital work space, absolutely. If we can make speaker-independent voice recognition work flawlessly in an extended environment like this, where it can even respond to multiple people speaking at the same time, that would be immensely powerful. Then we have a room where all existing human modalities are welcome and amplified. That’s one of the vectors we’re pursuing.

VB: You mentioned that it costs about half as much as a telepresence room. How much do those cost? What order of magnitude are we talking about for one room? Or I guess you have to have two rooms.

Underkoffler: You don’t, actually, and that’s important. Even when there’s one room, the idea that a bunch of people can come in and work with each other, rather than one at a time or separated by these, is transformative. We do as much work internally in a single room, not connected to any other room, as we do when the rooms are connected together. Unlike the telephone or the fax machine, it’s fine if you have only one.

Pricing really depends on the size of your space, how many screens, what kind of screen, what size. Do you already own the screens? Do you want to buy screens? Typically, like we say, it’s half the price of telepresence. Telepresence is really about high fidelity voice and video delivery of people talking to each other, the in-room personal piece. This layers on infopresence, all the content and information everyone brings to the table to discuss. You have the opportunity to surround the entire team. We have two walls here with screens, but it could be three or even four. You can take the notion of immersion as a group to a whole different level that you can’t do with any other kind of technology.

VB: How long before you get to medium and smaller businesses being able to use this?

Underkoffler: Do you remember in Star Trek II, where they decide to not speak openly on an unencrypted channel? [laughs] The answer to your question is: sooner than you might guess. We did just expand our war chest, so to speak, to fuel some wonderful growth and development. There are lots of good things coming. We’ll be introducing at least two major iterations on the product in 2017.

VB: Digressing a bit from Oblong, back to Minority Report, what benefit do you see in attaching a science advisor to a sci-fi film, convincing people that what they’re going to watch is plausible? When did that become a common thing?

Underkoffler: I’d have to nominate The Andromeda Strain, for which Michael Crichton undoubtedly did his own science advising. Robert Wise directed, but I remember seeing it and being blown away, because every element of it – all the technology, the dialogue, even the passion that the scientist characters infused into this fictional world – is real. It may be one of the world’s oddest instances of product placement. I don’t suppose they actually bought telemanipulators for the scenes where they pull the top off the crashed space probe and yank out the synthetic life form and the rest of that. But it’s all real. The excitement of the film derives from the fact that there’s no question it’s real.

The dialogue Paddy Chayefsky wrote in Altered States stands as the single best depiction of how scientists actually sound when they’re excited, and in some cases drunk. That’s cool. After that, one thing that’s interesting to look at is a 1985 film by Martha Coolidge called Real Genius, which was Val Kilmer in a comic mode at a lightly fictionalized Caltech. But all of the stuff, including the student experience and the political interactions and the alliances with DARPA and other government funding agencies, all of it was shockingly real and authentic. It’s because the production hired a guy named Dave Marvit, who has since become a friend. He was a recent Caltech graduate.

If you remember back in those days, there was Weird Science and some other movies, and they were all – with the exception of Real Genius – kind of in the same mold. Someone decided that they would have a very sketchy picture of what it’s like to conduct science, to be an engineer, to be creative in those worlds. Then you hang the presumably ribald comedy on that, where at the end of the day it’s about seeing other naked human teens. But with Real Genius the effect is completely different, because you’re immersed. There’s that word again. You’re immersed in a world that you can relate to. It’s only because of the authenticity of every little detail.

Minority Report was one of the next steps. The film made an investment in that kind of immersion, made an investment in that verisimilitude. It came from the top, from Spielberg, who said he wanted a completely real, completely believable 2054.

Above: HAL from 2001: A Space Odyssey

VB: An example I think of is 2001, where you have the computer that goes rogue, and the HAL initials are a nod to IBM. Had they not put that in—the film is still good, but it’s somehow deliberately reminding you of reality.

Underkoffler: That was part of Kubrick’s genius. Now we’re going backwards from Andromeda Strain, and I should have started with 2001, but part of his genius was that he cared about the tiniest details. He went at it until he personally understood all of it. You have the really interesting implications, and it’s not about product placement. It’s about verisimilitude. Bell Telephone is in business and now has this different form factor, because you’re making a call from the space station. I forget what the other recognizable western brands are, but there’s a hotel and others. It was about showing how the world will remain commercial, even when we have everyday space travel.

Then you bolt that on to all the NASA-connected research he did. You have Marvin Minsky designing these multi-end effector claw arms for the little pod they fly around in. Everything is considered. It’s not just icing. It’s the cake, in a sense. It’s how you end up in that world, caring about it.

VB: I wonder whether these walls are coming down faster, or whether the connections are getting stronger. The thing I point to—I had a conversation with the CEO of SoftBank, and I asked him, “What are you going to do with the $100 billion you’re raising from the Saudis?” He says, “We’re investing for the singularity. This is something we know is going to happen in the next 30 years and we’re going to be ready for it.”

Underkoffler: So he’s predicting singularity in 30 years? I’ll bet five dollars that’s not true. Tell him that. I’ll see him in 30 years.

VB: I asked about what Elon Musk has been saying, and everything science fiction has predicted about the dangers of AI. He says, “Fire was not necessarily good for humans either. But it does a lot of good for us. Someone’s going to do this.”

Underkoffler: If science fiction becomes part of the standard discourse, if everyone expects to see a lot of that on TV, that’s good, because it leaves room for—let’s call it social science fiction. Pieces that aren’t just about technology, but about the social, political, and philosophical consequences of technology. That’s why Philip K. Dick was so fascinating as a writer. It’s why Westworld is interesting.

How about Black Mirror? Black Mirror is specifically about that. That’s way more interesting, way more exciting than just seeing a bunch of people flying around. For my money, that’s when stuff gets really good. That’s when humanity is actually talking to itself about decisions — what matters, and what happens if we do or don’t decide to pursue this or that technology. It’s probably a dangerous thing to say we’re just going to pursue technology. Did Prometheus think about what might happen when he gave us fire? Or was he just like, “I’m pissed at the gods, here you go”? That latter attitude feels to me like a lot of what’s happening. Let’s pursue technology because we’re technologists.

VB: That’s why I thought the story in the new Deus Ex game was interesting. Human augmentation is the theme, so they predicted that it would divide humanity into two groups. The augmented humans are cutting off their arms to have better arms and things like that. Then the natural humans are saying this isn’t right, that it’s going too far. Terrorism happens and one group blames the other, tries to marginalize and ghettoize the other. The notion is that division in society is an inevitable consequence.

Underkoffler: I think that’s smart, and that’s right. There’s always haves and have-nots. It’s important to go back to the earlier days of sci-fi, too. There’s an amazing short story by Ray Bradbury called “The Watchful Poker Chip of H. Matisse.” This guy who nobody’s ever paid attention to, because he’s really boring, loses an eye in an accident. He’s able somehow to commission Henri Matisse to paint a poker chip for him and he puts it in his eye socket. Suddenly people are really interested in him, and the rest of the story is about him having these willful accidents. He loses a leg and builds this gold bird cage where his thigh would be. He becomes an increasingly artificial assemblage walking around like a curiosity shop. But there’s a social currency to these alterations.

Are you seeing anything really great coming to light in the game world these days around collaborative gameplay? Not multiplayer, but collaborative.

Above: Mezzanine is being used by researchers to help cure cancer.

Image Credit: Oblong

VB: There was one attempt by the people who made Journey, a very popular game. It was only about four hours long, but they had a multiplayer mode, where you could go on your adventure with somebody else. That other person was a complete stranger. You couldn’t talk to each other. But there’s a lot of puzzle-solving in the game, and you could work together on that and progress. It’s a very different kind of cooperation than you’d expect.

Underkoffler: I have a great friend who was a large-scale team lead on PlanetSide. I used to study them playing, because they’re all separated by distance, but wearing headsets, so there’s communication through one channel that allows them to operate as a conjoined unit. That was interesting.

VB: There are almost always cooperative modes in games these days, where you’re shooting things together or whatever. But collaboration—it makes me think of some of those alternate-reality games, like the I Love Bees campaign for Halo 2. I don’t know if you ever read about that. They had hidden clues scattered around the web and the real world to market the game. They made 50,000 pay phones ring at the same time, and people had to go to as many of them as they could and answer them. They recorded what they heard, and it all patched together into this six-hour broadcast of the invasion of Earth, like War of the Worlds.

Underkoffler: The crossover with the real world is really fun there.

Above: Ilovebees was an ARG for Halo 2.

Image Credit: 42 Entertainment

VB: Alternate reality games became a popular thing later on, although not on as large a scale. They’re very hard to do. Only a couple of companies were doing them. A guy named Jordan Weisman ran one of them. And Elan Lee, but he’s off making board games now. Getting the masses to collaborate, crowdsourcing, is pretty interesting.

I wonder what you get when you put these people together. You take the AI experts and the science advisors and the video game storytellers and moviemakers all together. Something good has to happen.

Underkoffler: I think so. It’s always worth studying, studying games in particular. Before Oblong got started, some of the best work in next-generation UI was happening in the gaming world. People didn’t want to pay attention. SIGGRAPH or the ACM people didn’t want to hear about games, because you need this academic thing and all the rest of it. But the fact is, before anyone else was thinking about it, game designers were figuring out how to do incredibly complex things with a simple controller. A reward system was in place to make it worth learning how to pilot a craft around with six degrees of freedom using a game pad. It bears studying. As you say, once you start colliding these different disciplines, interesting stuff is going to come out.

Above: In ilovebees, 42 Entertainment made 50K pay phones ring at once. Players recorded the calls and put together an hours-long broadcast on the Covenant invasion.

Image Credit: 42 Entertainment

VB: VR seems like an interesting frontier right now. People are inventing new ways to control your hands in it and so on.

Underkoffler: It’s pretty primitive. A lot of the foundational technology isn’t even there yet. How do you really want to move around that world? Mostly people have been building the output piece, the headsets. There’s been less work on the UI. But that’s what we’re interested in.

VB: Games are teleporting you from place to place because you get sick if you try to walk there. What would you say is the road map going forward, then? What’s going to happen?

Underkoffler: We’re going to make the kind of computing you’re looking at now – architectural, spatial, collaborative computing – more and more the norm. It’ll be a layer on top of the computing that you expect and understand today on your laptops and tablets and smartphones. As you suggested earlier in the hour, that’ll start permeating through various layers – small and medium business, all kinds of organizations at different levels.

And at the end we get to actual ubiquity. When you sit down in front of your laptop, it’s not just you and your laptop. It’s also the opportunity to communicate and collaborate with anyone else in the universe. We’re going to give the world a multitude of collaboration machines.