Above: Oculus Rift Crystal Cove prototype.

Image Credit: Dean Takahashi

GamesBeat: Nate, having Unreal Engine 4 on mobile is probably a good thing for what you guys have as your vision. But what are some technologies you need to see companies develop above and beyond what Oculus does, in order for you to succeed? What else has to happen?

Mitchell: VR input is probably one of the toughest challenges out there. If you look at the industry, especially Kickstarter, over the last year, there have been a ton of virtual reality startups, and a huge number of them revolve around input: the Sixense STEM, Tactical Haptics, and a lot of other technology in the VR space. I don't know how many people in the room have tried the Oculus Rift, but a mouse and a keyboard are not the ideal inputs for VR. That's something we're exploring quite a bit at Oculus, and a lot of people are trying to tackle it.

We’re going to see a lot of exciting developments happen over the course of the next year or two, as we have more breakthroughs in that space. Display technology needs to come along a bit further. There are only so many companies making high-end panels, and those panels aren’t truly designed for virtual reality. If we can get more display manufacturers excited about virtual reality and the possibilities for high-end, high-fidelity VR, there’s a lot more we can do to make VR immersive and more fun.

GamesBeat: With VR, one question is, do you really need 360 degrees? A lot of the TVs being shown here are curved. They’re 4K, so you can get closer to them without seeing pixels now. If you have a very curved screen in front of your face, that seems pretty immersive.

Mitchell: Yeah. But it’s a different experience from what we’re trying to go for. We want to deliver on the dream of VR, which is about mimicking reality as much as we possibly can, creating a synthetic reality where you feel a sense of presence, the way we all feel in this room. I can turn and say, “Hey, how are you,” and I feel like I’m talking to another person. To do that, you need super low-latency tracking. You need higher resolution. You need 360 degrees. You need VR input. We’ll get there. We won’t have all the pieces in place today, but for the consumer V1 of the Oculus Rift, we think we’ll deliver something compelling. It’ll open up more possibilities for developers.

Above: Avegant’s Glyph virtual display.

Image Credit: Dean Takahashi

GamesBeat: Avegant has its Glyph system, which shines light directly on your eyeballs, so you’re looking at a couple of lenses. The eye-tracking technology from Tobii is also interesting, where you use your eyes to control a first-person shooter game.

Mitchell: One of the neat things about this CES especially, and about the past year with the new consoles and the explosion in mobile, is that, as we like to say, it's a great time to be a gamer. The possibilities in gaming now feel endless. We have things like the cloud opening up new opportunities. The new consoles are bringing entirely new experiences. VR is opening new doors. That's an entirely new canvas. There's so much that can be done on mobile. It's not even about fidelity; it's about empowering developers and opening up new opportunities for them. As a gamer, there's a lot to be excited about in 2014.

GamesBeat: Switching to cloud a little, it’s interesting to see a bit of a difference between Sony and Microsoft on working with the cloud. We have cloud game distribution. We also have game streaming. Then we have cloud processing, which Microsoft has talked about a lot, where you’re actually using servers in the cloud to do processing for a game, like the artificial intelligence in Forza Motorsport 5 for the Xbox One. I wonder if you could talk about some of these differences as far as vision for the cloud, and what this means about the technology required for it.

Stevenson: To follow on Nate's comment, a rising tide lifts all boats. Whether it's offloading the full game, streaming it, or just offloading part of the game's processing, all of this stuff allows for great game experiences. From our point of view, on the cloud streaming side, we see a lot of capability on the back end, a lot of things we can do.

There's a lot of possibility in what we've discussed so far, streaming PlayStation games out to PlayStation devices, but the PlayStation 4 is a fixed piece of hardware. The back end is a data center. You can scale those servers and upgrade them. You could potentially have multiple servers, and the developer could potentially select a more powerful experience. They could say, "Hey, I want 10 CPUs and 12 GPUs on this game." There's a lot of capability down the road, both in a cloud processing context and in a cloud streaming context.

In terms of limitations, we're always battling against bandwidth and latency, but we see those moving in a positive direction. You've seen the announcements about 4K video streaming; yesterday Samsung and LG announced 4K streaming solutions for movies. All of these things consume bandwidth. As we move forward, the way I say it in the office is that there are little trucks from Comcast and Cox and Charter and Google Fiber running around the country right now, installing better and better connections to end users to solve these things.

Penello: Whenever we try and talk about something that hasn’t been executed yet, customers try and put it into certain pockets. They try to think about it like a cloud streaming service or cloud processing. The reality is that almost every device you own today is connected. We’ve been running a cloud service in Xbox Live for 12 years, 13 years now.

We talk a lot about the cloud in terms of cloud processing. We're experimenting with that. We've talked about cloud streaming. What we have to do is observe how bandwidth and latency differ, and how different developers are able to implement it. What types of devices are customers on? It's going to be a continuum of experiences for gamers, not different segments.

Creators will come up with interesting uses based on the device they’re on, the bandwidth available to them. But strategically, we look at it in a similar way. It’s a canvas that we’re going to paint a wide variety of experiences on. We’re executing that in slightly different ways, but we’re all heading toward the same goal.