Why Qualcomm believes that the future of VR and AR is mobile

Tim Leland, vice president of product management at Qualcomm.

Image Credit: Dean Takahashi

Qualcomm Technologies is bullish on virtual reality and augmented reality, but it isn’t satisfied with the technology yet.

The company is crafting its latest mobile processors and other technology so that VR and AR can become untethered, show imagery with higher resolution, use better displays, and be lightweight and energy efficient enough so that we’ll be able to wear headsets for a long time. Over time, Qualcomm wants to bundle all of the tech necessary for VR in smaller and more comfortable packages.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2127417,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"arvr,business,games,","session":"D"}']

One of the people responsible for that vision is Tim Leland, vice president of product management at Qualcomm in San Diego, California. I caught up with Leland at the VRX conference in San Francisco last week, where he gave a talk on Qualcomm’s view of virtual reality and the progress the industry is making toward high-quality, untethered VR and AR.

Here’s an edited transcript of our interview.

Above: Tim Leland of Qualcomm speaks at VRX.

Image Credit: Dean Takahashi

VentureBeat: What’s the summary of everything you’re speaking about?

Tim Leland: We believe at Qualcomm that VR is happening. It’s going to grow in 2017 and evolve quite a bit after that. VR and AR will eventually merge, with AR subsuming VR as new technologies are developed — on the display side in particular. The display will be able to go fully transparent, or opaque if you want to go into a VR mode. It’ll be the same device. I don’t think we worry too much about the continuum of terms within AR, mixed, or merged reality. That’s just where everything is headed. You’ll have virtual objects that you can place in this augmented world and interact with. They’ll be correctly anchored.

We think the future of AR is mobile because that’s what consumers want. Consumers don’t want to be tethered to anything. Going forward, there are physical limits on application performance in terms of latency, both motion-to-photon and photon-to-photon. You hit those limits if the rendering system is decoupled from the inertial or positional tracking system. You want those as tightly coupled as you can get them: at least on the same printed circuit board (PCB), but ideally in the same system-on-a-chip (SoC).

In the future products we’re going to be making, we’ll significantly improve 3D graphics and multimedia performance at better power efficiency, so you don’t get into a situation where you have to pick between visual quality with a cable and lesser visual quality without one.
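
To illustrate the coupling Leland describes, here is a minimal sketch in Python of "late latching," where the render loop samples the inertial sensor as late as possible so each frame uses the freshest pose. The IMU class, the timings, and the 90 Hz budget are illustrative stand-ins, not Qualcomm's implementation.

```python
# A minimal late-latching sketch: sample the IMU immediately before
# submitting the frame, so the pose is as fresh as the hardware allows.
import time

class IMU:
    """Stand-in for an inertial sensor living on the same SoC as the GPU."""
    def read_pose(self):
        # A real system would return fused orientation/position data.
        return {"yaw": 0.0, "pitch": 0.0, "timestamp": time.monotonic()}

def render_frame(pose):
    pass  # Stand-in for the actual GPU submission using this pose.

imu = IMU()
frame_budget = 1.0 / 90.0  # 90 Hz target, roughly 11 ms per frame

for _ in range(3):
    start = time.monotonic()
    # ... scene update, culling, and other per-frame work happens here ...
    pose = imu.read_pose()  # sampled immediately before rendering
    render_frame(pose)
    age = time.monotonic() - pose["timestamp"]  # pose staleness this frame
    print(f"pose age at submission: {age * 1000:.3f} ms")
    # Tracking on a separate board adds bus-transfer latency to this age.
    time.sleep(max(0.0, frame_budget - (time.monotonic() - start)))
```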

VB: I’ve seen Facebook and Intel talk about their untethered headsets — stand-alone, without a PC. Do you see those happening on top of mobile VR?

Leland: In order to play in the VR head-mounted display (HMD) market the way we are — we have about 20 active projects for all-in-one, primary-purpose VR HMDs — you have to be able to do all the necessary processing at five watts or less. We don’t think some of these architectures that were built primarily for PCs can be shoehorned into that topology without heavy or exotic thermal solutions. We also don’t think you can get the four or five hours of battery life that some of these OEMs are going to want. We do have a lot of experience in this area, having learned through Daydream and other HMD development.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2127417,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"arvr,business,games,","session":"D"}']

Above: Oculus wireless standalone Santa Cruz prototype.

Image Credit: Oculus

VB: So the non-smartphone stand-alone headset.

Leland: There’s both, though. Some of the requirements are not that different between a stand-alone HMD and a smartphone-slotted VR device; a lot of them are the same. We’ve been working on things like six degrees of freedom, motion tracking, and inside-out tracking. For that to be useful, you don’t want to be tethered to anything. You don’t want to yank something off the wall or break your neck tripping over a cord.

We think that the future of VR is a convergence with AR and that it’s mobile. The definition of a smartphone will also change over time if you project this out for a number of years. All of this could be some kind of highly integrated mobile wearable device.

VB: [Qualcomm and Microsoft are announcing that your] Snapdragon chips will be able to run Windows 10. I guess at some point you guys don’t really care which platform you’re on. You’ll be across all of them. Does VR fit in that larger strategy?

[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2127417,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"arvr,business,games,","session":"D"}']

Leland: We work with Microsoft. We work with Facebook. We work with Google. We think there will be some VR cloud applications, so you’ll want to be able to stream data back and forth to the cloud. There will be a certain amount of processing you’ll want to have natively. You’ll want certain elements of machine learning carried out natively rather than through conversations with a cloud service.

We have 5G coming, which is going to be an awesome technology in terms of reducing latency between wireless clients and servers and increasing the uplink and downlink bandwidth for these devices, which are going to be producing a lot of data. We’re recording a lot of video data, for example, and sending it back somewhere, in addition to downloading streaming content at up to 8K resolution going forward. It’s a pretty good combination to have the connectivity piece integrated as well — multi-mode 5G connectivity, plus integrated Wi-Fi and WiGig technology like we have. It all sits very close to the sensor sampling and rendering subsystems for display management. Quite a bit of integration.

Above: Tim Leland points out Qualcomm’s vision for VR at VRX.

Image Credit: Dean Takahashi

VB: Did you hear [Epic Games founder] Tim Sweeney’s talk at all?

Leland: I did, yeah.

[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":2127417,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"arvr,business,games,","session":"D"}']

VB: I’m wondering how doable some of that tech will be. He mentioned Oakley glasses with 8K in each eye.

Leland: I think those resolutions will be there. Some of the brute-force methods that folks who’ve worked in the console and PC space are used to will need to change in order to meet the requirements of these thin and light devices, if you want them to run for hours at a time. The nice visuals where you spend an extra four or five watts on a fancy shader effect come at a cost. That’s probably not the right decision. We think some of the middleware, like the game engines, will have to evolve, so you don’t end up with an experience like, “Hey, look at these great visuals,” but you have a cable to a $2,000 PC.

If we can’t do better in this industry than making people carry around a PC in a backpack with an HDMI cable, that’s not what consumers want to buy. We need to focus on what consumers want, which is ultra-thin-and-light glasses that you can use for AR and VR, that give you interesting and useful data in AR mode, and that can optionally go into VR mode and provide a great experience that’s comfortable, lightweight, and never has to be tethered to anything.

VB: He mentioned that displays right now are mainly designed for smartphones and that if you purpose-build things for VR, you wind up with something different that gets you the resolution as well as low cost.

[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":2127417,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"arvr,business,games,","session":"D"}']

Leland: It’s not just resolution. It’s also going to be the color gamut. And it’s working on things like foveation, so you don’t have to brute-force the visual quality with more pixels. Instead, you focus the processing on the area you’re actually looking at, which is just a few degrees within that wide field of view. That’s what I’m talking about when it comes to not just using the traditional console and PC methods for processing visual data and providing photorealism. The industry has to be flexible to make this happen. We have to work together to make this happen.
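
Foveation, as Leland describes it, concentrates shading where the eye can resolve detail. Here is a toy sketch of the idea; the angular thresholds and rate fractions are invented for illustration, not figures from Qualcomm.

```python
# A toy foveated-rendering policy: shade at full rate only within a few
# degrees of the gaze point, and more coarsely in the periphery.
import math

def shading_scale(tile_center_deg, gaze_deg, fovea_deg=5.0, mid_deg=15.0):
    """Return the fraction of full shading resolution for a screen tile,
    given tile center and gaze point in degrees of visual angle."""
    dist = math.hypot(tile_center_deg[0] - gaze_deg[0],
                      tile_center_deg[1] - gaze_deg[1])
    if dist <= fovea_deg:
        return 1.0   # full rate inside the fovea
    if dist <= mid_deg:
        return 0.5   # half rate in the near periphery
    return 0.25      # quarter rate in the far periphery

# Example: gaze at the center of a wide field of view.
gaze = (0.0, 0.0)
for tile in [(0.0, 0.0), (10.0, 0.0), (40.0, 20.0)]:
    print(tile, shading_scale(tile, gaze))
```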

VB: Are you in sync with Sweeney on the Metaverse thing?

Leland: He saw some of our graphics demos. He was very surprised at what was possible visually on a mobile platform when we showed the Unreal Engine 4 demos on Snapdragon 820.

As far as the Metaverse goes? Yeah, I like the analogy he used. You used to have to know how to write HTML and do all sorts of fundamental coding to create a webpage and post visual information to the internet. Over time, that toolset has become more general, so that everybody can create their own website. His feelings are very similar to mine when it comes to the convergence toward a single device. It’s hard to say what the website of the virtual world ends up being, but I agree with quite a few of the concepts he was talking about.

[aditude-amp id="medium5" targeting='{"env":"staging","page_type":"article","post_id":2127417,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"arvr,business,games,","session":"D"}']

Talking about what we have to announce soon with the Snapdragon 835, we made a number of improvements. Some of those are in the camera subsystem. Some are in the completeness of the solution we provide to our customers for camera functionality. As an example, in the camera subsystem we’re now supporting zig-zag HDR, a spatial means of multiplexing different exposure levels and exposure times, so you can get this HDR effect from the sensor and process it internally inside of Snapdragon. It does a better job of mapping the darkest darks to the brightest brights. You get better resolution in the shadows. If you look up at this window here, you won’t just see the white of that window washing out all the colors around it.
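
As a rough illustration of what spatially multiplexed HDR means, here is a simplified sketch: alternating pixels are captured at long and short exposures and fused into one frame with extended dynamic range. The interleave pattern, the 8x exposure ratio, and the 10-bit clip level are assumptions for illustration; the real sensor layout and ISP pipeline differ.

```python
# A crude fuse of a spatially exposure-multiplexed raw frame.
import numpy as np

EXPOSURE_RATIO = 8.0   # long exposure = 8x short exposure (assumed)
CLIP = 1023            # 10-bit sensor full scale (assumed)

def fuse_zigzag(raw):
    """raw: 2D array where pixels with (row + col) even are long-exposure
    and pixels with (row + col) odd are short-exposure."""
    h, w = raw.shape
    ys, xs = np.mgrid[0:h, 0:w]
    long_mask = (xs + ys) % 2 == 0
    hdr = raw.astype(np.float64)
    hdr[~long_mask] *= EXPOSURE_RATIO   # bring shorts to a common scale
    # Where a long-exposure pixel clipped, borrow the scaled short-exposure
    # neighbor to its left (very crude) so highlights keep detail.
    neighbor = np.roll(hdr, 1, axis=1)
    clipped = long_mask & (raw >= CLIP)
    hdr[clipped] = neighbor[clipped]
    return hdr

# A bright window next to a dark wall: long-exposure pixels clip at 1023.
frame = np.array([[1023, 120, 1023, 118],
                  [ 115, 1023, 119, 1023]], dtype=np.int32)
print(fuse_zigzag(frame))
```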

We also made some improvements related to high dynamic range for supporting 10-bit displays. We can not only decode 4K HDR10 video at 60 frames per second, but also output it to a 10-bit display, keeping high dynamic range all the way through the pipeline without having to drop down to standard dynamic range.

We improved the graphics performance, and we’ve been working on speeding up our Vulkan performance. We’ll show some great VR demos on Snapdragon 835, using both high-end 3D graphics and 4K HDR video, on our new Snapdragon reference design. We haven’t said exactly what that is, just “the new chip.”

We’ve made some improvements to the video encoding subsystems, so we’re doing things like perceptual quantization. Instead of just having a uniform quantization for every possible macroblock in an encoded scene, we’re using the fact that if you have a flat wall behind you, you might be much more sensitive to encoding artifacts than if you had something textured behind you, like a bush or a tree, or something that’s moving very fast. Instead of taking a uniform approach to quantization, our new video encoder in the Adreno 540 subsystem of Snapdragon 835 is doing perceptual quantization.
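
A toy version of that idea: estimate how well each macroblock masks artifacts (pixel variance is a crude proxy) and pick a quantization parameter accordingly, finer for flat regions where the eye is sensitive and coarser for busy ones where artifacts hide. The QP offsets and variance thresholds below are invented for illustration.

```python
# Per-macroblock quantization driven by a simple texture-masking proxy.
import numpy as np

BASE_QP = 30

def per_block_qp(frame, block=16):
    """Return a QP map: lower QP (finer quantization) for flat blocks."""
    h, w = frame.shape
    qp_map = np.empty((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            mb = frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            var = mb.var()
            if var < 25:       # nearly flat: artifacts are visible here
                qp_map[by, bx] = BASE_QP - 4
            elif var > 400:    # heavy texture: artifacts are masked
                qp_map[by, bx] = BASE_QP + 4
            else:
                qp_map[by, bx] = BASE_QP
    return qp_map

rng = np.random.default_rng(0)
frame = np.hstack([np.full((16, 16), 128.0),        # flat wall
                   rng.uniform(0, 255, (16, 16))])  # busy bush
print(per_block_qp(frame))  # e.g. [[26 34]]
```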

On the module side, we’ve been working with module manufacturers to pre-tune and pre-optimize three different modules. One of them is a 16-megapixel PDAF sensor with excellent optics and very high image quality. Another is a dual-camera solution pairing a mono sensor with a Bayer sensor for maximum low-light photography. The third is an optical-zoom dual-camera solution that has a wide field of view on one sensor and a telephoto lens on a different sensor for maximum zoom. We’ve also enhanced some of the zoom frameworks, so we can go seamlessly back and forth between optical zoom and digital zoom.
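
One simple way to picture a seamless zoom framework: below the telephoto camera's native magnification, the wide camera digitally crops; beyond it, the framework switches sensors and applies only the residual digital crop. The 2x telephoto factor here is a hypothetical for illustration, not a Snapdragon specification.

```python
# Picking a camera and digital-crop factor for a requested zoom level.
TELE_OPTICAL_ZOOM = 2.0  # assumed native magnification of the tele camera

def pick_camera(requested_zoom):
    """Return (camera, digital_crop_factor) for a requested zoom level."""
    if requested_zoom < TELE_OPTICAL_ZOOM:
        return "wide", requested_zoom  # digital crop on the wide sensor
    # Past the handoff point, only the residual crop is digital.
    return "tele", requested_zoom / TELE_OPTICAL_ZOOM

for z in (1.0, 1.5, 2.0, 3.0):
    cam, crop = pick_camera(z)
    print(f"{z}x -> {cam} camera, {crop:.2f}x digital crop")
```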

We’ve made some improvements to electronic image stabilization, which is particularly useful when you’re doing zoom. We have new trajectory algorithms that can more accurately sample some of the sensor data to predict movements and dampen vibrations.
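
The trajectory idea can be sketched very simply: smooth the measured camera path, then warp each frame by the gap between the real path and the smoothed one. Zoom magnifies angular jitter, which is why stabilization matters more there. The exponential filter below is a generic stand-in, not Qualcomm's algorithm.

```python
# Trajectory smoothing for electronic image stabilization (toy version).
def stabilize(angles, alpha=0.9):
    """angles: per-frame camera yaw samples (degrees) from the gyro.
    Returns the per-frame correction to apply when warping each frame."""
    corrections = []
    estimate = angles[0]
    for a in angles:
        estimate = alpha * estimate + (1 - alpha) * a  # smoothed path
        corrections.append(estimate - a)  # shift frame toward smooth path
    return corrections

# Hand shake on top of a slow pan: corrections cancel the jitter.
shaky = [0.0, 0.4, -0.2, 0.6, 0.1, 0.8]
print([round(c, 2) for c in stabilize(shaky)])
```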

VB: Do you think these things are going to make someone like [Oculus tech expert] John Carmack happier? He’s been very picky so far.

Leland: We talk to Carmack all the time. Generally, he’s said some pretty favorable things about Snapdragon on his ubiquitous Twitter feed. We’re interested in getting his feedback on some of the development tools as they start their prototyping, to make sure they can get the best performance on Snapdragon.
