It’s no fun when you fire up a heavy-duty game on your tablet and it starts to get warm. Imagination Technologies, the chip design company that owns the MIPS processor and PowerVR graphics technologies, wants to create a future where tablets are both capable and power efficient.

We should all hope that it succeeds, because we’re going to want our tablets to handle increasingly difficult workloads, like figuring out hazards on the road or taking data from sensors and making it meaningful.

We talked with Peter McGuinness, director of technology marketing at Imagination, at the recent 2015 International CES. He didn’t hold back his opinions on the right way and the wrong way to approach this critical computing problem.

Here’s an edited transcript of our talk.

Above: Lane detection demo

Image Credit: Imagination Technologies

VentureBeat: What do you have on your tablet?

Peter McGuinness: This is a piece of software written by a partner, a company called Luxoft. What it’s showing is a proximity warning and a lane departure warning. When you drift out of the lane or change lanes, it gives you this red warning. It gives you the distance to the vehicle ahead and things like that.

The framerate is the nice thing. We have a framerate counter here and a central processing unit (CPU) utilization counter here. Two things are going on. First of all, if you like, it’s a heterogeneous app. It’s running partially on the CPU, but the heavy lifting on the image processing is being done by the graphics processing unit (GPU). The underlying algorithms have been ported to OpenCL. It’s on Android, with an Intel Atom. It has a four-cluster Rogue 6-series GPU in it.
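To make that CPU/GPU split concrete, here is a minimal sketch of the kind of OpenCL host code and kernel an imaging app like this might use to push per-pixel work onto the GPU while the CPU handles the rest of the pipeline. It is illustrative only, assuming a simple luminance filter; the kernel, the frame size, and the buffer handling are placeholders rather than Luxoft’s actual code.

```c
/* Minimal OpenCL host program: run a trivial luminance kernel on the GPU.
 * A real lane-detection pipeline would chain edge detection, line fitting,
 * and so on, but the host-side pattern of "build kernel, hand it a frame,
 * enqueue" is the same. Error checking is omitted for brevity. */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

static const char *kSrc =
    "__kernel void to_gray(__global const uchar4 *rgba,\n"
    "                      __global uchar *gray) {\n"
    "    size_t i = get_global_id(0);\n"
    "    uchar4 p = rgba[i];\n"
    "    gray[i] = (uchar)((p.x * 77 + p.y * 151 + p.z * 28) >> 8);\n"
    "}\n";

int main(void) {
    enum { W = 640, H = 480, N = W * H };

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "to_gray", NULL);

    /* In the real app this buffer would come from the camera pipeline;
     * here we just allocate a dummy RGBA frame on the host. */
    cl_uchar4 *frame = calloc(N, sizeof(cl_uchar4));
    cl_mem in = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               N * sizeof(cl_uchar4), frame, NULL);
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, N, NULL, NULL);

    clSetKernelArg(k, 0, sizeof(in), &in);
    clSetKernelArg(k, 1, sizeof(out), &out);

    size_t global = N;                       /* one work-item per pixel */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);

    printf("one frame converted to grayscale on the GPU\n");
    free(frame);
    return 0;
}
```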

VB: And this tells us what?

McGuinness: On Android, if you’re doing imaging of this type, you’re passing video buffers between the various hardware components. You import a frame from the camera or decode a bitstream, and that creates a buffer with the image data in it. Then you have to take that buffer and copy every frame into an area of memory owned by the GPU, in this case. It could be a video encoder, for videoconferencing, or a display controller if you’re just going to put it on the screen. But the point is that with every movement between the different hardware blocks in the SoC [system on chip], Android tries to make a copy of the data. That turns out to dominate performance in apps like this.

What the imaging framework does is extend some of the buffer management APIs in EGL and add some utilities. It doesn’t completely eliminate buffer copies, but it minimizes them. Instead of having six or seven buffer copies, which is easily possible if you just use stock Android, it’ll go down to a single copy, which makes something like this possible on an embedded system on chip. You can get the framerate and you don’t overheat the device.
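For reference, the standard zero-copy route on Android goes through EGLImage: the existing camera or decoder buffer is wrapped and bound directly to the GPU instead of being copied each frame. The sketch below uses only the public Khronos extensions (EGL_ANDROID_image_native_buffer and GL_OES_EGL_image_external); Imagination’s imaging framework builds on the same mechanism, but its own API and utilities aren’t shown here, so the function name and parameters are illustrative assumptions.

```c
/* Zero-copy import of an Android native buffer into the GPU, using the
 * public Khronos extensions. The buffer (e.g. a camera/gralloc allocation)
 * is wrapped in an EGLImage and bound as a texture; no per-frame memcpy. */
#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* 'buffer' is an ANativeWindowBuffer* obtained from the camera/media stack,
 * cast to EGLClientBuffer by the caller. */
GLuint import_frame_zero_copy(EGLDisplay dpy, EGLClientBuffer buffer)
{
    /* Wrap the existing allocation. This creates a handle, not a copy. */
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                          EGL_NATIVE_BUFFER_ANDROID,
                                          buffer, NULL);

    /* Bind the same memory as an external texture so GPU shaders (or,
     * via the cl_khr_egl_image extension, OpenCL kernels) read it in place. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);
    return tex;
}
```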

You’ve seen the Nvidia Tegras are big in autos, right? One of the reasons for that is that in a car, you can put a fan on top of the device. That’s the limitation in most cases: the power dissipation. In a form factor like this, without OpenCL and without the buffer management we call Zero Copy technology, this would go into thermal shutdown after just a few seconds. Instead, this will run indefinitely.

VB: Is that the Dell Venue 8, the brand-new one?

McGuinness: Exactly, that’s right. This is just one example. It’s a device that’s already launched. We have another tablet from another manufacturer, with the same chipset, that we can’t show publicly. What they did was use our Zero Copy technology and the imaging framework to port a lot of the computational photography and image processing tasks onto the GPU. Of course they want video filters and Instagram-style filters and things like that.

Above: Imagination Technologies’ board

Image Credit: Imagination

VB: Is this going to get hot as well?

McGuinness: Feel it. It’s barely using the CPU.

VB: I use an Nvidia Shield, which can run very hot.

McGuinness: Yeah. But not to get into Nvidia bashing. Or why not? I mean, Tegra is not really a mobile device. It runs too hot. It’s just not suitable.

There are a couple of messages attached to using GPU compute. The first one is that for highly parallel tasks like this, it’s a power optimization method. You can get much better performance at much lower power. We’ve seen instances where we’ve multiplied the performance by a factor of six and divided the power by a factor of 10. That’s a factor of 60 in performance per watt altogether, just by moving onto the GPU. When you’re streaming video, the architecture of the GPU is so much more appropriate for that sort of task.

This imaging framework was the thing we found necessary to make that feasible on Android, because of the way Android tries to manage buffers. That’s why we put it out there. The way we’re deploying it, we’re working with OEMs who want to differentiate their products. On the Dell device it’s just layered on top of the stock software. As for this other tablet manufacturer, I don’t know when they’re going to launch the thing. What they’ve done is taken our imaging framework, taken the standard camera app for Android, and extended it to differentiate their tablet from everyone else’s. They’ve added camera features like anti-shake and face detection by using that framework, and also added other fun effects like face beautification, or this app that will enlarge your eyes. Apparently in China they think that’s a wonderful thing to do.

This is where we see the technology going. Everyone’s talking about GPU compute, heterogeneous compute. HSA [heterogeneous system architecture] is coming along, all that sort of thing. This is where we see it going. It’s going to be in visual imaging applications. It’s usable in Project Tango, in Glass, in lots of other image-based appliances and other things out there.

We have a really nice demonstration at our booth based on a camera sensor. It’s mounted in the ceiling, mapping an area about the size of this room. When someone walks into it, it maps them, tracks them, and detects where they go. It’s a point-of-sale monitoring thing. They can see when people walk to the register, what they look at, whether they pick something up and buy it or just walk away without buying anything. The intelligence for that is shared between the camera and the point-of-sale terminal, but in any case it has to run on a consumer-level IoT device rather than a big PC. That’s where we see a lot of this technology going.

VB: Do you think that for things like augmented reality glasses, there’s some sort of preferred platform yet? What kind of graphics do you need to make that acceptable and cheap? Some of these glasses come from the military, so they’re really high-end, around $5,000.

McGuinness: And they don’t work very well. Oculus Rift is much cheaper and much better. The first-generation Glass was kind of disappointing. The screen is tiny. It’s limited in functionality and it really has no graphics, apart from just putting up messages in front of your eye. It’s definitely not augmented reality. It doesn’t overlay things on your view. You need a pair of glasses with a screen that covers your entire field of vision.