Put your finger on it: The future of interactive technology

[This is the first in a series of posts about cutting-edge areas of innovation. The series is sponsored by Microsoft. Microsoft authors will participate, as will other outside experts.]

Imagine I walk into a bar and see three women. A camera on my chest captures their faces and sends the information to a phone in my pocket. That phone can recognize who they are instantly by using software to check the web for their profiles on social networking sites. A tiny projector underneath my camera then beams information about them onto, say, my arm, where I can read it. The info tells me whether they’re single or married, whether they own a home, and if so, how much that home is worth.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":119438,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,mobile,social,","session":"C"}']

I can then take my hand out of my pocket, gesture with a flick to signal a scroll, and sift through further pages of information about the women. Eventually, instead of reading all this on my arm, I could do it via a chip implanted in my head.

We’ve long dreamt of wielding ultimate power, and there’s no power quite like having information this readily available: all of it literally at our fingertips, extractable from the radio waves in the air.

To be sure, while the above scenario is already close to possible (something like it was demonstrated in February by MIT researchers at the TED conference in Long Beach, Calif.), it is still a long way from mainstream adoption. Carrying around cameras, projectors and phones that have to work together is still complicated, but it’s the sort of thing that is coming, even if it takes years.

Combine this with “augmented reality” technology, which can instantly tell you other things about the bar (for example, which cocktails on the menu you could buy for the women), and things get much more interesting.

In the immediate future, however, there’s plenty of territory for companies and researchers to conquer with more basic interactive technologies based on finger touch and speech recognition.

Called “natural human interface” technology, it translates our gestures (touching a physical screen or swiping our fingers across it, for example) into commands that let us be more productive than ever before. Touch technology is where we’ll see the clearest opportunities over the next two years, because even a small screen now has many channels for translating different gestures. Other technologies, which use things like infrared, motion-detecting sensors and gyroscopes to track the movement of your hands or fingers, will let you interact with screens without any touch at all. For now, such non-touch technologies are used mainly for games (think of the Nintendo Wii, for example, or Microsoft’s Natal, which let you move a hand controller or simply your body to interact with a screen), but they are much less interesting because they are still relatively imprecise. Most of the coolest action right now is in touch.
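
To make that gesture-to-command translation concrete, here’s a rough sketch in Python of how a driver might turn one finger’s trace of position-and-time samples into a tap or a swipe. The event type, thresholds and command names are illustrative assumptions on my part, not any vendor’s actual API.

```python
from dataclasses import dataclass

# Hypothetical sample from a touch controller; real drivers expose richer data.
@dataclass
class TouchEvent:
    x: float   # horizontal position, in pixels
    y: float   # vertical position, in pixels
    t: float   # timestamp, in seconds

SWIPE_MIN_DISTANCE = 50.0   # pixels a finger must travel to count as a swipe
TAP_MAX_DURATION = 0.25     # seconds; anything longer is a press-and-hold

def classify_gesture(trace: list[TouchEvent]) -> str:
    """Map one finger's trace (finger-down to finger-up) to a high-level command."""
    if not trace:
        return "none"
    start, end = trace[0], trace[-1]
    dx, dy = end.x - start.x, end.y - start.y
    distance = (dx ** 2 + dy ** 2) ** 0.5
    duration = end.t - start.t

    if distance >= SWIPE_MIN_DISTANCE:
        # The dominant axis decides whether this reads as a horizontal or vertical scroll.
        if abs(dx) > abs(dy):
            return "swipe_right" if dx > 0 else "swipe_left"
        return "swipe_down" if dy > 0 else "swipe_up"
    return "tap" if duration <= TAP_MAX_DURATION else "press_and_hold"
```

A real controller does far more than this (filtering noise, tracking many fingers at once, spotting double taps), but the basic shape is the same: raw sensor samples in, high-level commands out.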

That’s where recent developments by Apple (which produced the massively popular iPhone, based on touch technology) and Microsoft (which produced Surface, a way to interact with graphics on a tabletop screen) are leading the way. Apple’s iPhone has popularized the use of single- and double-finger taps, swipes and other touch commands to do things like zoom in and out. Synaptics, a Santa Clara, Calif., company supplying such touch technology to Apple and to other touch phones, such as those running Android, is generating ever more complex gesture features. Last month, Synaptics released a new screen technology boasting 48 sensing channels on an 8-inch screen that can accept up to 10-finger touch commands at a time, useful for things like multi-player games. Apple’s advantage is that it owns its devices outright and integrates such technology seamlessly into its software and hardware, letting the touch features function smoothly within its graphical user interface.
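
As a toy illustration of what those extra sensing channels buy you, here’s a hedged Python sketch of a pinch-to-zoom calculation over tracked touch points. The finger-id-to-position dictionaries are a stand-in for whatever data structure a real multi-touch controller reports; none of this is Synaptics’ actual interface.

```python
import math

def pinch_zoom_factor(frame_before: dict[int, tuple[float, float]],
                      frame_after: dict[int, tuple[float, float]]) -> float:
    """Return a zoom factor from two consecutive frames of tracked touches.

    Each dict maps a finger id (a 10-finger controller can track ten of them)
    to an (x, y) position. Only fingers present in both frames are compared.
    """
    shared = [fid for fid in frame_before if fid in frame_after]
    if len(shared) < 2:
        return 1.0  # a pinch needs at least two fingers on the screen

    a, b = shared[:2]
    before = math.dist(frame_before[a], frame_before[b])
    after = math.dist(frame_after[a], frame_after[b])
    return after / before if before > 0 else 1.0
```

A factor above 1.0 means the fingers spread apart (zoom in); below 1.0, they pinched together (zoom out).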

Other companies, such as Microsoft, are pushing things forward even when they don’t own the entire device and thus lack such native integration. In October, for example, Microsoft says it will announce new features for its Media Room project that let you interact with a TV using a touch-screen remote control. The remote will let you use standard swiping and tapping gestures, but will also support speech commands. Microsoft is working with remote control makers such as Philips and Ruwido, and the remote will also be integrated with the offerings of content providers and set-top box makers.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":119438,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,mobile,social,","session":"C"}']

So with all of this new touch-based technology coming out, we have plenty to do before we need to graduate to the chip in the head. What we’ve seen so far is just “the tip of the iceberg,” says Andrew Hsu, product marketing director and strategist at Synaptics. Only recently have device makers experimented with making touch screens bigger, so that multiple people can control them at the same time.

Synaptics is also developing ways to move beyond touch, including knowing when a finger is merely close to the screen, through its existing capacitive technology. (Other technologies, such as HP’s Touchsmart, do something like this using infrared sensors. Correction: An earlier version of this piece suggested that Synaptics is working on using infrared; it is not.) These efforts would let you control a device simply by waving your finger or hand above the surface. Synaptics is also developing technology to take things in the other direction: detecting when you’re forcing your finger down on the screen beyond just a simple touch.

Technology is also being built to detect how you’re holding the phone. This will improve on existing accelerometer technology to make it more sensitive (the iPhone’s switch to and from “landscape mode” works, but not that well), and it will let you issue additional commands by gripping the device in certain ways. Finally, for safety reasons (to avoid car accidents), Synaptics and others are looking for ways to let users control a device more easily with a single hand, or with feedback such as haptic technology (which has the screen push back slightly against your finger when you’ve pressed it in the right area) or speech recognition.
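
To illustrate the “more sensitive accelerometer” idea, here’s a small Python sketch that decides portrait versus landscape from gravity readings and adds hysteresis so the display doesn’t flicker when the phone hovers near the switchover angle. The axis names and threshold values are assumptions for illustration, not how Apple or Synaptics actually implement it.

```python
import math

# Illustrative thresholds: the phone must tilt well past the switchover point
# before the orientation flips, which avoids jitter near 45 degrees.
ENTER_LANDSCAPE_DEG = 60.0
EXIT_LANDSCAPE_DEG = 30.0

def update_orientation(ax: float, ay: float, current: str) -> str:
    """Return "portrait" or "landscape" given gravity components ax, ay.

    ax is the gravity component along the screen's horizontal axis, ay along
    its vertical axis (any consistent unit). The current mode is kept unless
    the tilt clearly crosses a threshold, which is what the hysteresis buys.
    """
    tilt = math.degrees(math.atan2(abs(ax), abs(ay)))  # 0 = upright, 90 = on its side
    if current == "portrait" and tilt > ENTER_LANDSCAPE_DEG:
        return "landscape"
    if current == "landscape" and tilt < EXIT_LANDSCAPE_DEG:
        return "portrait"
    return current
```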

Waving my hand in the air to solicit immediate information about three women at the bar may be interesting, but it’s a long way off. Right now, look forward to all the cool stuff that’s about to happen on phones and other interactive displays.
