Think back on how rapidly our computing habits have changed over the past decade and try not to get whiplash.
We’ve gone from desktops to laptops to touchscreen smartphones and tablets. Now with wearable computing on the horizon, gadgets that we’ll literally be attaching to our bodies, it’s hard to ignore one fascinating trend: We just keep getting closer to our technology.
The next logical step? Gadgets inside our bodies.
With all the talk of user experience these days (heck, we dedicated a whole conference to exploring the mobile experience), it’s important to step back and consider what these evolving computing habits actually mean. By the time we start regularly implanting devices in our bodies, we may have to rethink our definition of humanity altogether.
Symbiotic computing
When computers were sitting on our desks, they were mainly just productivity tools. But it wasn't too long before we became attached to them thanks to message boards, instant messaging, and the Web 2.0 wave of socially connected services. With smartphones, which are almost always within arm's reach, computers have now become an essential part of our lives.
A big reason for that is capacitive touchscreen technology, first made popular by the iPhone. Those touchscreens introduced us to a whole new array of computing gestures that have fundamentally changed the way we interact with our gadgets. Think about that moment when you first learned the now ubiquitous pinch-to-zoom gesture, or how often you’ve tried to interact with a non-touchscreen display with gestures out of habit.
Capacitive touchscreens simply felt natural. With keyboards, mice, and trackpads, we were basically manipulating our computers like marionettes. With touchscreens, the only barrier we have in our interactions with the digital world is a very thin layer of glass.
I’ve noticed this among young children raised on touchscreen devices. It doesn’t take much for them to learn how to swipe to open up a smartphone and launch Angry Birds. But put a mouse in their hands? You see only confusion. A disembodied computing experience simply doesn’t make sense to them.
“I’m amazed at the nighttime usage for mobile — do people not sleep? … It kind of tells you we’re already in the wearable space; it’s [mobile] glued to consumers’ bodies.”
Wearable devices like the Jawbone Up, Nike FuelBand, and a plethora of smartwatches bring us even closer to our smartphones. Wearables typically offer motion sensors and displays of their own, which serve as both a conduit and window into the pulse of information from our mobile devices.
But even though the wearable category is still young, we’re already seeing how quickly society can reject devices that bring us too close to our technology. Bluetooth headsets have gone from an innovative wireless communication technology to a symbol of urban blowhards.
Google Glass is seeing similar pushback: Google says it will bring you closer to people in the real world because you won’t have to pull out your smartphone, but in my experience Glass wearers are even worse than smartphone owners when it comes to eye contact. There’s a creepy far-off gaze that Glass owners seem to adopt, as if their brains can’t quite decide if they should pay attention to the people in front of them or the information coming through their tiny heads-up display.
Glass also represents one of the more worrying aspects of our self-imposed technological evolution: It clearly distinguishes the haves from the have-nots. The only people wearing Glass now are Google employees or the exclusive few who’ve been given the privilege to spend $1,500 on a unit that doesn’t really do much. The price will certainly come down eventually — just like it did for the once-exclusive iPhone — but that won’t do much to fix the obnoxious Glasshole image it’s currently inspiring.
Where do we go from here?
For all of its issues, at least you can still take Glass off and revert to being a plain-old human. That won’t be the case once tech starts getting under our skin. Biotech implants will offer an untold wave of benefits — Image enhancement! Invisible headphones! Making NFC finally useful! — but the costs will be even higher.
We’re already seeing bars and casinos banning Google Glass — now imagine if you had no way to remove the offending tech. And let’s not even get into the potentially horrific ways thieves could target expensive biotech implants. (And you can be sure a black market for that stuff will appear.)
The rise of biotech will force us to confront new philosophical issues far beyond the level of discourse we’re used to in the technology world. It’ll also be a boon for the Singularity movement, which has been preaching for years that we’ll see a technological singularity — the point where computers become more intelligent than humans — between 2030 and 2045.
It’s hard to tell if the Singularity will occur as futurists predict — but when it comes to our personal relationship to technology, it’s clear that we’ll only get closer over time.
The big takeaway? The wearable computing era may be the last time we’re clearly separated from our tech.