Exclusive

When Oculus meets Kinect, virtual reality gets a whole lot more real

Avatars inside AltSpaceVR's Oculus environments have arms thanks to an integration with Microsoft's Kinect.

Image Credit: AltSpaceVR

REDWOOD CITY, Calif. — In the real world, we have arms and we like to use them. To point. To make points. To scratch our heads. To gesture randomly, or wildly. So why wouldn’t we want the same thing for our avatars in a virtual world?

That’s the rationale for the newest feature of AltSpaceVR’s Oculus virtual reality environments — an integration of Microsoft’s Kinect motion sensor that recognizes your arm and body movements and translates them to your virtual doppelganger.


AltSpaceVR, a startup developing shared VR spaces — places where people from across the globe can gather to watch movies, catch the Super Bowl, or hold meetings — may be the first to combine Kinect with Oculus, at least in a commercial application. And thanks to that implementation, you can now have much richer, more realistic VR exchanges.

Last week, I visited AltSpaceVR’s offices in this city 27 miles south of San Francisco for a demo. Although the company also showed me its new multi-user Netflix content synchronization — meaning multiple people can “sit” together and watch movies or TV shows on Netflix together in a virtual theater — I was much more taken with the Kinect feature.


Standing up in a small conference room, a set of Oculus goggles on my head, I began waving my arms. Immediately, my “arms” appeared in front of me, moving just as I was moving my real limbs. Up, down, left, right. Together, or one at a time — whatever I was actually doing.

Above: Look, it’s my arms!

Image Credit: AltSpaceVR

“We imagine this as a one-to-many type of thing,” said Eric Romo, AltSpace’s CEO, “where someone on stage could be gesturing or pointing to a slide. Or musicians, for example, where you might see the musician making realistic gestures.”

Yes, that’s right. VR air guitar.

Or, as Romo showed me as he mimed the letters from the famous Village People song, a virtual “YMCA” dance.

Right now, there’s not much more to it, and AltSpace’s avatars are still pretty rudimentary. Without the Kinect integration, they look like thin robots with no arms, and with Kinect, they … have arms.

But spend any time inside AltSpace’s Oculus environments and you quickly realize that the addition of realistic gestures brings a level of emotional realism to the avatars that wasn’t there before. Although there was some realism thanks to the movement of the avatars’ heads — which mirrored the Oculus-wearing user’s head movements — it was pretty limited. And despite the fact that the Kinect doesn’t capture the movement of individual fingers, meaning that the hands are sort of monolithic, there’s definitely something important going on.


It’s not hard to imagine that, as AltSpaceVR improves the integration, or as other VR developers work out their own implementations, our digital representations in virtual reality — be it Oculus, Magic Leap, Microsoft’s HoloLens, or Sony’s Morpheus — will successfully convey all the little bits of information and nuance that we do every day with our carbon-based bodies.

But it might be a while, as those in the movie industry know all too well. As Romo put it, recalling a recent conversation with someone in film visual effects, “‘We can’t even make you look like you if we have 20 hours [of image processing time] per frame.’”
