ZigFu, a startup from the most recent batch of Y Combinator companies, is all about making gestural interfaces faster and easier to create.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":323791,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"dev,","session":"D"}']For developers, the company provides bindings to the Unity3D game engine, and it also gives developers a handful of sample apps to start their own work.
“We haven’t released our UI components yet (we still have to remove the curse words from the code comments), but it takes about a minute to make a hand-gesture navigator for a list of elements,” said Hirsch. “For example, we can turn a catalog browser or slide show presentation into a motion-controlled application in a matter of a few minutes.”
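ZigFu hasn't published that component API yet, but the idea reduces to mapping a lateral hand swipe onto a selection index. Here is a minimal Unity C# sketch of such a navigator; the OnHandMoved callback and the way hand positions arrive are our own assumptions for illustration, not ZigFu's actual interface.

```csharp
using UnityEngine;

// Minimal sketch of the hand-gesture list navigator Hirsch describes.
// Assumption: some tracking component calls OnHandMoved with the user's
// hand position each frame; ZigFu's real component API may differ.
public class GestureListNavigator : MonoBehaviour
{
    public string[] items = { "Item A", "Item B", "Item C" };
    public float swipeThreshold = 0.15f; // lateral hand travel, in meters

    int selectedIndex = 0;
    Vector3 lastHandPos;
    bool tracking = false;

    // Hypothetical hook, invoked by a hand-tracking component.
    public void OnHandMoved(Vector3 handPos)
    {
        if (!tracking)
        {
            lastHandPos = handPos;
            tracking = true;
            return;
        }

        float dx = handPos.x - lastHandPos.x;
        if (Mathf.Abs(dx) > swipeThreshold)
        {
            // One swipe moves the selection one step; resetting the
            // reference point keeps a single swipe from firing twice.
            selectedIndex = Mathf.Clamp(selectedIndex + (dx > 0 ? 1 : -1),
                                        0, items.Length - 1);
            lastHandPos = handPos;
            Debug.Log("Selected: " + items[selectedIndex]);
        }
    }
}
```

The threshold-and-reset pattern is the main design wrinkle in gesture navigation: unlike a mouse click, a hand swipe is a continuous motion, so one physical gesture must not register as several steps.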
We asked the ZigFu team exactly how their offering works. Hirsch told us it’s “one part laser, two parts camera and three parts software magic.”
He continued, “We use the OpenNI framework for natural interaction, which provides an abstract framework for using skeleton tracking or hand tracking.”
OpenNI is an organization working to ensure that gesture-focused hardware, middleware and software share common standards and interoperate. Its open-source framework provides an API for building natural-interaction applications on top of that hardware and software.
“We’ve bound OpenNI to the Unity game engine to provide high-quality cross-platform development,” said Hirsch. “In Unity, we expose skeleton tracking and hand tracking; and on top of hand-tracking, we provide user interface components like menus, lists and feeds.”
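To make that layering concrete, here is a hedged sketch of what consuming hand tracking inside Unity can look like: a component that smooths the tracked hand point and drives an on-screen cursor, the kind of primitive a gesture menu or list would sit on top of. The IHandTracker interface is invented for illustration; OpenNI and NITE deliver comparable per-frame hand points, but ZigFu's real binding may look different.

```csharp
using UnityEngine;

// Invented interface standing in for whatever the OpenNI/NITE binding
// actually exposes in Unity; real hand trackers supply comparable data.
public interface IHandTracker
{
    bool HandDetected { get; }
    Vector3 HandPosition { get; } // tracked hand point in world space
}

// Drives an on-screen cursor from the tracked hand point.
public class HandCursor : MonoBehaviour
{
    public Transform cursor;      // cursor object, assigned in the editor
    public float smoothing = 10f; // higher is snappier, lower is smoother

    public IHandTracker tracker;  // supplied by the tracking layer

    void Update()
    {
        if (tracker == null || !tracker.HandDetected) return;

        // Low-pass filter the raw hand point so the cursor doesn't jitter.
        cursor.position = Vector3.Lerp(cursor.position,
                                       tracker.HandPosition,
                                       smoothing * Time.deltaTime);
    }
}
```

Smoothing matters here because raw depth-camera hand points are noisy; without the filter, a cursor or menu highlight visibly shakes even when the hand is held still.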
In addition to OpenNI and Unity, ZigFu uses PrimeSense NITE, middleware that lets a computer perceive the three-dimensional world; the same technology has been used extensively in Kinect hacking.
Naturally, ZigFu supports the Kinect software development kit, and Hirsch said the startup is “working with other vendors of computer vision middleware to incorporate additional functionality like gesture and face recognition.”
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":323791,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"dev,","session":"D"}']
Using all these tools, Hirsch said, “It takes about two minutes of click-and-drag work to get your own motion-controlled avatar up and running. The task involves connecting the shoulders, elbows, knees and hips on your 3-D character to the data output by the skeleton tracking module. Once you have a game engine like Unity combined with avatar control, the possibilities expand from there.”
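In code terms, that click-and-drag wiring comes down to copying tracked joint orientations onto the bones of a rigged character every frame. A hedged sketch, with GetJointRotation standing in for whatever the skeleton-tracking module actually outputs:

```csharp
using UnityEngine;

// Sketch of avatar control: copy tracked joint orientations onto the
// bones of a rigged character. The skeleton-data source is assumed.
public class AvatarController : MonoBehaviour
{
    // Bones of the 3-D character, assigned by drag-and-drop in the editor,
    // which is the "click-and-drag work" Hirsch refers to.
    public Transform leftShoulder, rightShoulder;
    public Transform leftElbow, rightElbow;
    public Transform leftHip, rightHip;
    public Transform leftKnee, rightKnee;

    void Update()
    {
        leftShoulder.rotation  = GetJointRotation("LeftShoulder");
        rightShoulder.rotation = GetJointRotation("RightShoulder");
        leftElbow.rotation     = GetJointRotation("LeftElbow");
        rightElbow.rotation    = GetJointRotation("RightElbow");
        leftHip.rotation       = GetJointRotation("LeftHip");
        rightHip.rotation      = GetJointRotation("RightHip");
        leftKnee.rotation      = GetJointRotation("LeftKnee");
        rightKnee.rotation     = GetJointRotation("RightKnee");
    }

    // Placeholder for the skeleton-tracking module's per-joint output;
    // a real tracker supplies live orientations here.
    Quaternion GetJointRotation(string joint)
    {
        return Quaternion.identity; // stand-in value
    }
}
```

Once every bone follows its tracked joint, the character mirrors the player, and the rest of the engine's features (physics, animation blending, scripting) apply on top, which is the point of Hirsch's remark that the possibilities expand from there.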
Here’s some footage of the technology in action:
Hirsch said that before he started working on ZigFu, he had been working on motion apps for teaching dance routines and even military maneuvers. As a result, he sees potential for motion-controlled, natural interfaces in real-world, real-work environments — think air traffic control training or physical therapy — just as much as in creative play and gaming.
“We think we’ve changed the task of producing a motion app into a design job rather than a development job,” Hirsch said. “There is a lot of interest in this technology for digital signage and interactive marketing, as well as simulation and training.”
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":323791,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"dev,","session":"D"}']
Right now, Microsoft’s Kinect is the most efficient way for developers to get started with ZigFu’s tools. The startup also supports the ASUS Xtion depth sensor.
However, said Hirsch, “The TV is definitely the next frontier for motion tracking technology. Flicking through channels and navigating your DVR is just one use case. Imagine waving your hand through a list of friends to place a video call. Or imagine shopping on your TV with hand gestures and using a virtual fitting room to try on products. We believe that an ecosystem of apps and games will be the major differentiating factor in the smart TV race.”
Stay tuned for ongoing coverage of new Y Combinator companies from the 2011 class.
Image courtesy of popculturegeek.
[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":323791,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"dev,","session":"D"}']