Aquifi is coming out of stealth to announce Fluid Experience, a new software platform that aims to create a new generation of gesture controls that work better than the Kinect technology for Microsoft’s Xbox One video game console.
The company's main mission is to create adaptive gesture controls that make machines adapt to humans rather than the other way around, delivering the kind of seamless experience that years of motion-sensing hype have promised. Aquifi says its more precise gesture technology will work over wider areas, interpret more than just hand gestures or body positions, and adapt based on machine learning.
The Fluid Experience will work on commodity gesture-control hardware, making innovations in human interface controls much more affordable than in the past, said Nazim Kareemi, the chief executive of Aquifi, in an interview with VentureBeat.
“If Kinect was the first generation, we’re building the second generation,” he said. “In the past, you had to adapt to the machine. We want it to adapt to you.”
The Palo Alto, Calif.-based Aquifi has raised $9 million from Benchmark Capital (the backer of eBay and Instagram), and private investors including Blake Krikorian, founder of Sling Media, and Mike Farmwald, cofounder of Rambus.
“The vision that Aquifi’s founders saw a decade ago for 3D tracking and its consequences is becoming a reality today,” said Bruce Dunlevie, Benchmark Capital partner. “The team has learned a lot from the collective experience of its members, and understands what is needed to make a fluid experience available to everyone, on all their devices.”
Kareemi has assembled a team of veterans from Canesta, the gesture-control chip company that Microsoft bought and used for the second generation of its Kinect motion-sensing controls in the Xbox One. Kareemi was a cofounder of Canesta, and he believes that the Fluid Experience technology will improve upon it in several ways. His team is developing technology based on computer vision, machine learning, and cloud services.
Costly custom sensors held back earlier versions of the technology in the Wii and Xbox 360 game consoles. The Xbox One uses the Canesta-derived hardware and comes with Kinect, but that system is still pricey at $500. Kareemi wants something with a much lower cost, based on software that can run on many different platforms and be upgraded in the field.
“That will make it adaptable and easier to use,” he said.
It will also be more precise. Back in 2010, the first Kinect technology could detect 3D points in a 320-by-240 grid. The newer Kinect can detect on a 640-by-480 field, while Aquifi can detect points in a 1,280-by-720 grid.
He wants the machine to be able to interpret someone’s movements and gestures, removing barriers and anticipating actions. The technology will work on commodity image sensors and be available for use with smartphones, tablets, PCs, wearable devices and other machines.
The wearable apps could use augmented reality, which combines virtual imagery with the real world, for tasks like scanning an object or mapping a room with a smartphone. The technology could also work in automotive safety applications that combine voice recognition and motion detection.
Devices could go into "autolock" when an unrecognized face looks at them, enhancing security. They could also power down when they detect that no one is looking at them, saving power, and wake when they detect a face, reducing startup times.
“Within the next decade, machines will respond to us and our needs through intuitive interpretation of our actions, movements, and gestures,” said Kareemi. “Our fluid experience platform represents the next generation in natural interfaces, and will enable adaptive interfaces to become ubiquitous, thanks to our technology’s breakthrough economics.”
Aquifi will introduce the technology to developers over the next six months, and it expects the first Fluid Experience devices will debut in the first half of 2015.
Kareemi cofounded Aquifi in 2011, and it now has 29 employees. The founders set out to answer questions like how to make machines adapt to people, how to create an enduring platform, and how to enable a ubiquitous solution. The company holds four patents and has applied for more than 30 others.