[Full disclosure: Body Labs is backed by Intel Capital and is working with Intel RealSense to develop 3D body scanning software for smartphones.]
2016 marks the beginning of a fundamental leap forward in smartphone hardware: depth-sensing cameras. We’ve already seen accelerometers, gyroscopes, barometers, cameras, and fingerprint sensors become common on even the most budget-friendly smartphones. And, thanks to the accelerated hardware arms race among the world’s top manufacturers, we’ll see depth sensors on some consumer tablets hitting the market this year.
These sensors supplement today’s monocular RGB images with per-pixel depth information, often derived by projecting a pattern of infrared light onto a scene and measuring how it deforms. The technology will enable enhanced object, body, facial, and gesture recognition.
Depth sensors will not only make a smartphone more aware of its immediate environment but will also improve the ability to accurately isolate and identify a user’s body in space. Ultimately, this will spark an explosion in consumer applications ranging from virtual apparel try-on to personalized VR experiences mapped to your living area.
The true game-changing element will be the form factor, not the technology. Depth-sensing cameras are hardly new — Microsoft released the Kinect in 2010. However, if you’re Google Maps trying to navigate indoors or Oculus attempting to improve the immersive experience of virtual reality (VR), you need to solve what I refer to as an “input barrier” with enabling hardware. This barrier largely consists of three main challenges:
1. Cost. Previously, comparable technology cost anywhere from $10,000 to more than $250,000 for a high-end laser scanner, well out of reach of the average consumer. The recent commoditization of depth-sensing cameras, however, has made the cost of building them into smartphones far easier to justify.
2. Convenience. With sensors from Intel RealSense, Google’s Project Tango, and Apple’s PrimeSense, depth-sensing cameras are now small enough to fit inside a smartphone. And because the sensor ships inside the phone, users only have to commit to purchasing a single device.
3. Adoption. This technology isn’t valuable if a large number of hardware manufacturers refuse to adopt it. Fortunately, the smartphone industry benefits from a very fast product refresh cycle. Unlike televisions, which are upgraded by U.S. consumers every seven or eight years, smartphones are upgraded approximately every 18 months. It’s why more Americans could have a depth sensor in their phone before having a 4K TV in their living room.
With Intel, Google, and possibly even Apple poised to push their sensors onto mobile devices this year, the data from 3D sensors could quickly become a viable platform for developers to build on. Building software around 3D data is challenging, but companies like mine are already working to transform the raw data generated from these sensors into easy-to-use 3D models that enable new applications and functionality. As a result, this new platform could disrupt several major markets before the year is over:
1. Digital photography
By incorporating 3D information into photos and video, we will have new options for editing digital content. For example, you could automatically remove and replace the background of an image, or segment (i.e., “cut out”) a specific object for use as a standalone graphic, a capability that could become a valuable feature of smartphone photography.
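As a rough illustration of how depth data makes this easy, here is a minimal sketch of background replacement, assuming NumPy arrays for an aligned color image and per-pixel depth map. The function name and the 1.5-meter threshold are invented for this example; a production pipeline would also filter sensor noise and refine the mask edges:

```python
import numpy as np

def replace_background(rgb, depth, new_background, max_subject_depth=1.5):
    """Keep pixels nearer than max_subject_depth (meters); fill the rest
    from new_background. All arrays are assumed to be the same size:
    rgb and new_background are (H, W, 3), depth is (H, W) in meters."""
    foreground = depth < max_subject_depth  # True where the subject sits
    # Broadcast the 2D mask across the color channels and composite.
    return np.where(foreground[..., None], rgb, new_background)
```

A single threshold works when the subject is clearly separated from the background; anything more cluttered calls for proper segmentation on top of the raw depth.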
2. Mapping and navigation
Google Maps is the most widely used navigation software in the U.S., but its usefulness ends at the building entrance. Where GPS signals can’t reach, depth-sensing technology can provide mapping applications with accurate 3D models of building interiors, along with a user’s position and orientation inside them, to guide people directly to a product or service. The University of Oxford has also been experimenting with depth sensors for several years on a set of “smart glasses” that could help the visually impaired navigate the world around them.
3. Fashion and apparel
Apparel fit is estimated to be a multibillion-dollar problem for retailers, with more than a third of online apparel sales returned due to inaccurate sizing. Depth sensors in smartphones could enable accurate sizing recommendations and custom tailoring without a user ever leaving their living room. Retailers could pair sizing recommendation engines such as True Fit with tools that capture a customer’s personalized body shape, driving down returns and improving their knowledge of their customers.
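On the recommendation side, the core logic can be as simple as comparing scanned measurements against a size chart. The sketch below is a toy example with invented chart values and a single chest measurement, not any retailer’s actual data; a real engine such as True Fit weighs many measurements and fit preferences:

```python
# Toy size chart: maximum chest circumference (cm) each size accommodates.
# These values are illustrative only.
SIZE_CHART = {"S": 94, "M": 100, "L": 106, "XL": 112}

def recommend_size(chest_cm):
    """Return the smallest size whose limit fits the scanned measurement."""
    for size, limit in SIZE_CHART.items():
        if chest_cm <= limit:
            return size
    return "XXL"  # beyond the chart; a real app might suggest custom tailoring

print(recommend_size(98.5))  # -> M
```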
4. Virtual reality (VR) and augmented reality (AR)
A central challenge in VR is enhancing the sense of presence, which depends on three major factors: 1) the use of your hands, 2) occlusion (the effect of one object blocking another from view), and 3) movement through the environment. With a VR headset like Samsung’s Gear VR driven by a depth-sensing smartphone, a game could identify obstructions in the real world and decide how to render them in the virtual one. By maintaining a sense of the player’s real surroundings alongside the virtual ones, users could roam freely within a game and even have it adapt to their living space.
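At its core, that occlusion handling is a per-pixel depth test. The sketch below uses plain NumPy to stand in for what a real engine would do in its GPU depth buffer, assuming aligned color and depth images for both the camera view and the rendered virtual layer (all names here are illustrative):

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Draw a virtual pixel only where it is nearer than the real surface,
    so real-world objects correctly hide virtual content behind them.
    Color arrays are (H, W, 3); depth arrays are (H, W) distances."""
    virtual_in_front = virtual_depth < real_depth
    return np.where(virtual_in_front[..., None], virtual_rgb, real_rgb)
```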
5. Product design and 3D printing
The 3D printer market is estimated to grow to $5.4 billion by 2018. With depth sensors, users could scan real-world objects or people from their smartphones in a matter of seconds, and artists could then seamlessly build, print, and manufacture personalized products at scale. This technology will reduce both the expertise and the overhead required to design and print in 3D. Companies such as Nervous System are already designing in 3D and using their Kinematics system for 4D printing, which creates complex, foldable forms composed of articulated modules. Combined with services like Voodoo Manufacturing, which delivers fast, affordable, and scalable 3D printing, this would drive down the cost and time associated with product development cycles.
6. Health and fitness
There are more than 138 million health and fitness club members worldwide, in a market estimated at $78.17 billion. These clubs have three priorities: 1) bring in new members, 2) retain current members, and 3) get existing members to spend more on additional services. To justify new services, health clubs are looking to equip trainers with depth-sensing cameras that can visualize recorded progress, like weight loss or muscle growth, over the course of a workout regimen. The same capability could unlock new features for apps such as Google Fit, Apple’s HealthKit, and Samsung’s S Health by letting them track shape change over time. Companies such as VirtualU are also partnering with health clubs to provide vivid health and fitness metric tracking that goes beyond antiquated measurements like BMI.
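As a minimal sketch of what tracking shape change over time might look like in such an app (the dates and waist measurements below are invented; a real app would derive them from repeated scans):

```python
from datetime import date

# Invented example data: waist circumference (cm) measured at each scan.
waist_cm = {
    date(2016, 1, 4): 92.0,
    date(2016, 2, 1): 90.5,
    date(2016, 3, 7): 89.2,
}

def total_change(series):
    """Change between the earliest and latest measurements in the series."""
    ordered = sorted(series)
    return series[ordered[-1]] - series[ordered[0]]

print(f"Waist change: {total_change(waist_cm):+.1f} cm")  # -> -2.8 cm
```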
Those are just a few of the many possible applications for depth-sensor-enabled smartphones. But it will take hard work and a lot of investment to bring this potential to life. Even high-quality 3D sensors are only useful if supported by a robust collection of software libraries.
And sensor makers will have to adhere to a standard set of APIs to prevent platform fragmentation, an issue that’s already prevalent in the smartphone industry. Such APIs will also need to mitigate what is currently a steep learning curve when it comes to application development around 3D images. From first-hand experience, my colleagues and I can attest that working with raw RGB-D data currently draws too heavily on PhD-level machine learning and requires more than a passing familiarity with relevant academic research.
We anticipate that companies releasing new 3D sensors will need to invest heavily in the software development resources these sensors require. Even so, the enormous potential value of these new devices will more than outweigh the investment needed for their adoption. With depth sensors making their way into devices this year, a smartphone industry that has lately settled into iterative upgrades is potentially set for another exciting shake-up.
Eric Rachlin is a cofounder of Body Labs and leads the company’s product development. Before Body Labs, he worked as a senior research scientist at the Max Planck Institute for Intelligent Systems, where he helped manage a team of computer vision researchers building a statistical model of human pose and shape.