
Recording video with Google Glass will never be the same

LiveLight in action

Image Credit: Bin Zhao

Shooting video on Google Glass is about to get interesting.

A small team at Carnegie Mellon University’s Machine Learning Department has created an app called LiveLight, which evaluates the action and important sequences in video. The video-highlighting algorithm was developed by CMU student Bin Zhao and professor Eric Xing, and the pair aims to revolutionize how people shoot and edit video.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":1497488,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"mobile,","session":"A"}']

LiveLight scrolls through video footage searching for what the researchers call “visual novelty.” It then automatically builds a video summary that lets viewers jump straight to the important segments of the footage. The technology has broad implications for the enormous security and personal video markets.

“If something happens within the video, we can red-flag it. If you have a 5-minute-long video, you have to endure the first 3 minutes before the action starts. Our algorithm tags the parts of the action you need to see,” Zhao, a CMU Ph.D. student, told VentureBeat.


“The algorithm,” Zhao said, “never looks back.”

LiveLight’s technology holds such promise that the industrious Xing and Zhao have formed a company around it, called PanOptus, to market their algorithm. While the app is market-ready, the version being designed for Google Glass, three pairs of which Zhao and company have on hand, will be available shortly.

Above: Bin Zhao

Image Credit: Bin Zhao

Indeed, when paired with a GoPro or Google Glass, the algorithm can automatically upload its trailers to social media sites. The duo said this summarizing “avoids generating costly Internet data charges and tedious manual editing on long-form videos.”

So shooting video on Google Glass with LiveLight means that instead of you scrolling through 60 minutes of footage, the algorithm automatically skips the boring stuff and highlights the meat.

“There are two markets for this,” the studious Zhao said via cell phone. “The professional market, like security and law enforcement, and the civilian market, which is huge. Right now, people don’t have an effective way to process videos.”

You can check out the video here.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":1497488,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"mobile,","session":"A"}']

Zhao said the LiveLight video summary occurs in “quasi real-time,” with a single pass through the video. While not instantaneous, editing a one-hour video for its best segments can be done on a conventional laptop in less than 30 minutes, meaning the algorithm works through footage at better than twice playback speed. With more powerful devices, processing times can be shortened further.

The duo laid out how LiveLight works this way:

“As the algorithm processes the video, it compiles a dictionary of its content. The algorithm then uses the learned dictionary to decide in a very efficient way if a newly seen segment is similar to previously observed events, such as routine traffic on a highway. Segments thus identified as trivial recurrences or eventless are excluded from the summary. Novel sequences not appearing in the learned dictionary, such as an erratic car, or a traffic accident, would be included in the summary.”

“LiveLight provides a ranked list of novel sequences for a human editor to consider for the final video. In addition to selecting the sequences, a human editor might choose to restore some of the footage deemed worthless to provide context or visual transitions before and after the sequences of interest.”
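The quoted description maps onto a simple loop: score each new segment by how well the current dictionary reconstructs it, keep the poorly explained ones, and fold what has been seen back into the dictionary. Below is a minimal Python sketch of that idea, not PanOptus’s actual implementation: it substitutes plain least-squares reconstruction error for the researchers’ learned dictionary, and the feature vectors, threshold, and atom limit are all invented for illustration.

```python
# Toy sketch of dictionary-based novelty scoring, in the spirit of the
# description above. Everything here (features, threshold, atom limit)
# is a hypothetical stand-in, not the LiveLight/PanOptus method itself.
import numpy as np

def novelty_score(segment, dictionary):
    """Return how poorly `segment` is explained by the dictionary atoms.

    segment:    1-D feature vector for a video segment.
    dictionary: matrix whose columns are feature vectors of segments
                the algorithm has already seen.
    """
    # Best least-squares reconstruction of the segment from known atoms.
    coeffs, *_ = np.linalg.lstsq(dictionary, segment, rcond=None)
    residual = segment - dictionary @ coeffs
    return np.linalg.norm(residual) / (np.linalg.norm(segment) + 1e-12)

def summarize(segments, threshold=0.5, max_atoms=256):
    """Single pass over segment features; keep the novel ones."""
    dictionary = segments[0][:, None]    # seed the dictionary
    summary = [0]                        # first segment is novel by definition
    for i, seg in enumerate(segments[1:], start=1):
        if novelty_score(seg, dictionary) > threshold:
            summary.append(i)            # red-flag: include in the summary
        if dictionary.shape[1] < max_atoms:
            # Grow the dictionary so routine content stops looking novel.
            dictionary = np.column_stack([dictionary, seg])
    return summary

# Toy usage: 50 near-identical "highway" segments with one outlier.
rng = np.random.default_rng(0)
base = rng.normal(size=128)
segments = [base + 0.01 * rng.normal(size=128) for _ in range(50)]
segments[30] = rng.normal(size=128)      # the "erratic car"
print(summarize(segments))               # -> [0, 30]
```

Run on 50 near-identical “highway” segments with one outlier spliced in, the sketch flags only the opening segment and the outlier, mirroring the erratic-car example in the quote. A real system would learn compact atoms rather than stockpile raw segments, but the single-pass, never-look-back structure is the same one Zhao describes.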

Funding for LiveLight’s development came from, surprise, Google, the National Science Foundation, the Office of Naval Research, and the Air Force Office of Scientific Research. Heady stuff.

Zhao and partner Xing see a steady stream of clients lining up for LiveLight.

[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":1497488,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"mobile,","session":"A"}']

“We see this as potentially the ultimate unmanned tool for unlocking video data,” Xing said.

“Video has never been easier for the average person to shoot, but reviewing and tagging the raw video remains so tedious that ever larger volumes of video are going unwatched or discarded. The interesting moments captured in those videos thus go unseen and unappreciated.”

Not anymore.
