Media metadata company Gracenote is embracing machine learning and audio analysis as it focuses on helping music-streaming providers identify music styles on a track-by-track basis.

Genre classification is a relatively easy task for most music-obsessed humans. But in the music industry, it’s usually applied at a broad artist level, which doesn’t always convey the full picture. For example, Taylor Swift may be classified as “pop” or “country,” even though many of her individual tracks may lean toward a different style. Similarly, the Beastie Boys have shifted between punk and hip-hop, while Bob Dylan was very much a folkie before he picked up an electric guitar.

The point is, not all artists can be shoehorned into a single category of music. That is why Gracenote is lifting the lid on a new product it calls “Sonic Style,” a song categorization system that automatically classifies the style of individual music recordings.

The story so far

Though Gracenote has expanded its metadata platform to sports, movies, and TV, music is very much its bread and butter. Gracenote probably isn’t a name on the lips of most music lovers around the world, but the company played a pivotal part in the early days of music-recognition technology: it powered the database used to automatically recognize songs, albums, and artists when a CD was inserted into a PC’s disc drive. This meant that when you ripped a CD to your hard drive, you didn’t have to manually input all the key information related to the album. The company was formerly known as the Compact Disc Database (CDDB), after all.

Much has changed in the two decades since the company was founded — for example, Gracenote now supports iTunes and Apple Music, Amazon Music, in-car entertainment systems, and on-screen TV listings. But the underlying premise remains the same: Gracenote holds vast swathes of media metadata that it licenses to broadcasters and technology companies alike.

The value locked up in this arsenal of data led Sony to acquire Gracenote in 2008 for around $260 million. Five years later, Sony sold the company to Tribune Media for $170 million, and Nielsen bought it three years after that for $560 million, a huge markup.

Music and the machines

Spotify has built its business on intelligent music recommendations, powered in part by its 2014 acquisition of the Echo Nest. For the uninitiated, the Echo Nest is a music data intelligence platform that maps out listeners’ tastes to give fans more informed recommendations. Spotify went on to launch the much-lauded Discover Weekly playlist, among other algorithmically generated playlists, which together have steered some listeners away from traditional albums.

Playlists, curation, and recommendations play a crucial part in Spotify’s consumer pitch, as they do with Apple Music and other rivals in the space. And being able to identify specific styles of music automatically through machine listening can help with these endeavors.

Gracenote will serve the music industry with a new style-descriptor dataset that promises to help music-streaming providers deliver “a more perfect playlist,” according to a company statement. For example, if someone loves loud guitar music but not softer folk music, a streaming provider can ensure that Bob Dylan’s early years are omitted from that user’s playlists.

However, Gracenote wouldn’t reveal which music-streaming providers, if any, it is working with for the launch of Sonic Style.

“These new turbocharged style descriptors will revolutionize how the world’s music is organized and curated, ultimately delivering the freshest, most personalized playlists to keep fans listening,” said Brian Hamilton, general manager of music at Gracenote.

Besides helping match listeners with artists, this technology could also play a big part in winning Gracenote new fans at record labels and publishing houses, given that its data could help them spot trends.

The style “taxonomy” was developed by analyzing various elements of a recording, such as melody, rhythm, harmony, vocal character, timbre, and instrumentation, according to a spokesperson, with humans helping to refine the final classification system of more than 400 styles. Using this dataset, Gracenote can then train its systems to classify the styles of millions of songs.
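To make that concrete, here is a minimal sketch, in Python, of what track-level style classification can look like: summarize timbre, harmony, and rhythm features from a short audio excerpt, then fit a small neural network on labeled examples. The feature choices, the hypothetical `labeled_tracks` list, and the classifier architecture are all assumptions for illustration; Gracenote has not published its actual pipeline.

```python
# A minimal sketch of per-track style classification, not Gracenote's
# (unpublished) pipeline. `labeled_tracks` is a hypothetical list of
# (audio_path, style_label) pairs.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def extract_features(path):
    signal, sr = librosa.load(path, duration=30.0)           # 30s excerpt
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)  # timbre
    chroma = librosa.feature.chroma_stft(y=signal, sr=sr)    # harmony
    tempo, _ = librosa.beat.beat_track(y=signal, sr=sr)      # rhythm
    # Collapse the time axis so every track yields a fixed-length vector.
    return np.concatenate(
        [mfcc.mean(axis=1), chroma.mean(axis=1), np.atleast_1d(tempo)]
    )

X = np.stack([extract_features(path) for path, _ in labeled_tracks])
labels = [style for _, style in labeled_tracks]

# A small feed-forward network stands in for whatever architecture
# Gracenote uses; the 400+ styles become the classifier's output classes.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, labels)
print(clf.predict([extract_features("new_recording.mp3")]))
```

In a production system, the mean-pooled features would likely give way to richer representations (spectrograms fed to convolutional networks, for instance), but the shape of the problem, audio in and style label out, stays the same.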

“Now that playlists are the new albums, music curators are clamoring for deeper insights into individual recordings for better discovery and personalization,” added Hamilton. “To achieve scale, Sonic Style applies neural network-powered machine learning to the world’s music catalogs, enabling Gracenote to deliver granular views of musical styles across complete music catalogs.”

This has interesting implications for the burgeoning smart speaker space, with the likes of Amazon’s Echo and Google Home encouraging users to request music through verbal descriptions. If someone doesn’t have a specific album or artist in mind but just wants to hear “teen pop” or “contemporary country,” for example, Gracenote could be in a position to power such services while offering a selection that spans more artists.
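The mechanics of such a request are straightforward once every track carries style tags: the spoken description reduces to a metadata filter over the catalog. The toy catalog and query handling below are invented for illustration and bear no relation to Gracenote’s real schema or any assistant’s API.

```python
# An illustrative sketch of serving a spoken style request from
# per-track style metadata; the catalog and schema are invented.
CATALOG = {
    "Track A": {"artist": "Artist 1", "styles": {"teen pop"}},
    "Track B": {"artist": "Artist 2", "styles": {"contemporary country"}},
    "Track C": {"artist": "Artist 3", "styles": {"teen pop", "dance pop"}},
}

def tracks_for_request(utterance: str) -> list[str]:
    # Strip a simple spoken prefix and match the rest against style tags.
    style = utterance.lower().removeprefix("play some ").strip()
    return [track for track, meta in CATALOG.items() if style in meta["styles"]]

print(tracks_for_request("Play some teen pop"))  # ['Track A', 'Track C']
```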
