Unlike every other hot topic from 2016, voice interaction and AI do not bore me. I hope you feel the same, because we’ll be hearing even more about both in 2017. Here’s what we should keep our eyes (and ears) on.

1. Standards emerge

As standards and intuitive conventions emerge, and as digital designers everywhere recognize how momentous that is, voice interactions will improve greatly. The set of definitions and models we create for voice interaction with assistants like Alexa and Siri today will influence the shape of things for a long time to come. For an analogy, think back to the interaction models that caught on over the past 20 years: how we browse the web, or the common icons, forms, and gestures we use across the app market. Standards for how we interact with voice assistants will emerge in the same way.

2. Voice-activated AI encroaches on Google search dominance

Voice interaction experiences represent a small window of opportunity for competitors to loosen Google’s stranglehold on the search market. For the most part, voice interaction remains an exercise in precise, contextual search results. It’s still a “best algorithm wins” game, but all of the major algorithms are pretty good now. Just as everyone got hooked on Googling for information, they could just as easily get hooked on a new interaction habit with a voice assistant that isn’t powered by Google. In fact, Microsoft’s Bing currently sits behind Amazon’s Echo, Apple’s Siri, and, of course, Cortana. I’m personally not betting that Google gets unseated, since the company is running hard at this space, but it is a window of opportunity, and I wouldn’t be surprised if new entrants steal coveted user loyalty.

3. They give parental advice

Are you, or have you ever been, a new parent? If not, let me bring you up to speed: new moms and dads have their hands full. Literally. The conditions for voice interaction are perfect in a home with young children. Voice lets you give your child a bath or cook a meal without interruption, and every parent can tell you that if you try to stay connected with your mobile phone in proximity to a young child, the phone will be ripped from your hands, used as a teething toy, and then flung across the room at the cat. It turns out phones are expensive to replace. Want to know how to improve your voice efforts? Talk to new parents.

4. AirPods help you communicate with the AI

I think Apple sees AirPods as something akin to Google Glass, but with less risk of negative social bias. AirPods, which have been called a computer for your ears, present an always-on, always-at-the-ready opportunity for personal connection to content. Apple is simply placing a first bet on a nonvisual connection, at least until optical wearable technology matures. In 2017, we will see this new product become more and more useful as an interface to AI assistance.

5. Brand personalities go multi-platform

If Max Headroom were around today, he’d be impressed by what’s to come in 2017. The closer voice interactions and AI get to natural-language conversation, the more personality we can give our virtual agents. It’s important to craft unique personalities and a tone of voice that align with a core brand, or potentially to introduce a new character that can shift perceptions or attract a new audience. In either case, look for these new personalities to make their debuts through voice in 2017, but more importantly, look for them to take hold across multiple channels, platforms, and apps. In the very near future, we will see a new TV brand character that was originally born out of a digital voice experience.

6. Hardware peripheral support emerges

Siri now works with third-party apps, a clear reminder that we’re heading toward a voice-enabled future. The only issue is that many of those products were not engineered with listening and speaking in mind. The microphones in most multi-use devices can’t keep up with those in purpose-built hardware such as the beloved Amazon Echo and Google Home. We will continue to see PCs ship with voice interaction software but imperfect hardware out of the box, and I expect Apple to launch purpose-built peripherals as well.
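To make the software side of that concrete, here is a minimal sketch of what “Siri working with apps” looks like for developers: a SiriKit intent handler (the Intents framework introduced in iOS 10) that lets an app respond when a user asks Siri to send a message. The class name and response handling are illustrative, not a production extension.

```swift
import Intents

// Minimal SiriKit sketch: an Intents extension handler that lets Siri
// hand a "send message" request to a third-party app.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Called once Siri has resolved the recipients and message text.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // A real app would pass intent.recipients and intent.content
        // to its messaging service here.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```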

7. Your CMS starts to listen to you

It may be the last thing many people think of, but it will be one of the first things companies notice. Major website content management systems (CMS) and digital asset management (DAM) platforms are about to fall woefully short of consumer demands, because companies are still building their infrastructure around text, forms, and image assets. Many companies have not prepared themselves for the next three to five years, and as they begin to look into delivering voice-enabled experiences in 2017, the gaps will begin to appear. That won’t do. Companies need to start building for the very near future if they want to deliver product-related audio content across new sets of channels and interactions. Now that the CMS has become the hub of the content marketing cloud, it must evolve to facilitate voice interaction, just as it evolved to syndicate content across channels.
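As a rough illustration only, and purely as an assumption about how such a content model might look, the sketch below shows a product entry that carries an audio rendition and speech markup alongside its existing web copy. The type and field names are hypothetical, not taken from any particular CMS.

```swift
import Foundation

// Hypothetical CMS content model: the same product entry serves both
// visual channels (title, body copy, image) and voice channels
// (a pre-rendered audio file and optional speech markup).
struct ProductContent {
    let title: String
    let bodyText: String            // existing web/app copy
    let heroImageURL: URL?          // what today's CMS is built around
    let audioRenditionURL: URL?     // spoken version for voice assistants
    let ssml: String?               // optional speech markup for finer control
}
```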
