While we think of them as the latest thing in tech, conversational interfaces have been around for quite some time. From Cleverbot and SmarterChild to labyrinthine phone trees (“say REPRESENTATIVE”), we have been trying for years to build technology that mimics how we interact with other humans. Recent advances have positioned these tools for substantial growth and brought them back to the foreground of the conversation about the future of technology.
Conversational interfaces for both speech and text have risen to prominence thanks to virtual assistants such as Apple’s Siri and Amazon’s Alexa, while text-based chatbots on messaging platforms such as Slack and Facebook Messenger have seen a huge spike in use. All of this excitement is driving an influx of investment and interest in the market. Major brands like Domino’s are using virtual assistants to help customers order food, and healthcare companies like HealthTap have leveraged Facebook Messenger to connect patients with doctors.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2133740,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,business,","session":"C"}']While these companies have seen success with their platforms, they are also going to run into inevitable roadblocks. Is this technology as close to performing complex tasks as we think? In what contexts will users decide to turn to chatbots in place of other options?
While we don’t have all the answers, we can make some educated guesses about the problems chatbots may face, based on the history of these tools and on how people have interacted with technology in the past.
Roadblock #1: Language is complicated
Modern applications of conversational interfaces are built on advances in AI and tap the ubiquity of connected devices to provide users with shortcuts to complete generally simple requests, such as getting an answer to a straightforward question (“What’s the weather like in Chicago?”) or completing a quick task (“Remind me to call Linda in 30 minutes”).
Like software, languages are built on a system of rules that develops and evolves over time. Unlike computers, however, the people who speak a language are not locked down by those rules; they are free to form new sentences and even coin new words to convey a message. Beyond regional dialects, individuals develop unique patterns of speech, and humans are generally pretty good at understanding each other, even when the syntax strays far from the rules of the language. When Stephen Colbert spoke of “truthiness,” it wasn’t hard to discern what he meant, even though the word isn’t yet in the dictionary. Computers, historically, have had a much harder time with words like that. Machine learning has driven big gains in speech recognition, but we’re not yet at a point where AI can keep up with the rapid evolution of language or understand every individual way of speaking.
In traditional interfaces, accessibility is often tied to the user’s visual-processing or physical abilities. As conversation enters the fold, the system’s ability to understand and respond to the user’s unique patterns of speech, whether spoken or typed, adds a new layer of accessibility. A system that understands only perfectly formed speech will be inaccessible to vast numbers of people, who will find it tedious, at best, to rephrase their requests to suit the computer. Likewise, a system that doesn’t respond in similarly natural language will never form the sort of familiar bond many product makers are after.
Roadblock #2: Trust and understanding
We have seen a number of effective chatbots on the service side, such as those for banking. These days, it’s not uncommon to call your bank and check your balance without ever speaking to a human. Here, we have already begun to see handoffs between bots and humans, bridging the gap between automation and personal attention. These interactions, which are generally objective in nature, work well for AIs. People, in general, seem to trust computers with basic facts and figures, especially numbers (as in the case of an account balance).
When you move past objective insights toward more subjective thinking, things get complicated. Telemedicine, which lets patients see a healthcare provider without leaving the house via remote communication tools, is growing in popularity in the healthcare space. This shift from an in-person to a digital approach works because it’s clear there is still a human on the other end. If you took away the human and replaced them with a bot, would the level of trust stay the same? Almost certainly not. The more objective part, taking a list of symptoms and producing a list of potential diagnoses, might work, much the way people use WebMD today. But getting from a list of potential diagnoses to an actual outcome, as your doctor would, requires complex understanding and judgment. And beyond the difficulty of getting the diagnosis right lies the even greater burden of earning users’ trust in a computer-generated outcome.
Roadblock #3: Ease of use
So far, chatbots that have come to market are not introducing wholly new behaviors, but rather taking new approaches to old problems. You can check your account balance, buy clothes, and order flowers from any web browser, and in many cases it’ll be a pretty familiar process. For chatbots to succeed, they’ll need to provide an experience that is fundamentally better than the familiar alternatives.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2133740,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,business,","session":"C"}']
With chatbots, a better experience will mean simplicity and seamlessness. Waiting while a machine reads out a list of options isn’t fun, especially when you know you could select an option on the web in a fraction of the time. For some tasks, chatbots may never be the answer, because those tasks simply can’t be simplified enough to work through this sort of interface. For others, a combination of product simplification, personalization, and effective machine learning can solve many of these problems. By understanding the user more thoroughly, these systems can provide unique value and streamline tasks to the point that a conversation becomes the simplest interaction.
Looking to the future
As time goes on, language processing and AI in general will continue to improve, opening new opportunities for chatbots to perform complicated tasks with minimal input. While most bots are designed to replicate human processes, AI is a long way from being able to solve many complex human problems, or to do so in a way that’s easier than the alternatives. But we’re much closer now than we were when SmarterChild or Moviefone made waves, and significant investment is fueling continued advancement in this area.