This has been the year of the chatbot. Siri recently opened up to developers. Facebook Messenger bots arrived. The Slack App Store continues to evolve quickly, now home to hundreds of bots.

Chatbots have been in many ways disappointing to me, but they are an important step in the evolution of conversational technology. So what comes next? I think we are at the dawn of the intelligent assistant. Advances in natural language processing, artificial intelligence, and conversational interfaces are finally making truly intelligent assistants possible.

As bots mature into true assistants that can do many things for us, their autonomy will present some interesting challenges. Intelligent assistants need a framework for interacting with humans, making decisions, and understanding what information they have and how it can be used.

At Talla, we’ve been building an intelligent assistant for knowledge workers, and thus we’ve thought a lot about these issues. Internally, we decided to use Isaac Asimov’s Three Laws of Robotics as inspiration for our own three laws of intelligent digital assistants. Here they are:


1. An intelligent assistant must always work to serve you

An assistant isn’t very useful if it creates more problems than it solves. To be useful, an intelligent assistant should anticipate your needs and be ready with information and ideas. It should be able to perform simple tasks that you teach it to do, and it should provide visibility into all the work it does, so you can understand what it’s working on and how it makes decisions.

For instance, in the early days of working with your assistant, the A.I. should be extra transparent with you about its processes and decision-making. And like a human assistant, once the A.I. has built trust and shown clear competence in completing tasks, it can dial back what would otherwise become oversharing.
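To make that idea concrete, here is a minimal sketch of how trust-scaled transparency might work. Everything in it, from the `TrustTracker` class to the thresholds and detail levels, is a hypothetical illustration, not a description of Talla’s product.

```python
# Hypothetical sketch of trust-scaled transparency; the class, thresholds,
# and detail levels are invented for illustration.

from dataclasses import dataclass

@dataclass
class TrustTracker:
    completed: int = 0  # tasks the assistant finished without correction
    corrected: int = 0  # tasks the user had to fix or redo

    @property
    def trust(self) -> float:
        total = self.completed + self.corrected
        return self.completed / total if total else 0.0

def report_detail(tracker: TrustTracker) -> str:
    """Choose how much of its reasoning the assistant surfaces."""
    if tracker.completed < 10 or tracker.trust < 0.5:
        return "full"        # early days: explain every step and decision
    if tracker.trust < 0.9:
        return "summary"     # established: report outcomes, skip the reasoning
    return "exceptions"      # trusted: only surface surprises and failures
```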

This means intelligent assistants don’t need to be programmed with their own goals, because their only goal is to serve you. They shouldn’t be programmed with ulterior motives by the companies that build them, such as advertising to you or quietly funneling your information out through a backdoor.

Intelligent agents are only successful when they make you successful, so that has to be their focus.

2. An intelligent assistant must represent your best interests in all transactions

An intelligent assistant may engage in certain transactions on your behalf, and it should learn your preferences and priorities so it can do this effectively. Sometimes an assistant may need to perform tasks with very little direction (e.g. “schedule a meeting with Fred” or “rent a car for my trip to New York”). Representing your best interests in these transactions requires an assistant to know a lot about you.
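As a toy illustration of what “knowing a lot about you” could look like in practice, here is a hypothetical preference profile that fills in the unstated details of a request like “rent a car for my trip to New York.” The fields, defaults, and function names are all invented for the example.

```python
# Hypothetical sketch: a stored preference profile fills in the unstated
# details of an under-specified request. Every field and default is invented.

from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    car_class: str = "midsize"
    preferred_vendors: list[str] = field(default_factory=lambda: ["VendorA", "VendorB"])
    max_daily_rate: float = 75.0  # hard spending cap, in dollars

def build_rental_request(city: str, prefs: UserPreferences) -> dict:
    """Expand 'rent a car for my trip to <city>' into a concrete request."""
    return {
        "city": city,
        "car_class": prefs.car_class,           # learned once, not asked every time
        "vendors": prefs.preferred_vendors,     # your loyalties, not the vendor's
        "max_daily_rate": prefs.max_daily_rate, # keeps the assistant on your side
    }

print(build_rental_request("New York", UserPreferences()))
```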

In the not-too-distant future, we may even see assistants negotiating with other assistants. When you have two A.I.s talking to each other, each representing the wishes of its owner, you have to be sure your assistant isn’t built to respond to incentives that aren’t best for you. That’s part of the reason it’s better to have an intelligent assistant that can rent a car for you than to use an intelligent rent-a-car assistant built by the rental company. The rental company’s assistant may not have your best interests at heart.

3. An intelligent assistant must adapt and learn

Intelligent assistants may begin their existence with a specific set of skills or features, but it’s important that they also learn from their human counterparts. Over time, developers can teach the bot to perform more and more complicated tasks, making it more useful.

This may be the most difficult piece of building an intelligent assistant, because adapting and learning isn’t how we build software today. Building software that learns means we don’t always know in advance how it will turn out. Learning agents will, by definition, respond to what they are taught, so how do we prevent another Microsoft Tay debacle?

Intelligent assistants will need rules governing what they can learn, and how they can learn it, to make sure they don’t behave in ways that embarrass their owners.
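One minimal, hypothetical way to enforce such rules is a gate that vets everything the assistant is taught before it enters the assistant’s repertoire. The keyword filter and review queue below are invented stand-ins for whatever real content classifiers a production system would use.

```python
# Hypothetical sketch: gate what an assistant is allowed to learn.
# The keyword check and human-review queue are illustrative stand-ins,
# not anyone's actual safeguards.

BLOCKED_TOPICS = {"politics", "religion"}  # assumed policy list

def violates_policy(text: str) -> bool:
    # Stand-in for a real content classifier; here, naive keyword matching.
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def try_learn(phrase: str, learned: list[str], review_queue: list[str]) -> bool:
    """Accept a taught phrase only if it passes the policy gate."""
    if violates_policy(phrase):
        return False                 # refuse to learn it at all
    if len(phrase) > 280:
        review_queue.append(phrase)  # ambiguous case: hold for human review
        return False
    learned.append(phrase)           # safe to add to the assistant's repertoire
    return True
```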

Intelligent assistants will transform how we work over the next five years. We will use them for dozens of simple tasks a day, and they will free us up from doing some of the monotonous parts of our jobs so that we can focus more time on the work we really love to do.

As the ecosystem develops, more questions will certainly arise. Which use case for intelligent assistants do you find most promising? Let us know in the comments.
