Computers have been pretending to have feelings since the first Macintosh greeted users with a smiling face at startup in 1984. Or maybe earlier, if you count Star Wars’ R2-D2 and C-3PO (though it took human actors inside to make those feelings come out). For almost as long as people have interacted with machines, they’ve wanted some reassurance that the machines were listening to them.
Software designers (and special effects designers, too) used sleight of hand to give early computers the impression of feelings. Those “feelings” flowed only one way: No one really thought the early Mac was happy, and it certainly couldn’t tell whether the user was smiling back. Let’s face it, with early computers it didn’t matter. Your program did the same thing whether you were smiling or cursing. Now computer programs are being called on for much more sophisticated tasks and far more elaborate interactions, a trend we’re starting to see with artificial intelligence and bots.
At the moment, many of the bots on the market can act only in a programmatic way. They don’t adapt to what humans need. Bots run through their own scripts and frequently leave humans with a bland experience. To be most helpful, bots have to recognize the feelings of the people they are interacting with and use that intelligence to guide their responses throughout the rest of the conversation.
Showing the outward signs of basic human emotions may have been the first stage in creating an emotional dimension for computers. Recognizing human feelings is the next stage, and it is becoming a key discipline in what is known as affective computing: the science of building machines that can interpret, respond to, and simulate human emotion. You may already have interacted with machines that do this to a very limited degree. If you furiously hit the zero key enough times in a customer service system, for example, you’re likely to trigger the process that transfers you to a human operator.
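That zero-key escape hatch is about as simple as affect detection gets: a hard-coded threshold standing in for a model of frustration. Here is a minimal sketch of the idea in Python; the threshold, function names, and routing labels are all invented for illustration, not drawn from any real IVR product.

```python
# Illustrative only: a toy routing rule in the spirit of the zero-key
# escalation described above. Threshold and labels are assumptions.

ESCALATION_THRESHOLD = 3  # assumed: three "0" presses signal frustration

def route_call(keypresses: list[str]) -> str:
    """Decide where to send a caller based on keypad input so far."""
    if keypresses.count("0") >= ESCALATION_THRESHOLD:
        return "human_operator"  # frustration detected: abandon the script
    return "automated_menu"      # otherwise keep walking the scripted flow

print(route_call(["1", "3", "0", "0", "0"]))  # -> human_operator
```

Real affective systems replace that single counter with signals from voice, text, and facial expression, but the control flow is the same: detect the state, then change the response.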
Just recently, my credit card was lost in the mail on its way to my home. When I called the credit card company to resolve the issue, I wrestled with an interactive voice response (IVR) system for 50 minutes. The IVR system couldn’t recognize my emotion or sense that I disliked the service I was getting. And it certainly didn’t offer a gesture to ease my dissatisfaction (e.g., covering dinner the first time I used the replacement card).
In the near future, however, digital agents will be far more effective at solving these kinds of problems as they take into account a range of both verbal and nonverbal cues — mood, personality, user satisfaction, etc. The idea isn’t just to give the right answer to questions. It’s to make sure that digital agents understand the questions, and that people feel like they are getting the right answer. By capturing and acknowledging a person’s emotions and mood, agents can offer emotional satisfaction too, and that can have a dramatic impact on overall customer satisfaction.
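What might that look like in practice? Below is a hedged sketch of mood-aware response selection in Python. The keyword list is a crude stand-in for a real affect model, and every name here, from estimate_mood to the double-rewards apology, is hypothetical.

```python
# A sketch of mood-aware response selection. The keyword scorer is a
# crude stand-in for a real affect model; all names are hypothetical.

NEGATIVE_CUES = {"lost", "angry", "frustrated", "ridiculous", "waiting"}

def estimate_mood(utterance: str) -> str:
    """Crude affect guess: any negative cue word marks the speaker frustrated."""
    words = set(utterance.lower().split())
    return "frustrated" if words & NEGATIVE_CUES else "neutral"

def respond(answer: str, utterance: str) -> str:
    """Wrap the factually correct answer in mood-appropriate framing."""
    if estimate_mood(utterance) == "frustrated":
        return ("I'm sorry for the trouble. " + answer
                + " As an apology, your next purchase earns double rewards.")
    return answer

print(respond("A replacement card ships today.",
              "I am angry, my card got lost in the mail"))
```

The crude classifier isn’t the point; the point is that the same correct answer lands differently when it acknowledges the caller’s state before delivering it.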
Some may argue that the key value of AI systems is that they can make decisions unhampered by human emotions. But when it comes to human communication, recognizing and showing emotion is very important to the outcome you are trying to achieve. An expanding body of research shows that humans respond to emotional cues by mirroring the expressions they see. This mirroring is a basic part of social interaction.
Some of the most interesting findings in AI research show that humans will treat virtual agents like fellow human beings if they get the right visual and emotional cues. This ability becomes increasingly important as computers take on more emotionally loaded roles. Medical diagnosis, for example, may be a key test for virtual agents: computers can already, in many instances, deliver the right diagnosis. The real test is whether they can deliver it with the right bedside manner.
In Affective Computing, her foundational book on the field, MIT artificial intelligence researcher Rosalind Picard wrote, “There is a time to express emotion, and a time to forbear; a time to sense what others are feeling and a time to ignore feelings. In every time, we need a balance, and this balance is missing in computing.”
Whether AI systems need to “have feelings” may well be a red herring. But recognizing feelings, and expressing the right feelings in return, is a skill that AI agents do require and are quickly learning. Rather than debate whether computers can express feelings, we should examine when they should do so. That requires more than understanding computers. It requires a deeper understanding of humans, and teaching computers to do the same.
For decades, computers have had a great poker face, projecting efficiency and accuracy. Now one of the toughest and most fruitful challenges in AI is teaching them how to offer empathy, too.