
AI and the importance of trust and respect

We are hardwired for judgment. Our path up from the primordial soup has instilled in us a spirit of quick conclusions, especially when it comes to one another. As Harvard Business School psychologist Amy Cuddy puts it, we size each other up along two key questions: Can I respect this person? Can I trust this person? And the old adage about first impressions checks out — we’re prone to answer these two questions quickly upon first meeting, and our initial answers can prove hard to shake.

These questions of competence and warmth are answered intuitively and naturally, but only through an extraordinary feat of logic, emotion, information processing, and judgment involving verbal, nonverbal, and other visual cues. We come to our conclusions quickly, but those conclusions set the terms for how we’ll interact with that individual, and often with others like him or her, from that moment forward (even as we continue to revisit the questions throughout our interactions).

We reserve the full scope of these questions for our human counterparts (“Can I respect this dog?” is not a question we often ask), because we consider each other sufficiently intelligent to be relied on and to be confided in, but also to have intentions and to harbor motives. We need to answer our questions about respect and trust because of the potential value or threat of any individual in our lives, and then we act accordingly.

This is a window into the world of how we judge one another. And it actually tells us a lot about how we’ll judge advanced artificial intelligence.

New experiences require trust and respect

Throughout my years of design work, I’ve learned that a key challenge for AI goes beyond the technical inner workings of the machine itself. We spend a lot of time thinking about how the machine will come to understand and respond to us, but it is just as critical to think about how human beings interact with the machine, rely on it, and understand what it knows. In the pursuit of creating machine intelligence that will be adopted widely to benefit people’s lives, we can learn a lot from how we judge one another through the lens of respect and trust.

This is the bedrock of UX for AI.

Those of us collaborating to design the interfaces of AI systems are traveling in unmapped territory. We’re creating experiences that users have never encountered before. We’re designing interactions that are less “helping people understand machines” — a key pursuit of UX over the past few decades — and more “teaching machines to understand people.” In doing so, we’re mapping out conversations between two types of intelligence, each foreign to the other. The implications and complexities are vast.

So how do we approach that problem? Let’s start with a variation on the questions of respect and trust: How complex is this interaction? How important is it to the user?

Complexity and importance drive expectations

To illustrate that point, let’s return to the world of human-to-human interaction: Your answers to Amy Cuddy’s questions matter differently to you when in conversation with your doctor than when making small talk with your cab driver. In each situation, that person earns your trust and respect differently. We must start by understanding this distinction.

Complexity is intended here to be a judgment of how likely it is that the user understands — at least at a high level — the actions an intelligent system is taking and (relatedly) how sophisticated those actions are. The more complex an interaction is, the more the user has to respect the competence of the machine they’re relying on.

Importance, meanwhile, indicates how critical or valuable this interaction is to the person’s life. What hangs in the balance? What is the goal? In part, importance helps designers understand how the system needs to explain outcomes, how time-critical the interaction is, and what level of transparency is expected.

There is a vast spectrum of AI-powered interactions we experience today, at all levels of complexity and importance. For some, we generally understand the algorithms at work, and the outcomes have little lasting impact on our lives. For others, the algorithms become wildly complex, and the outcomes are life and death.

On the low end of the spectrum, take recommendation engines backed by machine learning algorithms for products, social connections, and films. These are low complexity (no one’s really confused about why Netflix recommends a particular film or Amazon suggests a particular product, especially when it can tell you in simple terms) and low importance (they’re suggestions, not conclusions, and they can be ignored at will).
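To make that concrete, here is a minimal sketch of what a low-stakes, self-explaining recommender might look like. The catalog, similarity scores, and function names below are invented for illustration; they are not any vendor’s actual system.

```python
# Minimal sketch of an explainable, item-similarity recommender.
# The catalog and similarity scores are invented for illustration;
# real services use far richer models and signals.

similar_items = {
    "The Matrix": [("Inception", 0.91), ("Blade Runner", 0.87)],
    "Inception": [("Interstellar", 0.93), ("The Matrix", 0.91)],
}

def recommend(watched_title, top_n=1):
    """Return the most similar titles plus a plain-language reason."""
    candidates = similar_items.get(watched_title, [])
    picks = sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_n]
    return [
        {"title": title, "reason": f"Because you watched {watched_title}"}
        for title, _score in picks
    ]

print(recommend("The Matrix"))
# [{'title': 'Inception', 'reason': 'Because you watched The Matrix'}]
```

The point is not the algorithm itself; it’s that the system’s reasoning fits into a single sentence the user can accept or ignore at will.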

As we move up the complexity/importance spectrum of AI interactions in our daily lives, we pass things like search engines, image management through subject recognition, home security systems, personal money management, and transportation (from self-driving cars to self-flying planes), to arrive at the high-importance, high-complexity world of medical diagnosis.

Medical diagnosis is about as high-complexity and high-importance as it gets. How Watson arrives at the diagnosis of an illness, anything from the flu to cancer, is likely difficult for the average person (and sometimes doctors, too) to fully comprehend, and suffice it to say that that person would care a whole lot about the outcome. Today, this level of importance generally relies on the “human interface” — the doctor who sits between Watson and the patient and brings the art of bedside manner to the interaction — but it’s not difficult to imagine a future where a patient goes straight to Watson for assessment, diagnosis, and a course of treatment. Here, communication is key — that future requires that Watson can clearly explain, in human terms, its conclusion and the full line of reasoning that got it there.

Designers have addressed the UX challenges across the AI interactions above in a variety of ways — from context and hints about reasoning to helpful guidance towards success in various forms. But as we consider the emergence of increasingly sophisticated AI interactions, contextual clues on the margins will no longer suffice. Instead, design will become the process of defining the nature of conversations (in whatever medium) between two intelligent personas: person and machine.

We recognize collectively that effective communication requires the ability to listen, interpret, understand, judge, and respond (in whatever way appropriate: answers, explanations, clarifying questions). It requires intention and mutual understanding. Natural language processing systems pull intent from natural language, while advanced natural language generation — the clear articulation of meaning informed by the machine’s inference and judgments — is becoming more insightful, impactful, and fluid (and, as a reflection of that, is also becoming more prevalent throughout industry).
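As a toy illustration of that listen-interpret-respond loop, here is a hypothetical sketch. The intent labels, keyword rules, and response templates are placeholders invented for this example; production NLP and NLG systems rely on far more sophisticated models.

```python
# Toy sketch of the listen -> interpret -> respond loop.
# Keyword rules and templates stand in for real NLP/NLG models.

INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much money"],
    "transfer_funds": ["transfer", "send money"],
}

RESPONSES = {
    "check_balance": "Your current balance is {balance}.",
    "transfer_funds": "How much would you like to transfer, and to whom?",
    "unknown": "I'm not sure I understood. Could you rephrase that?",
}

def interpret(utterance: str) -> str:
    """Map a user's sentence to an intent label (the 'interpreting' step)."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

def respond(utterance: str, balance: str = "$1,250.00") -> str:
    """Generate a reply from the inferred intent (the 'responding' step)."""
    intent = interpret(utterance)
    return RESPONSES[intent].format(balance=balance)

print(respond("How much money do I have right now?"))
# Your current balance is $1,250.00.
```

Even at this crude level, the structure mirrors the conversation: the machine infers what the person means, then articulates a reply in the person’s own terms.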

AI needs to be the best of humanity

So, the system can speak and explain itself. But what should it say, and how should it say it? Well, the UX practice has long been focused on the science of us in hopes that we can be taught to better understand machines. We now move toward a world where that science of us will inform the behavior of the machine itself.

Each generation’s understanding of AI has been largely influenced by the popular fictional interpretations of that time. But for Generation Z, AI has leaped entirely across the boundary of science fiction to become woven into our daily lives — potentially affecting every interaction across any number of interfaces. And the generations before are grappling with cognitive dissonance when considering the promise and peril of intelligent machines becoming a reality. In short, AI has arrived, and it’s going to need to be human in ways not even Arthur C. Clarke could have imagined.

If it sounds like we are putting too much pressure on ourselves and demanding too much of technology — that we expect it to be almost human (and then some) — consider all we’re asking of AI. We’re talking about having AI help us make business decisions, manage our money, search and shop for products and groceries, fly our planes and drive our cars, and even diagnose potentially life-altering medical conditions. Error is rarely — if ever — forgivable.

The thing is, we don’t want AI to be almost human. We want AI to be very much like the very best humans we know, the ones we admire the most, the ones we believe in most deeply. If that’s where we’re headed, respect and trust are everything.
