One of the phrases essential to understanding AI in 2019 is “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example among tech giants is a company that promotes “AI for good” initiatives with one hand while selling surveillance tech to governments and corporate customers with the other.
Accusations of ethics washing have been lobbed at the biggest AI companies in the world, as well as startups. The most high-profile example this year may have been Google’s external AI ethics panel, which devolved into a PR nightmare and was disbanded after about a week.
Ethics washing is a problem not just because it’s inauthentic or sends the world a mixed message. It also distracts from whether actual steps are being taken toward building a world where professional standards demand AI that works just as well for women, people of color, and young people as it does for the white men who make up the majority of those building AI systems.
These trends raise the question: Where does ethics washing come from? The phenomenon does not always appear to be rooted in disingenuous PR; it often stems from a series of missteps or an unwillingness to tackle ethical challenges.
Fast Forward Labs, the applied machine learning research operation founded by Cloudera general manager of machine learning Hilary Mason, has tracked the ethical implications of AI deployment for years. Onstage at VentureBeat’s Transform conference, Mason talked about what she believes leads a business to practice ethics washing.
“That instinct to ethics-wash comes from where people are just trying to handle the risk in as minimal a way as possible, and also because doing this right is hard, and it requires embracing a grey zone where you might make mistakes, and owning those mistakes can be expensive, but it is probably the right way to go,” she said.
Most people in the AI community actually want to build great products, Mason said, “but because of a lot of the backlash, and attention that’s been on the lack of ethical behavior for many tech companies who are also the leaders in this field, a lot of companies are waking up and saying ‘Wow, there’s actually reputational risk here as well as an actual product risk. How do I get rid of the risk?’ And they don’t think it through fully, and think ‘OK, I want to handle the reputational risk’ instead of ‘I want to build great products that actually make people’s lives better.'”
Mason wasn’t the only person at VentureBeat’s two-day Transform conference to bring up ethics washing or share how their company is trying to responsibly design and deploy AI systems. Ethical AI leaders at Accenture, Facebook, Google, Microsoft, and Salesforce shared their thoughts on how to deploy AI systems that work for everyone and avoid ethics washing.
Welcome ‘constructive dissent’ and uncomfortable conversations
Accenture responsible AI lead Rumman Chowdhury said that if businesses want their employees to raise doubts or concerns about the AI systems their company makes, they must allow for what she refers to as “constructive dissent.”
“Successful governance of AI systems need[s] to allow ‘constructive dissent’ — that is, a culture where individuals, from the bottom up, are empowered to speak and protected if they do so. It is self-defeating to create rules of ethical use without the institutional incentives and protections for workers engaged in these projects to speak up,” Chowdhury said.
Wise enterprises will welcome conversations that confront problems head-on rather than shying away from or ignoring issues simply because the discussion is uncomfortable.
“It’s not just been building the technical products, it’s actually been [about] how do you govern this technology. And often when you think about creating something that’s more inclusive, that welcomes diversity — that actually comes from having a culture that welcomes these conversations,” she said.
Opportunity Hub CEO Rodney Sampson, who moderated a talk with Chowdhury, also stressed the need to be open to uncomfortable conversations.
AI community stakeholders cannot, for example, address a lack of women, Latinx, and African American people in the industry without naming the problem.
To help businesses get started, Accenture created an AI governance guidebook that shows how to build a company culture that sets the tone from the top and, for example, welcomes reports from employees even if they turn out to be false alarms.
Start an inclusion initiative inside your company
Lade Obamehinti is Facebook’s AR/VR business lead and also heads the company’s Inclusive AI initiative. She found herself in that role about a year ago, after discovering that the Smart Camera AI on Facebook’s Portal devices framed and composed video calls much better for her white colleagues than it did for her. It’s a story she told onstage at Facebook’s F8 developer conference earlier this year.
To launch such an initiative, Obamehinti suggests starting by defining the problem, because “you can’t fix what you don’t understand.” She also advises against trying to solve every problem at once: in its first year, Inclusive AI limited its work to computer vision use cases. Natural language is next.
Finally, her advice is to keep trying.
“You can have roundtable after roundtable about this topic without touching the product at all, so don’t get stuck on trying to have this perfect solution or framework from the get-go,” she said. “It’s really a matter of having concepts, iterating, and trial and error, and that’s what it’s going to take to build lasting frameworks.”
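Obamehinti’s “define the problem” step translates directly into measurement: before fixing anything, check how a model performs for each group separately. Below is a minimal sketch of that idea in Python, assuming per-example group labels are available; the skin-tone group names, toy data, and trivial predictor are hypothetical stand-ins for illustration, not Facebook’s actual pipeline.

```python
# A minimal sketch of the "define the problem" step: measure a vision
# model's accuracy per demographic group before iterating on fixes.
# Group labels, data, and the predictor below are hypothetical.
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """examples: iterable of (image, true_label, group) tuples.
    predict: callable mapping an image to a predicted label."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, true_label, group in examples:
        totals[group] += 1
        if predict(image) == true_label:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy data tagged with illustrative skin-tone groups, scored against
# a naive predictor that always answers "face".
toy_examples = [
    ("img1", "face", "type I-II"),
    ("img2", "face", "type I-II"),
    ("img3", "face", "type V-VI"),
    ("img4", "no_face", "type V-VI"),
]
scores = accuracy_by_group(toy_examples, lambda img: "face")
for group, acc in sorted(scores.items()):
    print(f"{group}: {acc:.0%}")
# A large gap between groups is the signal that, as Obamehinti found,
# the product has a problem worth naming before trying to solve it.
```

A per-group breakdown like this is what turns an anecdote (“the camera works worse for me”) into a defined, trackable problem.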
Include affected parties
Make sure the affected parties are in the room when designing AI systems. A diversity of opinions is no fail-safe, but when people from diverse backgrounds feel empowered to share their perspectives on risks and opportunities, the group can improve products and make better decisions than a homogeneous one would.
A clear recent example, of course, is Obamehinti’s experience sounding the alarm about Facebook’s camera working best on people with light skin tones. “If you weren’t in that room, they would have never known they had a problem,” Chowdhury told Obamehinti.
In a separate panel with Microsoft and Salesforce employees, Google senior research scientist Margaret Mitchell said the need for a diverse range of perspectives should shape hiring practices.
“I think this is really where the diversity and inclusion starts to come in, when you’re thinking about human-centric design and figuring out your values,” Mitchell said. “What really matters there is what the diverse perspectives are at the table from day one, making the decision not to use this data set because ‘I don’t see people who look like me,’ you know, these kinds of things there. So this is really where I think diversity and inclusion really strongly intersects with the ethics space, because that’s the different perspective.”
Don’t ask for permission to get started
At the start of a panel conversation about how to responsibly deploy AI, Microsoft general manager of AI programs Tim O’Brien described how his interest in fairness, accountability, and transparency (FAT) research grew in recent years and how he left an influential role to dive deeper into ethical AI.
O’Brien suggests anyone with a genuine interest in this space should just get started. “If you have a passion for this and you think you can contribute, don’t ask for permission to engage and don’t wait for someone to invite you. Just do it, regardless of what your role is and where you are in the company,” he said. “Ethics is one of these weird domains in which being a pest, banging on doors, and being an irritant is acceptable.”
Encourage leadership from the top
The need for leadership is often cited as a prerequisite for businesses beginning their first AI projects. That’s why Microsoft and Landing.ai created training courses earlier this year specifically for business executives.
A number of Transform speakers cited top-down leadership, or buy-in from company executives, as an essential element of success, including O’Brien, who talked about Satya Nadella’s concept of collective responsibility.
As Mason noted, deploying AI responsibly can be hard work. Moving beyond ethics as mere reputational risk management toward genuine progress may well require support from the top.
Ultimately, senior leadership will be vital, O’Brien said, because businesses trying to make ethical systems still have to adhere to corporate governance that places power in the hands of senior leadership, shareholders, and the CEO. O’Brien noted that shareholders and investors would likely deem it unacceptable to hear a CEO say an ethics board made the final decision about when to deploy an AI system.
Share your shortcomings
Mitchell wants more companies that use AI to share how things went wrong. “One call to action would be to share with the world more of the risks that you’ve taken, and work with this communication. So transparency is one of the big issues here, and no one wants to go first. So the more open we can be about the kinds of things that we’re seeing, that we’re concerned about, and that we’ve mitigated, the better we can all resolve this ethical AI space together,” Mitchell said.
Salesforce architect of ethical AI practice Kathy Baxter agreed with Mitchell, and added that companies should consider working with like-minded organizations.
“High tide raises all boats, and so coming together, sharing with each other … what’s working, what’s not working, and supporting each other,” she said. “It’s easy to be critical of each other, to be very divisive, and accuse one another of virtue signaling or ethics washing, but if we support each other and come together, I think we’ll all be much stronger, and society will benefit as a result of that, and we can all move in that direction.”
Look at things from a developer’s point of view
O’Brien thinks the ethical AI cause can be helped by doing more to understand the perspective of developers tasked with deploying AI. A 2018 Stack Overflow survey of 100,000 developers asked who is ultimately responsible for code that does something unethical: a majority said upper management should bear the brunt of the blame, while about 22% pointed to the person who came up with the idea and 19% to the developer who wrote the code. “A lot of the technical people in our industry have never been asked to think about this — not at university, not in their careers — so I think we just need to be respectful of where they’re starting from and meet them where they are,” he said.
O’Brien endorsed checklists as a way to help developers ensure ethical AI deployment. “Checklists, for example, get a bad rap, or they get kicked around on Twitter all the time, but I’m actually in favor of them,” he said.
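To make the checklist idea concrete, here is a minimal sketch of how one could gate a release pipeline; the checklist items, sign-off format, and names below are illustrative assumptions, not Microsoft’s actual review process.

```python
# A minimal sketch of a pre-deployment ethics checklist expressed as
# code so it can block a release pipeline. Items are hypothetical.
REQUIRED_CHECKS = [
    "intended use and known misuse cases documented",
    "training data provenance reviewed",
    "performance measured across demographic groups",
    "privacy and security review completed",
    "rollback / incident-response plan in place",
]

def release_gate(sign_offs):
    """sign_offs: dict mapping checklist item -> reviewer name (or None).
    Raises if any required item lacks a named reviewer."""
    missing = [item for item in REQUIRED_CHECKS if not sign_offs.get(item)]
    if missing:
        raise RuntimeError("Blocked: unsigned checklist items: " + "; ".join(missing))
    return True

# Usage: the deploy job fails loudly until every item has a reviewer.
sign_offs = {item: "reviewer@example.com" for item in REQUIRED_CHECKS}
release_gate(sign_offs)  # passes; remove an entry and the gate blocks
```

The design point is the one O’Brien makes: a checklist is unglamorous, but wiring it into the process means no one can skip the conversation.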
In March, Microsoft executive vice president Harry Shum said the company plans to add an ethics review for each of its products, alongside reviews for things like privacy and security; however, he provided no launch date.
Be prepared for gray area decision-making
Ethics doesn’t tell you whether a decision is right or wrong, Mitchell said; rather, it gives you the tools to understand and weigh different values. An ethical framework can provide guardrails, but ultimately it comes down to how a company wants to be defined.
“Once you start to actually dig into ethics, you realize that it’s more about understanding different ways of thinking about and looking at the problems and weighing your priorities. So you can have a theological perspective, you can have a virtue perspective, you can have a utilitarian perspective; these are also schools of thought of what is worth prioritizing,” Mitchell said.
Avoid creating new things wherever possible
Baxter said her company surveyed members of the AI ethics field and found one easy tip: Avoid creating programs from scratch wherever possible. Instead, build on what already exists.
“In the case of Salesforce, we already had a machine learning for PMs class, and so [I] reached out to the instructor and said, ‘Hey, can I add in ethics into that course?’ And so now that process [is] caught up every single month,” Baxter said.
Understand that ethics has few clear metrics
Embracing AI ethics means building AI models as well as possible, but it also means accepting an impact that isn’t always directly measurable the way, say, a business’ bottom line or return on investment is.
“Companies certainly have the support of IT systems, and you are figuring out how well you’ve done based on quarters. But with something like ethics, you know you’ve succeeded when there’s not a headline,” Baxter said, noting that success requires “support high up in management that understands the difficulty in measurements, and the long-term investment in technology and IP [required].”
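Where quantitative proxies for “not making headlines” do exist, they tend to be fairness gaps rather than a single quarterly KPI. Here is a minimal sketch of one such proxy, assuming binary model outputs and a parallel list of group labels; the data and any threshold for an “acceptable” gap are hypothetical.

```python
# A minimal sketch of a fairness-gap proxy: the spread in
# positive-prediction rates across groups (demographic parity gap).
# Inputs below are toy data for illustration.
def demographic_parity_gap(predictions, groups):
    """predictions: list of 0/1 model outputs; groups: parallel list of
    group labels. Returns (max gap in positive rates, per-group rates)."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    predictions=[1, 1, 0, 1, 0, 0, 0, 1],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates, f"gap={gap:.2f}")
# This yields a number, but no bright line: deciding how small a gap
# counts as "success" is exactly the judgment call Baxter says needs
# long-term support from management.
```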