
The tangled relationship between AI and human rights

It was a pleasant 21 degrees in New York when computers defeated humanity — or so many people thought.

That Sunday in May 1997, Garry Kasparov, a chess prodigy, grandmaster, and reigning world champion, was beaten by Deep Blue, a rather unassuming black rectangular computer developed by IBM. In the popular imagination, it seemed like humanity had crossed a threshold — a machine had defeated one of the most intelligent people on the planet at one of the most intellectually challenging games we know. The age of AI was upon us.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2357301,"post_type":"guest","post_chan":"none","tags":"category-news","ai":true,"category":"none","all_categories":"ai,big-data,","session":"A"}']

Or perhaps not.

What is artificial intelligence?

While Deep Blue was certainly an impressive piece of technology, it was no more than a supercharged calculating machine. It had no intelligence to speak of beyond its ability to play chess. It was very, very good at playing chess but absolutely hopeless at anything else. We’ve had technology like this for decades: If you went to school in the ’80s or ’90s, you probably had a pocket calculator, which, in its own way, was a very rudimentary form of artificial intelligence.


Much more sophisticated AIs than a pocket calculator surround us today — think Siri, Google searches, and Nest — but they still have a very narrow range of capabilities. If you look beyond harmless consumer applications of AI like these, however, you will find that the increase in the power of AI applications has been explosive, with vast consequences for the world. This evolution of AI applications has been enabled by the technological revolution that preceded it: data.

The tremendous amounts of data generated by the internet, mobile phones, connected systems, and sensors have supercharged a certain type of AI technology called deep learning. Machines deploy deep learning to analyze very large amounts of data to look for patterns and find meaning. Researchers “train” the machine on a large dataset; the more data it has, the more refined its results are.
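As a rough illustration of that last point, here is a toy example in plain Python. It is not deep learning, just an ordinary least-squares fit with invented numbers, but it shows the same general idea: a model's estimate of a hidden pattern tends to become more refined as the amount of training data grows.

```python
# Toy sketch: a model's estimate of a hidden pattern (here, a slope of 2.0)
# generally gets closer to the truth as the training set grows.
# All numbers are invented; this is least squares, not deep learning.
import random

TRUE_SLOPE = 2.0  # the "pattern" hidden in the noisy data

def make_data(n, noise=1.0, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(0, 10) for _ in range(n)]
    ys = [TRUE_SLOPE * x + rng.gauss(0, noise) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    # Ordinary least squares through the origin: slope = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

for n in (10, 100, 10_000):
    xs, ys = make_data(n)
    print(f"{n:>6} examples -> estimated slope {fit_slope(xs, ys):.3f} (true {TRUE_SLOPE})")
```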

By analyzing very large amounts of health records with AI technology, doctors can improve diagnostics; with data from cars, phones, and urban sensors, cities can optimize traffic, reducing pollution and travel times; by analyzing demand on its servers and changes in temperatures, a company can save millions of dollars on cooling and electricity in its data centers, simultaneously reducing its costs and environmental impact; by analyzing satellite data, countries can anticipate crop shortages and predict deforestation. This is all possible because of computers’ ability to process very large datasets and make sense of them, a task well beyond human ability.

The good, the bad, and the ambiguous

Like any technology, AI has both good and harmful applications. AI that helps reduce power consumption in a data center will have a positive social impact. An autonomous swarm of armed military drones is unlikely to help humanity much (even if you don’t think autonomous weapons are an inherently bad idea).

In between these two lie most AI applications. They can have positive or negative impacts on human rights, depending on how they are developed and used. Let’s look at a real-life example: predictive policing.

This technology already exists in some countries. Various U.S. and U.K. police forces use software to predict when and where citizens might commit crimes so they can allocate more resources to crime hotspots. In theory, that’s a good idea. The police, like all organizations, have to make choices and prioritize the use of their resources — and having police officers ready to respond where a crime is likely to occur is certainly better than having them patrolling the other end of the city.

[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2357301,"post_type":"guest","post_chan":"none","tags":"category-news","ai":true,"category":"none","all_categories":"ai,big-data,","session":"A"}']

Or is it?

We may not have Precogs like in Minority Report, but predictive policing is already here. The problem starts even before a government entity uses crime prediction software — it starts with an issue still plaguing the data revolution: bias.

To understand the problem bias presents, consider the case of a city that wants to introduce predictive software to help tackle crime rates and make better use of its frontline police force. The police force contracts a tech company that has developed predictive policing software. The company asks the police for 15 years of data on arrests and crime, classified by type of crime, date and time, location, and conviction rates, among other related data.

The company uses the data and the algorithm starts churning out predictions. It directs police forces to existing crime hotspots, but because it does so more systematically and predicts the timing of crime better, it leads to higher rates of crime detection, arrests, and convictions. The first pilot is seemingly successful and political leaders are pleased.

[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2357301,"post_type":"guest","post_chan":"none","tags":"category-news","ai":true,"category":"none","all_categories":"ai,big-data,","session":"A"}']

The data problem

But here’s the catch: The city has a history of over-policing certain ethnic and religious minorities and inner-city areas. Politicians and police leadership have said that tackling the problem is a priority and, in fact, the move to predictive policing was seen as a way of removing human bias — after all, algorithms do not have feelings, set ideas, or prejudices of their own.

But the algorithms used biased data. The areas where more crime was recorded in the past also happened to be the parts of the city with a higher concentration of ethnic and religious minorities. The algorithms started predicting more crime in these areas, dispatching more frontline police officers, who made more arrests. The new data was fed back into the algorithm, reinforcing its decision-making process. Its first predictions turned out to be accurate, as indeed there was crime to be stopped, which made it refine its focus, continuing to send a disproportionate share of police resources to these parts of the city. The resulting higher crime detection, arrests, and convictions in fact mask increasingly discriminatory practices.

The result is a feedback loop that the city can only escape if it corrects the historical and ongoing bias.
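To make the dynamic concrete, here is a toy simulation in Python with invented numbers: two districts with identical true crime rates, but a historical arrest record skewed toward district A. Because patrols follow the recorded arrests, and arrests can only be made where patrols are, the skewed record keeps confirming itself and never converges on the truth.

```python
# Toy feedback-loop simulation (all numbers invented, purely illustrative).
# Both districts have the SAME true crime rate, but district A starts with
# more recorded arrests because it was historically over-policed.
TRUE_CRIME_RATE = 0.05                    # identical in both districts
recorded = {"A": 300.0, "B": 100.0}       # biased historical arrest record
PATROLS = 100

for year in range(1, 6):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to recorded arrests...
    allocation = {d: PATROLS * recorded[d] / total for d in recorded}
    for d in recorded:
        # ...and detections scale with patrol presence, not with actual crime.
        recorded[d] += allocation[d] * TRUE_CRIME_RATE * 20
    share_a = 100 * recorded["A"] / sum(recorded.values())
    print(f"Year {year}: A has {share_a:.0f}% of recorded arrests "
          f"(true share of crime: 50%)")
```

The recorded data stays locked at roughly a 75/25 split even though the underlying crime is split 50/50: on its own data, the system looks accurate, which is exactly why the bias is so hard to see from the inside.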

Such data bias and discriminatory automated decision-making problems can arise in numerous other current and potential AI applications. To name a few: decisions on health insurance coverage, mortgage, and loan applications; shortlisting for jobs; student admissions; and parole and sentencing. The effect of discriminatory AI on human rights can be wide-ranging and devastating.

[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":2357301,"post_type":"guest","post_chan":"none","tags":"category-news","ai":true,"category":"none","all_categories":"ai,big-data,","session":"A"}']

The transparency problem

Another major problem with the use of AI in automated decision-making is the lack of transparency. This is because deep learning, which has exploded in importance in the last few years, often uses neural networks, the digital equivalent of a brain. In neural networks, millions of computational units are stacked in dozens of layers that process large datasets and come up with predictions and decisions. They are, by their nature, opaque: It’s not possible to pinpoint how the AI came up with a specific output.
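For a sense of why, here is a minimal forward pass through a tiny two-layer network in plain Python, with made-up weights. Scale this up to millions of learned weights across dozens of layers and there is no individual parameter you can point to as "the reason" for a given output.

```python
# A minimal forward pass through a tiny two-layer network (toy weights).
# Real deep-learning models stack many such layers with millions of learned
# weights; no single weight has a human-readable meaning, which is why
# tracing *why* a particular output was produced is so hard.
import math

def layer(inputs, weights, biases):
    # Each unit outputs a weighted sum of all inputs passed through a
    # nonlinearity (here, the logistic sigmoid).
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

x = [0.2, 0.7, 0.1]                                        # some input features
hidden = layer(x, [[0.5, -1.2, 0.3], [0.8, 0.1, -0.4]], [0.0, 0.1])
output = layer(hidden, [[1.5, -0.7]], [-0.2])
print(output)  # a single score; the path from x to this number is opaque
```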

This is a serious problem for accountability. If you can’t figure out why a machine made a mistake, you can’t correct it. If you can’t audit the machine’s decisions, you can’t find problematic outcomes that would otherwise remain hidden. While financial audits help reduce accounting errors and financial misconduct, you can’t do the equivalent with a deep-learning AI. It’s a black box.

On the upside, many AI scientists, companies, and policymakers take this problem seriously, and there are various attempts to develop explainable AI. This is how DARPA, the U.S. Defense Advanced Research Projects Agency, describes the aims of its explainable AI program:

- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and

- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

Human rights solutions

Here, I outline a few potential ways to tackle some of the human rights challenges that the use of AI poses.

[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":2357301,"post_type":"guest","post_chan":"none","tags":"category-news","ai":true,"category":"none","all_categories":"ai,big-data,","session":"A"}']

1. Correcting for bias

If we know that data we plan to feed into an AI system carries a risk of bias, then we should first correct for it. The first part is recognizing there is bias — this is often not something that people responsible for the data will readily admit, either because they don’t believe it is biased or because it would be embarrassing to admit it.

Either way, correcting for bias should not be optional. It needs to be mandatory in any AI system that affects individual rights and relies on data from individuals. The first step is testing the datasets for bias: Racial, religious, gender, and other common biases should be routinely tested for, as well as more use-specific biases. The second step is to correct for the bias — this can be complicated and time-consuming, but it is necessary to prevent AI systems from becoming enablers of discriminatory practices.
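One simple check among many is to compare favorable-outcome rates across groups and flag large gaps, for example using the "four-fifths" disparate-impact rule of thumb. The sketch below is illustrative only; the data, group labels, and threshold are assumptions, and real bias testing goes well beyond a single statistic.

```python
# A minimal bias check: compare favorable-outcome rates across groups and
# flag the dataset if the worst-off group's rate falls below 80% of the
# best-off group's rate (the "four-fifths" rule of thumb).
from collections import defaultdict

def positive_rates(records):
    """records: list of (group, outcome) pairs, outcome True = favorable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    rates = positive_rates(records)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Hypothetical data: (group label, was the recorded outcome favorable?)
sample = [("group_1", True)] * 70 + [("group_1", False)] * 30 \
       + [("group_2", True)] * 45 + [("group_2", False)] * 55

rates, ratio, passes = disparate_impact(sample)
print(rates, f"ratio={ratio:.2f}", "PASS" if passes else "FLAG FOR REVIEW")
```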

2. Assigning accountability

We should expect the same accountability from an institution when it employs AI as when it employs a human worker. With AI, the transparency problem means that companies cannot interrogate automated decision-making the way they would interrogate a human employee. This should not affect institutional accountability: A company or public institution using AI that makes discriminatory decisions affecting individual rights should be responsible for remedying any harm. It should regularly audit decisions for signs of discriminatory behavior.
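A practical precondition for such audits is keeping a reviewable record of every automated decision. The sketch below assumes a simple append-only log; the field names and helper are hypothetical, not any particular vendor's API.

```python
# Hypothetical decision log: record what the system saw, what it decided,
# and which model version decided it, so later audits (like the bias check
# above) have something to replay.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str
    inputs: dict          # the features the system actually saw
    output: str           # e.g. "approve" / "deny"
    timestamp: str

def log_decision(decision_id, model_version, inputs, output, logfile="decisions.log"):
    record = DecisionRecord(
        decision_id=decision_id,
        model_version=model_version,
        inputs=inputs,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(logfile, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

log_decision("app-001", "model-2018-03", {"income": 42000, "region": "north"}, "deny")
```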

With AI as with other digital technologies, developers also have a responsibility to respect human rights — they must ensure their technology is not inherently discriminatory and that they do not sell it to users who could use it for purposes that are discriminatory or otherwise harmful to human rights.

[aditude-amp id="medium5" targeting='{"env":"staging","page_type":"article","post_id":2357301,"post_type":"guest","post_chan":"none","tags":"category-news","ai":true,"category":"none","all_categories":"ai,big-data,","session":"A"}']

3. Avoiding unexplainable AI in high-impact industries

This applies to AI applications that may have a direct impact on the rights of individuals and that are not inherently harmful. Whether it’s predicting crime or approving mortgages, if decisions are made by an AI system that doesn’t have an effective means of accountability (as in point 2 above), it shouldn’t be used. Is this too radical? Hardly; we would not accept accounting systems that do not allow auditing or judicial systems that don’t allow for appeals or judicial reviews. Transparency and accountability are essential for respecting human rights.

I am not advocating that all AI applications should pass this test. Many commercial and noncommercial AI applications will not need to because their impact on individual rights is either remote or negligible.

Many of the issues I highlighted — data bias, accountability, and others — are similarly applicable to automated systems that don’t use deep learning AI, with the key difference that these systems are, at least theoretically, more transparent.

These are neither comprehensive nor tested solutions to the problems that the use of AI poses for the protection of human rights; my aim is to highlight some of the challenges and possible solutions — and start a conversation.

This article originally appeared on Medium. Copyright 2018.

Sherif Elsayed-Ali is the director of global issues at Amnesty International.
