Having established that we are light years away from full artificial intelligence, but that A.I. is nonetheless an inevitable part of our present and future, now what? What should we concern ourselves with next?
Like any transformative technology, A.I. carries risks and presents challenges along several dimensions, and the most complex and urgent among them is liability and accountability.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2065273,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,","session":"C"}']While science fiction has focused on the existential threat of A.I. to humans, researchers at Google’s parent company, Alphabet, and those from Amazon, Facebook, IBM, and Microsoft are teaming up to focus on the ethical challenges that A.I. will bring.
Addressing machine accountability
People with knowledge of A.I. are no longer worried about doomsday scenarios in which machines take over the world. Instead, they are shifting their focus to the moral aspect of letting computers make certain decisions and to the kind of accountability, whether machine or human, that follows.
These issues are more real and pressing.
These are the consequences of humans relying so heavily on machines, and making them so smart, that it is easy to create and overlook a bug somewhere that causes a chain reaction down the line. A fine line separates a smart machine from a stupid one, and when you give the latter a lot of responsibility, it can easily make disastrous mistakes.
Automating human-machine interaction
When we interact with computers, the machines always ask for our preferences and settings before carrying out a command on our behalf. Right now, A.I. does not and cannot have thoughts of its own; it merely operates on the logic and rules we teach it. However, we are moving away from that and into autopilot mode.
Computers have become more autonomous and less dependent on human handlers. Automated machines are on the rise: self-driving vehicles, technology that writes news stories and performs surgery, and even automated combat systems on the battlefield.
“We’re just at the verge of where the machines may take off and go much further than even we humans could make them go,” said Apple cofounder Steve Wozniak at an innovation summit recently. “It is a new revolution in my mind, the revolution of artificial intelligence, machines that will learn, that will be able to do things much better than we know how to tell them.”
The most controversial issue is how to make A.I. answer for the decisions it makes. Accountability presupposes a responsible party, and the tricky part is that a computer cannot take responsibility.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2065273,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,","session":"C"}']
Helping machines to take responsibility
When it comes to self-driving vehicles and unmanned aircraft systems, A.I. has the potential to save thousands of lives and transform global transportation, logistics systems, and countless industries over the coming decades.
On the flip side, a self-driving car is also a high-speed, heavy object with the power to harm its users and the people around it.
This means that aside from improving the technology and reducing the risk of accidents, we also need to set rules and regulations. For example, drone regulations require the aircraft to stay within the operator’s line of sight and in close physical proximity, with a human able to take control at all times, and even then we are not allowed to fly drones over crowded areas.
Similarly, for self-driving cars, a licensed driver is required to be in the car at all times, hands on the wheel, so that they, rather than the machine or its manufacturer, can take responsibility should something bad happen.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2065273,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,","session":"C"}']
Forcing a human to remain in control at all times is the only way around this thorny issue at the moment.
This could not be more essential when it comes to plans to weaponize artificial intelligence. The U.S. military already uses a host of robotic systems on the battlefield, from reconnaissance and attack drones to bomb-disposal robots. However, these are all remotely piloted systems, meaning a human retains a high level of control over the machine’s actions at all times.
Spurring public dialogue
Are we ready? Do we have the answers to the legal and ethical challenges that will definitely arise from the increasing integration of A.I. into our daily lives? Are we even asking the right questions?
Companies, tech giants and small startups alike, are acquiring or developing their own A.I. technologies in hopes of catching this rising wave, which is why we need to make sure all of these ethical, legal, and technical aspects are carefully considered.
[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":2065273,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"bots,","session":"C"}']
“Now is the time to consider the design, ethical, and policy challenges that A.I. technologies raise,” said Barbara Grosz, Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences, in an interview with Robohub. “If we tackle these issues now and take them seriously, we will have systems that are better designed in the future and more appropriate policies to guide their use.”
The One Hundred Year Study on Artificial Intelligence is an ongoing project hosted by Stanford University to inform debate and provide guidance on the ethical development of smart software, sensors, and machines. Every five years for the next 100 years, the AI100 project will release a report that evaluates the status of A.I. technologies and their potential impact on the world.
The report doesn’t offer solutions; rather, it is intended to start a conversation among scientists, ethicists, policymakers, industry leaders, and the general public. And perhaps initiating a century-long conversation about how A.I.-enhanced technologies might be shaped to improve life and society would be our first step toward creating a morally conscious form of technology.