The big-budget Channel 4/AMC series Humans is with us. It presents a world where "synths," eerily lifelike robots, become part of the household.
Humans isn't just the new Twilight or Divergent, offering the storytelling freedom to shoot 'em, bite 'em, and kiss 'em in an improbable apocalyptic fantasy world like Terminator. Instead, it raises some hugely important questions about artificial intelligence (AI) and our relationship with it.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":1747984,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"A"}']Of course the opportunities afforded by AI haven’t escaped the entertainment makers: Her, Transcendence, Ex-Machina, and Chappie — and before that, iRobot, Terminator, 2001: A Space Odyssey, and many others. But concurrent to the debut of this new series, the world’s A-list brains are expressing their concern over artificial intelligence — Bill Gates, Stephen Hawking, and Sir Tim Berners-Lee amongst them. Meanwhile, Nokia is spearheading a global debate called #maketechhuman to explore the relationship between technology and humankind. But why has this all blown up now?
The answer is that the theoretical questions about AI, which have been around for decades and were first addressed by Isaac Asimov's Three Laws of Robotics in the 1940s, are suddenly very relevant again. Let's use an example: Today's aircraft have enough artificial intelligence to prevent the pilot from putting the aircraft in danger, but that intelligence can be overruled by human beings. As the AI gets better, should that always be the case?
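To make the dilemma concrete, here is a toy sketch in Python of an envelope-protection check with a human override. Every name and limit in it is invented for illustration; this is not how any real flight-control system is built. The open question sits in one flag: as the AI improves, should pilot_override remain?

```python
# Hypothetical, illustrative limits -- not real flight-envelope values.
MAX_BANK_DEG = 67
MAX_PITCH_UP_DEG = 30

def apply_envelope_protection(commanded_bank, commanded_pitch, pilot_override):
    """Clamp pilot inputs to safe limits unless the pilot overrides.

    Today's convention: the human has the final word. The article's
    question is whether this branch should exist once the AI is
    demonstrably better at judging danger than the pilot.
    """
    if pilot_override:
        # Human authority wins, even when the command exceeds the envelope.
        return commanded_bank, commanded_pitch
    safe_bank = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, commanded_bank))
    safe_pitch = min(MAX_PITCH_UP_DEG, commanded_pitch)
    return safe_bank, safe_pitch

print(apply_envelope_protection(80, 35, pilot_override=False))  # (67, 30)
print(apply_envelope_protection(80, 35, pilot_override=True))   # (80, 35)
```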
And this is just the beginning. Can you imagine the nightmare of a litigious society where companies are sued because they didn't put enough AI into a machine to prevent harm to humans? Can we foresee the day when machines can completely override humans and make decisions against their will?
This is not theoretical or big-picture thinking; we are already ascending the thin end of the wedge. AI doesn't have to look like a robot: machines, in the form of algorithms, are already making decisions that affect people and shape society every day.
Upworthy Chief Executive Eli Pariser conducted an experiment using Google's search engine. He asked people with very different interests, and therefore different search histories, to search for "Egypt." Those whose profiles were more leisure-oriented received links to holidays; those with more of a current-affairs bias learned of the Middle East's bloodiest coup. This is a learning machine on its way to creating cultural ghettos.
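The mechanics are easy to sketch. The following toy Python example (invented data and scoring, not Google's actual ranking) shows how profile-based ranking produces the filter bubble Pariser observed: the same query returns differently ordered results depending on accumulated history.

```python
# Invented result set for the query "Egypt" -- illustration only.
RESULTS_FOR_EGYPT = [
    {"title": "Cheap holidays in Egypt", "topic": "travel"},
    {"title": "Egypt: political crisis deepens", "topic": "news"},
    {"title": "Nile cruise deals", "topic": "travel"},
    {"title": "Analysis: unrest in Cairo", "topic": "news"},
]

def personalized_rank(results, history):
    """Rank results by how often their topic appears in the user's history."""
    def affinity(result):
        return sum(1 for past_topic in history if past_topic == result["topic"])
    return sorted(results, key=affinity, reverse=True)

leisure_user = ["travel", "travel", "sport"]
news_user = ["news", "news", "politics"]

# Same query, two different worlds: travel links for one user,
# coverage of the unrest for the other.
print([r["title"] for r in personalized_rank(RESULTS_FOR_EGYPT, leisure_user)])
print([r["title"] for r in personalized_rank(RESULTS_FOR_EGYPT, news_user)])
```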
Another example, termed Inadvertent Algorithmic Cruelty, is Facebook's year-end summary trouble: the feature failed to recognize that not everyone wants to be reminded of what happened last year; life has tragedies as well as parties.
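Again, a toy sketch makes the failure mode plain. This is not Facebook's code; the data and field names are invented. A highlight picker that optimizes for engagement alone will happily surface a funeral photo, because engagement is the only signal it sees.

```python
# Invented posts -- note the most-engaged item is the saddest one.
posts = [
    {"caption": "Birthday party!", "reactions": 40, "sentiment": "positive"},
    {"caption": "Saying goodbye to Dad", "reactions": 120, "sentiment": "grief"},
]

def naive_highlight(posts):
    # Engagement-only objective: the "sentiment" field, the one signal
    # that would prevent the cruelty, is never consulted.
    return max(posts, key=lambda p: p["reactions"])

print(naive_highlight(posts)["caption"])  # "Saying goodbye to Dad"
```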
There has been huge growth in the use of technology to help brands manage their relationships with fans and customers, and we are already seeing the flaws. The corporate world should be heading towards what we refer to as the "Human Era," brands treating people as people, but the use of AI can send things the wrong way and make them much, much worse. The organizations and brands that especially need to watch out are those whose product or service is low-emotion, whose relationships are automated, and where category differentiation is low: banks, insurers, and energy and utility companies are the most obvious examples.
If relationships between organizations and individuals need to be more human, yet the management of those relationships becomes increasingly dependent upon technology, isn’t the only way forward to ensure that technology has human values in its DNA?
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":1747984,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"A"}']
To make this happen we need to look at these questions:
1. Is it possible to give machines human values?
2. Can we agree on what those are?
3. Should the creators of AI be “compelled” to program them in?
4. What is the mechanism for making this happen? Government?
5. Would we allow exceptions such as military drones?
As of today, I would guess the answers to these questions, at least in the U.S., tend towards:
1. Yes
2. No
3. Yes and no
4. Absolutely no idea
5. Definitely
Bill Gates et al. are alarmed, but not, in my view, alarmist. We cannot afford to sleepwalk into a world where we cede control. If we do, our relationship with machines can take three forms: we become peripheral and ignored, pets and patronized, or pests and exterminated. Enter Arnie.
Ian Wood is Senior Partner in Brand Strategy at Lippincott.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":1747984,"post_type":"guest","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,","session":"A"}']