I recently consulted with the US Navy on all things “transhuman.” In those conversations about how science and technology can help the human race evolve beyond its natural limits, it was clear the military is keen to replace human soldiers with both fighting and peacekeeping machines, so that American service members never have to come under fire or be in harm’s way.
However, it’s the peacekeeping technology that is particularly interesting to many civilians. While you wouldn’t want an armed Terminator in your home, you might like a robot that travels with you and offers personal protection, like a bodyguard. In a Travelzoo survey of 6,000 participants, nearly 80 percent said they expect robots to be a significant part of their lives by 2020, and that those robots might even join them on holidays.
The robotics industry is already considering this and has recently debuted some security models. A few months ago, China unveiled its Anbot, which can tase people and be used for riot control. South Korea already uses mobile robot guards in its prisons. Even in San Francisco, you can rent robot guards to protect your business and property. However, the rental company, Knightscope, recently came under fire after one of its robots accidentally ran over a toddler at the Stanford Shopping Center.
Needless to say, problems are expected as the burgeoning field of human-robot interaction evolves. The good news is there are already years of experience to draw on: robotic dogs have offered a form of human-robot interaction and protection for nearly a decade. Dozens of brands and models are available, some of which offer motion-detector warnings against burglars and can be programmed to bark at intruders. While some will say robot pets are no more effective than well-placed cameras, microphones, or speakers, they do offer genuine, personal protection for consumers, not to mention a sense of novelty and enjoyment.
Of course, it’s not just robots that offer personal security. Security drones that can follow you while you mountain bike in the forest, or accompany your child on the walk to school, are already here. And driverless cars that take college students home from a bar are just months away from hitting the market. Even personal residences are now being wired with basic AI systems, including fire alarms, that can communicate with residents and alert police if something is wrong. Some smartphone apps, like SafeTrek, alert authorities if a held phone is dropped, which can be especially useful if you’re walking in a dangerous area late at night.
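As a rough illustration only: a drop alert like this can be built on a phone’s accelerometer, which reads near zero g while the device is in free fall. The sketch below is a generic, hypothetical version of that idea in Python, not SafeTrek’s actual implementation.

```python
import math

FREE_FALL_THRESHOLD_G = 0.3  # total acceleration below ~0.3 g suggests free fall
MIN_FALL_SAMPLES = 5         # require several consecutive readings to avoid false alarms

def is_free_fall(samples):
    """Return True if the most recent accelerometer samples look like free fall.

    samples: list of (x, y, z) acceleration readings in units of g.
    A phone held at rest reads about 1 g total; a falling phone reads near 0 g.
    """
    recent = samples[-MIN_FALL_SAMPLES:]
    if len(recent) < MIN_FALL_SAMPLES:
        return False
    return all(math.sqrt(x*x + y*y + z*z) < FREE_FALL_THRESHOLD_G
               for x, y, z in recent)

def on_sensor_update(samples, alert_authorities):
    # In a real app this would run inside the OS sensor callback;
    # alert_authorities is a hypothetical hook for contacting emergency services.
    if is_free_fall(samples):
        alert_authorities()
```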
The age of near-total robot security protection will likely arrive in less than a decade. America got a small inkling of that when it was reported that the Dallas shooter, who took the lives of five police officers, was killed by a police robot that detonated a bomb. Media reported it as the first known killing of a human by a police robot. And given the increasing number of police forces around America that want to own a robot, it is surely the start of a much broader system of security across the land. In my own presidential campaign, for example, I advocate for tens of thousands of drones monitoring America’s borders instead of the giant wall Donald Trump proposes; drones would cost far less and be far more environmentally friendly.
While the robot that killed the Dallas shooter is not yet capable of offering much security to the average person, the writing is on the wall. Executives, public figures, and even presidential candidates like myself worry about personal safety. I’d love to have a robot regularly watching over me to make sure no one harms me or my family, and so might millions of other people. They may want robot protection the same way tens of millions of Americans have guard dogs: to protect family, property, and person. Who doesn’t want a protective butler programmed to care most about your safety? It could even greet guests at the front door or accept packages from UPS.
The four US Navy officers I spoke with recently agreed the future would be heavily dominated by robots — and that those robots could likely be made to protect people. What wasn’t so easily determined is who would decide the rules of protection and engagement. Do we follow Asimov’s outdated laws? Do we give robots power to kill in the pursuit of safety? Will a government body be responsible for regulating robots? These are the types of questions that will dominate conversations around robot bodyguards as they become more of a reality, questions I’ll look to address in my keynote next week at RoboBusiness 2016 in San Jose.
Multiple government agencies will have to be involved in regulating personal robot bodyguards, including one central agency, which I have advocated creating, that greenlights robot endeavors and applications in the first place. Even more so than the Internet, the age of robots presents a plethora of ethical questions humans have not faced before. And given that we humans haven’t yet mastered the art of providing security for ourselves (witness this year’s controversy over police brutality across America), it’s clear we don’t have all the answers needed to fail-proof the process for robots. Philosophers, ethicists, roboticists, and politicians will have to come together to determine the best path forward and to decide where liability will fall when failures occur.
One thing is for sure: despite the accidents that will occur, there’s nothing quite like the physical presence of an 8-foot-tall piece of intelligent machinery ready to confront a rogue individual or element when you need it. And bear in mind, those rogue elements aren’t always people; they could include wild animals, venomous snakes, mean dogs, or a smoke-filled burning house.
Interestingly, another issue will be protecting humans against other robots. While developed nations might program robots to be our bodyguards — there’s always the possibility that, in nations where civil strife is prominent, people could do exactly the opposite. Will there be a market for robots that carry out dirty or criminal work — robots programmed outside of all civility? Will a black market for those types of robots emerge? The answers to those questions are almost certainly yes.
Then there’s the question of machine intelligence. If a machine is smart enough to know the difference between a bad guy and a good guy, would it have any sense of whether it itself is good or bad? Humans, and governments, don’t want machines making too many decisions on their own, at least not until we have nearly perfected security robots, and that is a long way off. The good news is that self-driving car technology will be roughly five years ahead of robot security guard technology, and I’m sure it will provide a wealth of real-life experience to draw on, especially around the complex moral choices machine intelligence faces. The classic question with self-driving cars is, when faced with a choice between harming a family of five or harming a single person, what does the car choose? This type of programming must also be built into robot security guards.
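To make that concrete, here is a deliberately naive, hypothetical harm-minimizing rule in Python. No real vehicle or guard robot decides this simply; the sketch only shows why the choice of what to minimize is itself the ethical question.

```python
def choose_action(options):
    """Pick the action with the lowest expected harm.

    options: list of (action_name, people_at_risk, probability_of_harm) tuples.
    Expected harm = people_at_risk * probability_of_harm. This is a bare
    utilitarian rule; whether raw headcount is even the right metric is
    exactly what ethicists and regulators must settle.
    """
    return min(options, key=lambda o: o[1] * o[2])[0]

# The classic dilemma: staying on course endangers five people,
# swerving endangers one. A pure harm-minimizer swerves.
print(choose_action([("stay_course", 5, 0.9), ("swerve", 1, 0.9)]))
# -> swerve
```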
Regardless of all the thorny questions and conundrums coming in the age of robotics, a personal robot bodyguard is something that is just years away from purchase. I suspect many people will want one.
Zoltan Istvan is the 2016 US Presidential candidate of the Transhumanist Party, a political organization dedicated to putting science and technology at the forefront of American politics. To learn more about Zoltan and his views on robot bodyguards, visit www.robobusiness.com/conference/speakers/zoltan-istvan/.