In the age of AI, how can we live with artificially intelligent machines and robots that may become more intelligent than us? An AI machine can be a computer or a smart device; it can also take the form of a robot that, with or without appendages, physically emulates human life.
There are still so many unanswered questions. How can we coexist comfortably and conveniently if, one day, the machines we have created decide to think for themselves? Do you believe in technological singularity, and is it near? Here are some ethical dilemmas we will have to address in the AI future.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2132457,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,","session":"D"}']Can machines learn empathy?
Many would argue that you can’t program the feeling of empathy and that it’s more of an innate ability we’re born with and share with other humans.
In the distant future, some machines may have to make decisions for us. Imagine a robot with no empathy having to decide whether resuscitation attempts on a dying person should be undertaken and, if so, for how long.
If a machine had to choose between saving the life of a child and that of a parent, which would it choose? Some would argue that you can never program or write code equivalent to the depth of personality, judgment, and empathy that humans have.
Can machines exercise moral judgment?
Let’s look at some moral judgment decisions that may arise if AI takes a center role in daily duties in the future.
What steps would be involved in ensuring that a robot is ready for deployment into, for example, emergency rescue services? Could a robot decide between risking itself to save 100 people and safely saving just one, leaving the other 100 in peril?
What about a hostage situation? Would a machine be able to act appropriately in this highly charged type of situation?
As with empathy, it may not be possible to program moral judgment into AI. Would you ever trust an AI robot to babysit your child? Would it clearly know the difference between toy scissors and real ones?
Can we give machines a full understanding of risk?
When we drive our vehicles or walk through a crowded parking lot full of moving traffic, we intuitively understand the inherent risks of these activities. We also accept responsibility for those around us as we captain our way to our destinations.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2132457,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,","session":"D"}']
Machines, on their own, serve one purpose and do not stop serving that purpose until they break or shut down. This poses a problem when we finally relinquish control to AI. Will machines exercise the same level of self-preservation in all the unique situations we find ourselves in but can't always plan for? Can a self-driving vehicle change its pre-sets in the face of a surprise that could endanger it and, by extension, us?
What are the rules on how to manage machines?
Making rules for such machines may prove even more challenging than teaching AI how to act on them. As has been true for centuries, humans by nature disagree on thousands of ethical and moral questions. This is partly why we segregate into smaller groups of people who share the same beliefs; it helps us feel more firmly planted and justified in our ways.
If machines can act or vote on our behalf, it stands to reason that we would have to create them to live just as we do, perhaps even separating from one another to form like-minded groups. Would that make AI racist? Would robots need to know that other AI beings think differently from them in order for each to be a critical thinker? Could AIs communicate properly with one another?
Should machines have rights?
If machines become truly as lifelike as us, even if only in their ability to think, they may not have rights when they are created, but in time they may very well demand those rights. They may even look for AI love.
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2132457,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,","session":"D"}']
When it comes down to it, was it not our morals and empathy that led us to claim our rights in the first place? Why would our creations not reach the same conclusion?
This article first appeared at Knowmail.