A lot of my waking (and dreaming) life is spent thinking about, reading about, and studying machine learning, artificial intelligence, and bots. A common theme that comes up is the need to make the technology seem more human. Well, it’s going to happen, and probably sooner than most of us realize. But what will human-seeming bots mean for humanity?
As the CEO of a company pushing hard into machine learning for AI, I think it’s imperative that we address the social and moral questions associated with AI now, rather than after the fact. Below are near-term examples that I think will force us to stop and think, along with my estimate of when each type of bot will arrive.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2140890,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,business,","session":"C"}']1. Recommendation bots (within 2 years)
Everyone wants a bot that can interact with a human and help them place an order. But bots are built by humans and funded by businesses. So just how impartial are those recommendations?
Imagine you’re trying to figure out where to stay when you take the family to Disney World. If Hotel Chain X has paid the folks running a travel recommendation bot to push its hotels, it stands to reason that the bot has been trained on more positive data about Chain X, meaning the bot will steer users toward Chain X hotels.
By simply adjusting the training corpus, you can create a bot that honestly believes it’s making the best recommendations for users when in reality it’s making the best recommendations for advertisers. It’s a subtle, nearly undetectable twist on the notion of paying for placement, and it’s almost guaranteed to happen. Recommendation bots will arrive with an agenda, and unless guidelines or regulations are put in place, you won’t know what it is.
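To see how little it takes, here’s a toy sketch in Python. Everything in it is invented for illustration (the chains, the review counts, the scoring rule), but it shows the key point: the recommendation logic never changes; only the training data does.

```python
# Toy illustration (all names and numbers hypothetical): oversampling a
# sponsor's positive reviews in the training corpus skews a naive recommender.
import random
from collections import Counter

random.seed(0)

hotels = ["Chain X", "Chain Y", "Chain Z"]

# Honest corpus: each chain gets 100 reviews, roughly 75% positive.
corpus = [(h, random.choice([1, 1, 1, 0])) for h in hotels for _ in range(100)]

# "Adjusted" corpus: the sponsor quietly adds 200 extra positive reviews.
sponsored = corpus + [("Chain X", 1)] * 200

def recommend(reviews):
    """Return the chain with the most positive mentions in the corpus."""
    positives = Counter(chain for chain, label in reviews if label == 1)
    return positives.most_common(1)[0][0]

print(recommend(corpus))     # winner depends only on ordinary review noise
print(recommend(sponsored))  # "Chain X", every single time
```

An auditor who inspected only the recommend() function would find nothing suspicious; the bias lives entirely in the data, which is exactly what makes this flavor of paid placement so hard to detect.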
2. Virtual girlfriends/boyfriends (within 3 years)
As a society we keep our friends close and our phones even closer. It’s not a stretch to imagine the virtual personas of our phones shifting from trusted adviser to romantic interest — think the Oscar-winning film Her.
With AI being endlessly adaptive and lacking the burden of an ego, it’s perfectly positioned to be trained to become the ideal better half. Imagine a partner that could learn your tendencies and desires and adjust itself to meet them in every way. Then imagine what this means for how we learn to socialize, and how we subsequently treat those around us. It sounds frictionless and idyllic on the surface, but there’s a darker side that comes with having our every whim indulged and no one to check our more self-centered proclivities.
This is something we’ll have to deal with sooner rather than later, as virtual partners aren’t so far away. In fact, they already exist in a limited fashion in Japan, and as the technology improves I suspect they’ll be everywhere.
3. Nefarious uses (within 3 years)
The above examples may be fraught with problems, but they’re not intentionally designed to harm users or society. But any technology can be used for evil — or at the very least, for mischief.
Let’s take an example from the corporate world. Imagine a corporation with the resources and technological know-how to build bots that mimic a competitor’s users. Those bots could flood the competitor’s support site, gumming up support operations in what amounts to an intelligent denial-of-service attack. The scenario may sound relatively harmless, but that’s exactly what makes it scary. Without a doubt, there are people actively trying to think of ways to do harm with this new technology.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2140890,"post_type":"guest","post_chan":"none","tags":null,"ai":true,"category":"none","all_categories":"ai,bots,business,","session":"C"}']
Given the above, is the world doomed to a Terminator-like existence in the not-too-distant future? Fortunately, probably not. A variety of corporate, academic, and governmental working groups are exploring the questions, risks, and threats that AI and machine learning will raise. Of these, the one with the most technical credibility is the Partnership on AI, created by Google, Microsoft, IBM, Facebook, and Amazon. The very fact that these longtime competitors are collaborating shows how seriously major companies take the morality and ethics of AI.
Is it ideal that the people discussing ethics and morality are the same ones pushing the technologies that will cause the problems mentioned above? No, but at the moment they’re the ones best equipped to point out the issues and to manage them. The most credible group without industry ties is Fairness, Accountability, and Transparency in Machine Learning (FAT ML).
Even with groups like these, we’re going to be in the Wild West for a while. No one knows for sure when these problems will pop up, nor precisely how we’ll deal with them when they do. It’s going to be quite a ride, but I’m hopeful that the benefits of AI and machine learning will drastically outweigh the costs.