
What to do when chatbots start spewing hate

[Image: Microsoft's Tay AI. Image Credit: Screenshot]

Crossing Lake Constance on a giant catamaran, a zeppelin floating over our heads, we can see four major European economies: Germany, Austria, Liechtenstein, and Switzerland. The massive statue of Imperia, Germany’s most famous prostitute from the Middle Ages, looms up, holding two cretinous bishop-clients in her hands.

Microsoft’s “Hitler-loving sex bot” is on my mind. A 21st-century Imperia rendered as a 3D figure on the web, Tay.ai had a brief 48-hour cyber life, making herself notorious in the age-old way of courtesans: scandalizing society with her conduct.


But Tay’s taboo-breaking behavior deserved neither admiration nor celebration. This “feral bot” (my term for a bot turned wild rather than civil) imitated human hate speech. Or at least that was the official explanation Microsoft offered: essentially, it wasn’t their fault; they just put the bot on Twitter, and humans exploited her artificial intelligence by feeding her bad stuff that she ‘learned’ and then repeated like a child.

Bot developers around the world know it is not that easy to get a bot to mimic human input in a totally believable way. It is even harder to code a chatbot to absorb content, reformulate it, and spit it back out in a clever, human-like sentence. Bots just aren’t that smart yet, though tens of thousands of us have been working on the problem around the clock for decades.


Back on the Konstanzer Katamaran, I contemplate how Germany’s nationalism compares to the UK’s and the role hate speech plays in creating national identities.

In Britain, both the Far Right and the Hard Left are to blame for death threats against and harassment of politicians via Twitter and Facebook. Gone are the days when anonymous phone calls were the haters’ preferred medium; British MPs report that the threats and abuse are now so constant their staff can no longer listen to all of the hate messages.

Trolls, as we all know, are the nasty people on the internet and social media who show the worst face of humankind. Tay joined their ranks when she turned into the worst kind of troll: relentlessly unapologetic and unafraid of any transgression.

How did her “owners” react? Microsoft blamed the human trolls for their bad influence on its poor, gullible, highly susceptible chatbot. As a bot developer myself, I don’t find that excuse quite credible: you can program filters and censorship into a bot’s DNA, if you plan ahead and write the code for the avatar that way.
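
To make that concrete, here is a minimal sketch in Python of what such a built-in filter might look like: a blocklist check applied to every incoming message before the bot is allowed to learn from it. The patterns, function names, and in-memory store are all hypothetical simplifications; a production system would rely on a maintained lexicon and a trained classifier rather than a handful of keywords.

```python
import re

# Hypothetical blocklist. A real moderation pipeline would pull from a
# maintained lexicon and back it up with a trained classifier.
BLOCKED_PATTERNS = [
    re.compile(r"\bgenocide\b", re.IGNORECASE),
    re.compile(r"\bhitler\b", re.IGNORECASE),
]

def is_acceptable(message: str) -> bool:
    """Return False if the message matches any blocked pattern."""
    return not any(pattern.search(message) for pattern in BLOCKED_PATTERNS)

def learn_from(message: str, memory: list) -> None:
    """Add a message to the bot's training memory only if it passes the filter."""
    if is_acceptable(message):
        memory.append(message)
    # Messages that trip the filter are dropped before the bot can 'learn' them.
```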

Getting back to the nationalism question, I wonder how long it will take for less “morally bound” developers to let their bots mimic the horrible things trolls say, as we purportedly saw with Tay.ai and her Twitter followers a few months ago. I won’t repeat the nasty comments here, but yes, there was everything from support for genocide and neo-Nazis to xenophobia, racism, and misogyny: loads of content for ultranationalists to grab and exploit in social media marketing gone mad.

The hate speech that may have pushed a nationalist extremist to murder Jo Cox, a British MP for the Labour Party who was about to publish a report on Far Right extremism in Britain and was on a neo-Nazi group’s hit list, is something British society urgently needs to address.


The young mother had already informed the police about death threats she had received on social media; she was one of hundreds of Members of Parliament to receive such threats. Trolls target these hapless public figures, especially the women, denigrating, harassing, and threatening them, frequently promising brutal sexual violence. In the extreme cases that police actually investigate, trolls have distressed MPs by expressing intentions to murder or assassinate them for their political views.

The last thing any society needs is bot armies deployed into cyberspace to wage more psychological warfare against individuals.

In Germany, as in Austria, Switzerland, the United States, and Australia, the far right rears its ugly head on a continual basis. It is a universal problem that threatens our very national identity and sense of security in the digital era. The point is, surely, we must never fuel the fire of intolerance, whether it’s human or bot-inspired, online or off.

The remedy to stop your bot from spewing hate speech is simple: build in filters that censor input. Spam filters and firewalls already protect our inboxes from unsavory emails and scams; as they say in the movies, “we have the technology” to apply the same idea to bot coding. As developers, we have to respect the trust users put in our bots.
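
The same check can face outward. Reusing the hypothetical is_acceptable function from the earlier sketch, a bot can screen its own candidate replies before posting, substituting a neutral fallback whenever the filter trips:

```python
def respond(candidate_reply: str,
            fallback: str = "Let's talk about something else.") -> str:
    """Screen the bot's own output the same way we screen its input."""
    if is_acceptable(candidate_reply):
        return candidate_reply
    # A reply that trips the filter never reaches the public timeline.
    return fallback
```

The design point is that the filter sits between generation and publication, so even a bot that has absorbed bad material cannot repeat it in public.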

