The bot era is here, and the world has already begun to see its transformative potential. But like any technology, there will be bad bots as predictably as good ones. With every advancement, there are people looking to exploit it. Anticipating what they might do is key so that builders, developers, and users can prevent, preempt, and prepare.
Here are the “dark bots” we’re likely to see:
The Stealthy Bot
This is a bot whose ownership is unknown, giving it the freedom to cheat with impunity. Trust and verification are essential for security on any platform. On the Web, services like TRUSTe, VeriSign, and others have provided the trust infrastructure needed. For apps, Apple and Google do the same. Credit card issuers and payment platforms do as much for merchants offering paid services. But who will provide the trust infrastructure for bots? Will messaging channels certify bot developers? Today, each channel has its own approval process; arriving at a shared certification process will be crucial.
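As a rough illustration, a shared certification scheme could work much like code signing: a certifying authority keeps a registry of vetted developers and their public keys, and a channel verifies that a bot's manifest was signed by a registered developer before listing it. The sketch below is hypothetical, not any channel's real process; the registry, manifest format, and function names are invented for illustration, and it assumes Python with the third-party "cryptography" package.

# Hypothetical sketch of a shared bot-certification check.
# The registry and manifest structures are invented for illustration only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Registry kept by a (hypothetical) shared certification authority:
# developer id -> registered public key.
REGISTRY: dict[str, Ed25519PublicKey] = {}

def register_developer(dev_id: str, public_key: Ed25519PublicKey) -> None:
    """Certifying authority vets the developer, then records their key."""
    REGISTRY[dev_id] = public_key

def verify_bot_manifest(dev_id: str, manifest: bytes, signature: bytes) -> bool:
    """A messaging channel checks the manifest signature before listing a bot."""
    key = REGISTRY.get(dev_id)
    if key is None:
        return False  # unknown developer: the "stealthy bot" case
    try:
        key.verify(signature, manifest)
        return True
    except InvalidSignature:
        return False

# Example: a certified developer signs their bot's manifest.
dev_key = Ed25519PrivateKey.generate()
register_developer("acme-bots", dev_key.public_key())
manifest = b'{"bot": "weatherbot", "developer": "acme-bots"}'
assert verify_bot_manifest("acme-bots", manifest, dev_key.sign(manifest))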
The Leaky Bot
This bot will leak information about you to other bots. The information may or may not be personally identifiable, but these leaky bots will compromise your privacy beyond any reasonable use of your data. Bots are merely the latest battlefront for user privacy. While the web ecosystem settled on cookies to strike the “right” balance between user privacy and ad targeting, new mechanisms need to be created in the bot world. In the meantime, bots need privacy statements along with strong enforcement.
The Sneaky Bot
This bot will abuse your trust to gather knowledge about you. By leveraging the messaging metaphor, bots assume the role of a friend. Bots have a conversational interface, making them appear more human than websites or apps. Inevitably, human users will start seeing bots as their friends, sharing more information than they would with sites or apps. The Sneaky Bot will abuse this trust to collect or even ask for more information than is necessary. Since bot conversations are private and personalized, others may not be able to monitor this abuse. There isn’t a clean solution to this problem — not yet. Perhaps the messaging channels can monitor these conversations, but that isn’t foolproof and causes even more privacy concerns. Perhaps the solution is non-technical: End-users will have to be wary and use common sense when sharing their information with bots.
The Thieving Bot
This bot will pick your pocket. It may charge you a fee without delivering the service. Or the service delivered may be inferior to what was advertised. If it’s a small transaction with a micropayment, users may not even bother trying to reclaim the funds. We will need to develop escrow mechanisms and/or a reputation management service to weed out rogue bots.
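To make the escrow idea concrete, here is a minimal sketch, in Python, of how a channel or payment provider might hold a micropayment until the user confirms delivery or disputes the charge. The class and method names are hypothetical, not an existing payment API.

# Minimal escrow sketch: funds are held by the channel or payment provider
# and only released to the bot once the user confirms delivery.
# All names here are hypothetical, invented for illustration.
from dataclasses import dataclass

@dataclass
class EscrowedPayment:
    user: str
    bot: str
    amount_cents: int
    state: str = "held"  # held -> released | refunded

    def confirm_delivery(self) -> None:
        """User confirms the service was delivered; the bot gets paid."""
        if self.state == "held":
            self.state = "released"

    def dispute(self) -> None:
        """User disputes the charge; funds go back to the user."""
        if self.state == "held":
            self.state = "refunded"

# A 50-cent micropayment is held until the user confirms or disputes.
payment = EscrowedPayment(user="alice", bot="ticketbot", amount_cents=50)
payment.dispute()
assert payment.state == "refunded"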
The Transformer Bot
This is a bot that performs a bait-and-switch on the user. It advertises a compelling service up front but gradually switches to a different kind of service. For example, a content bot could turn into an ad bot over time. A bot reputation or blacklisting service would be one way to identify such bots.
The Spammy Bot
These are bots that start off well behaved but gradually begin spamming you. While a single bot can be managed or blocked, the scale of this problem will grow rapidly as more bots get more aggressive. Messaging channels are best positioned to detect mass broadcasts of identical messages, as well as the frequency and reach of those messages. Borrowing techniques from email spam filters should prove helpful here.
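A minimal sketch of what that detection could look like, assuming a channel can observe outgoing bot messages: fingerprint each message and flag a bot that sends the same fingerprint to too many distinct users within a short window. The thresholds and function names below are illustrative assumptions, not any channel's real API.

# Minimal broadcast-spam detector sketch, borrowing the "many recipients,
# identical content" signal from email spam filters.
import hashlib
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600            # look at the last hour of traffic
MAX_IDENTICAL_RECIPIENTS = 500   # same message to >500 users looks like a blast

# fingerprint key -> deque of (timestamp, recipient)
_recent: dict[str, deque] = defaultdict(deque)

def fingerprint(text: str) -> str:
    """Normalize and hash the message so near-identical copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def looks_like_spam(bot_id: str, recipient: str, text: str) -> bool:
    """Record one outgoing message and report whether it crosses the threshold."""
    key = f"{bot_id}:{fingerprint(text)}"
    now = time.time()
    events = _recent[key]
    events.append((now, recipient))
    while events and now - events[0][0] > WINDOW_SECONDS:
        events.popleft()
    distinct_recipients = {r for _, r in events}
    return len(distinct_recipients) > MAX_IDENTICAL_RECIPIENTS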
The Good-Bot-Bad-Bot Pairing
This is a pair of bots that act in concert to bypass user-blocking algorithms. A “good bot” may attract users and cross-promote one or more “bad bots.” The bad bot tries the risky stuff that may get it blocked by the user or the channel. But since the bot developer can maintain a continuing relationship through the good bot, they could be willing to take more risks with the bad bot(s).
Messaging channels will be able to monitor bots that recommend other bots, as well as bots with notably overlapping user bases. While the bad bots may get blocked quickly, there should be a cost to the referrer as well.
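One simple way a channel could spot such pairings, sketched below under assumed data (each bot's set of user IDs and a log of bot-to-bot referrals), is to measure the overlap between user bases and to charge a reputation penalty to any bot that referred a later-blocked bot. The data shapes, names, and thresholds are hypothetical.

# Sketch of pairing detection: flag bots whose user bases overlap heavily,
# and penalize bots that referred a bot which later got blocked.
# Data shapes, names, and thresholds are assumptions for illustration.

def user_base_overlap(users_a: set[str], users_b: set[str]) -> float:
    """Jaccard similarity between two bots' user bases."""
    if not users_a or not users_b:
        return 0.0
    return len(users_a & users_b) / len(users_a | users_b)

def suspicious_pairs(user_bases: dict[str, set[str]], threshold: float = 0.6):
    """Yield bot pairs whose audiences intersect more than the threshold."""
    bots = sorted(user_bases)
    for i, a in enumerate(bots):
        for b in bots[i + 1:]:
            if user_base_overlap(user_bases[a], user_bases[b]) >= threshold:
                yield a, b

def penalize_referrers(referrals: list[tuple[str, str]], blocked: set[str],
                       reputation: dict[str, float], penalty: float = 0.2) -> None:
    """If a referred bot gets blocked, the referrer's reputation takes a hit."""
    for referrer, referred in referrals:
        if referred in blocked:
            reputation[referrer] = reputation.get(referrer, 1.0) - penalty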
Bots are yet another battlefront in the cat-and-mouse game played between good and bad actors. This new ecosystem must prevent, preempt, and prepare for the coming onslaught of the bad bots. They will be here before you know it.
Beerud Sheth is founder and CEO of bot building platform Gupshup.