Twitter today said it has suspended more than 125,000 accounts that were used to threaten or promote terrorist acts, primarily those related to ISIS.

“Like most people around the world, we are horrified by the atrocities perpetrated by extremist groups. We condemn the use of Twitter to promote terrorism and the Twitter Rules make it clear that this type of behavior, or any violent threat, is not permitted on our service,” Twitter said in a blog post. “As the nature of the terrorist threat has changed, so has our ongoing work in this area.”

The effort comes after Telegram announced that it had blocked 78 channels that had engaged in ISIS-related activity.

People affiliated with ISIS have previously used Twitter to share pictures and other content, promoting the group's cause to millions.

Twitter said today it has increased the size of the teams that review reports of terrorism-related activity on the service and, as a result, has begun responding to those reports faster.

The company noted that “there is no ‘magic algorithm’ for identifying terrorist content on the Internet.” That’s a notable admission given Twitter’s deep bench of machine learning talent. Human curation, then, is where Twitter is doubling down in order to deal with terrorist rhetoric on the social network.

Last month, Twitter updated its rules around hateful conduct. “You may not make threats of violence or promote violence, including threatening or promoting terrorism,” one rule states.
