One of the biggest challenges facing any social platform is abusive trolls, and Internet companies are increasingly having to tread a fine line between supporting freedom of speech and censoring abusive and threatening content.
Yesterday, Reddit — the popular social news site that lets users submit, discuss, and “upvote” articles and anecdotes — announced it was removing five subreddits (topic-specific sub-sections) it deemed conducive to harassment: r/fatpeoplehate, r/hamplanethatred, r/transfags, r/neofag, and r/shitniggerssay.
Aware it would be accused of censorship, Reddit attempted to preempt such criticism by saying that it’s “banning behavior, not ideas.” In its announcement, Reddit added:
Our goal is to enable as many people as possible to have authentic conversations and share ideas and content on an open platform. We want as little involvement as possible in managing these interactions but will be involved when needed to protect privacy and free expression, and to prevent harassment.
It is not easy to balance these values, especially as the Internet evolves. We are learning and hopefully improving as we move forward. We want to be open about our involvement: We will ban subreddits that allow their communities to use the subreddit as a platform to harass individuals when moderators don’t take action. We’re banning behavior, not ideas.
Given that anyone can create a subreddit on any topic, Reddit is inviting users to report any other subreddits they believe contravene its anti-harassment rules. Still, many commenters accused Reddit of arbitrary censorship, leading cofounder Alexis Ohanian to defend the decision on Twitter.
But Reddit wasn’t the only company to take a swipe at trolls and online bullies yesterday. Twitter announced it would let users export their “block” lists, essentially creating a crowdsourced mechanism for sharing information about nuisance Twitter users. Simply take someone else’s list, import it into your own account, and you’re done.
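For a sense of how lightweight that sharing mechanic is, here is a minimal sketch of combining a shared block list with your own before re-importing it. It assumes the exported file is a plain CSV with one account ID per row; the file names and layout are illustrative, not Twitter’s documented format:

```python
import csv

def merge_block_lists(own_path, imported_path, output_path):
    """Merge a shared block list into your own export, dropping duplicates."""
    blocked_ids = set()
    for path in (own_path, imported_path):
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if row:  # skip any blank lines
                    blocked_ids.add(row[0].strip())
    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        for user_id in sorted(blocked_ids):
            writer.writerow([user_id])

# Hypothetical file names: your own export plus one shared by another user.
merge_block_lists("my_blocks.csv", "shared_blocks.csv", "merged_blocks.csv")
```

The appeal for users is that the heavy lifting of identifying nuisance accounts only has to be done once, by someone else, and then simply copied.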
This comes less than two months after Twitter updated its abuse policies, revealing it would also let support staff “lock” abusive accounts for set periods while trialing a feature that would “limit the reach” of abusive tweets. It’s all part of a broader push by Twitter to create a friendlier experience on the social network.
In a memo sent earlier this year, Twitter CEO Dick Costolo admitted: “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.”
Twitter and other social platforms have traditionally been reluctant to interfere with content shared on their networks, but the fact is they are money-making businesses. Twitter has had an onboarding problem for a long time, and when existing or would-be new users are pushed away due to trolls and abuse, it has to act.
Reddit too has a long history of toxicity. But with a $50 million investment under its belt, the company wants to shake off the trolls and negative reputation they bring and be taken seriously as a media company, with original video content now on its agenda.
Facebook has had issues of its own in recent times. The company has been criticized in the past for not removing content deemed offensive, including decapitation videos, though it has since U-turned on some of those policies. And following the horrific terrorist attack on Paris-based satirical magazine Charlie Hebdo back in January, Facebook CEO Mark Zuckerberg vowed to reject all attempts by extremists to censor the social network. He said:
Facebook has always been a place where people across the world share their views and ideas. We follow the laws in each country, but we never let one country or group of people dictate what people can share across the world.
Yet as I reflect on yesterday’s attack and my own experience with extremism, this is what we all need to reject — a group of extremists trying to silence the voices and opinions of everyone else around the world.
I won’t let that happen on Facebook. I’m committed to building a service where you can speak freely without fear of violence.
But Facebook has been accused of selective censorship, as it seems to take some posts down but not others. It has had a longstanding battle with mothers posting breastfeeding pictures, for example, and it recently hit the headlines again after removing one such photo in response to a complaint.
Tech companies would prefer not to play the role of censors or community managers, but with businesses to run, users to attract, and investors to appease, they really have no choice.
Any company that hosts user-generated content faces an ongoing battle to create “safe” platforms where communities can flourish without fear of abuse, threats, and general trolling. If content is removed or accounts are blocked, the anti-censorship brigade comes out in force. If companies adopt a laissez-faire attitude, they’re accused of supporting bullies. They really are caught between a rock and a hard place.