After years of criticism for the way in which it handles harassment, threats and bots on the platform, Twitter has unveiled a new filter designed to automatically prevent users from seeing threatening messages. The social media platform has moved to ban indirect threats of violence and introduced temporary suspensions for accounts that fall foul of its policies.
This feature previously existed but was available only to people with verified accounts, a very small fraction of its 313 million users. Now, however, Twitter has admitted to its previous failings and is acknowledging its responsibility to curb online abuse by offering the feature to all users on the platform.
A few days ago, in one of its biggest crackdowns, Twitter announced that over the previous six months it had suspended more than 235,000 accounts belonging to extremists.
The new feature takes into account a wide range of signals and context that frequently correlate with abuse, including the age of the account itself and the similarity of a tweet to other content that the Twitter safety team has previously and independently determined to be abusive.
It will not affect the ability to see content that you’ve explicitly sought out, such as tweets from accounts you follow, but instead is designed to help limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.
Broadly speaking, the overhaul follows Twitter CEO Dick Costolo's admission of the platform's failure to date in getting on top of abusive content. He has publicly said he is embarrassed by Twitter's failure to tackle this problem and has acknowledged the company's responsibility to get control of it.
One aspect the new filters do not seem to address is the process by which users report abuse to the platform and the dubious way the site reviews those complaints. This is a missed opportunity: the overhaul would have been the perfect moment to address this widely complained-about issue.
The new features ease some of the fatigue of using Twitter every day, but they still allow harassment to lurk just beneath the surface in an unchecked notification. And it is the persistence of these tweets that creates a dilemma for Twitter: does abuse truly cease to exist, or to matter, if its targets no longer see it? Is the platform continuing to nurture an environment for trolls while simply masking their activity behind blinkers for the targets of abuse?
Twitter has expanded its definition of what counts as a threat, which is a positive move by the platform and should be widely welcomed. It's great to see Twitter taking genuine steps to address the issue of abuse. This might not be a perfect solution, or address the issue fully, but it's never too late for social media companies like Twitter to take abuse seriously and begin tackling the problem head-on.