Twitter has a known and admitted problem with abusive content – one that truly embodies the phrase "no silver bullet." How do you stop people from harassing others? How do you prevent people from tweeting abusive content? You can't ban everyone. You surely can't stop people from creating multiple accounts. You can't prevent people from hurling verbal garbage at others.
Especially when you continue to rally under the flag of free speech.
To combat the problem, Twitter has taken a bunch of baby steps of late. It tripled the team that handles reports of abuse and harassment, and began asking suspended users to verify a phone number and delete offending tweets before reinstatement. It updated its policies to specifically ban revenge porn and other content posted without a user's consent. It made reporting threats to the police a little easier. It implemented a new notifications filtering option.
But now Twitter is deploying a new algorithm to fight abuse. It is testing a way to automatically detect abusive tweets based on a number of factors, including the age of the account and a tweet's similarity to previously flagged tweets.
Twitter's leaving this a bit vague, as the specifics of the tool aren't really laid out in the company's announcement:
"We have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular," says Twitter.
What this likely means is that Twitter won't remove a tweet when it's detected by the system, but will instead hide it from the notifications of the users it mentions.
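To make the mechanism concrete, here's a minimal sketch of the kind of signal-based scoring Twitter describes. This is purely hypothetical: Twitter hasn't published its model, and every function name, weight, and threshold below is invented for illustration. The two signals mirror the ones Twitter names, account age and similarity to content previously judged abusive.

```python
# Hypothetical sketch of signal-based abuse scoring, loosely modeled on the
# signals Twitter describes (account age, similarity to known-abusive tweets).
# All names, weights, and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Tweet:
    text: str
    account_age_days: int


def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two strings (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)


def abuse_score(tweet: Tweet, flagged_examples: list[str]) -> float:
    """Combine two signals into a single score in [0, 1]."""
    # Signal 1: very new accounts are treated as riskier.
    age_signal = 1.0 if tweet.account_age_days < 7 else 0.0
    # Signal 2: resemblance to tweets a safety team already deemed abusive.
    similarity_signal = max(
        (jaccard_similarity(tweet.text, ex) for ex in flagged_examples),
        default=0.0,
    )
    return 0.4 * age_signal + 0.6 * similarity_signal


def should_limit_reach(tweet: Tweet, flagged_examples: list[str]) -> bool:
    # Above a threshold, hide the tweet from the mentioned users'
    # notifications rather than deleting it outright.
    return abuse_score(tweet, flagged_examples) > 0.5
```

Note the design matches Twitter's stated approach of limiting reach rather than removing content: a high score changes where the tweet appears, not whether it exists.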
Twitter execs have floated this sort of approach in the past – saying you might have the right to say something, but Twitter doesn't have to be your megaphone. It's limiting the reach of tweets, without removing them entirely.
This action makes a lot of sense when you consider the company's recent insistence on achieving a balance between protecting users and protecting free speech.
Twitter is also giving its support team another option – the account lockout.
"In addition to other actions we already take in response to abuse violations (such as requiring users to delete content or verify their phone number), we’re introducing an additional enforcement option that gives our support team the ability to lock abusive accounts for specific periods of time. This option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people," says Twitter.
There's another policy change as well: Twitter is making its rules barring threats a bit less specific, giving its support team more leeway to punish certain behavior.
"We are updating our violent threats policy so that the prohibition is not limited to “direct, specific threats of violence against others” but now extends to “threats of violence against others or promot[ing] violence against others.” Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior."
It's a subtle change, but now a user only has to say "I wish someone would kill JANE SMITH" as opposed to "I'm going to kill JANE SMITH" to qualify for a reprimand (or more, depending on how actionable the threat appears).
No silver bullet. There never will be when it comes to online speech.
Image via Garrett Heath, Flickr Creative Commons