Though many social networks, forums, and websites struggle with how to deal with abuse and harassment, Twitter is usually the first one mentioned when the issue comes up. Twitter is more public than, say, Facebook (where you choose your friends), so harassment is far more common there. Twitter is also higher-profile than many other social media sites, with many more users. And when your CEO publicly admits that you really suck at dealing with abuse, people are going to fixate.
But the fixation is justified. Twitter does have a problem with bullying, abuse, and harassment. It’s not just a problem for Twitter’s 300 million+ users – it’s also a problem for Twitter itself. Getting a reputation as a place where foaming mouth vitriol is lurking at every turn is bad for business. Twitter, both for its own sake and the sake of its users, has a vested interest in somehow curbing the level of trolling on the site.
So Twitter has taken steps – most of them incremental. Twitter, by its own admission, is trying to walk a delicate line between protecting users and preserving free speech.
Have you witnessed abuse and harassment on Twitter? What should the social network do about it? Let us know in the comments.
“Balancing both aspects of this belief — welcoming diverse perspectives while protecting our users — requires vigilance, and a willingness to make hard choices. That is an ideal that we have at times failed to live up to in recent years. As some of our users have unfortunately experienced firsthand, certain types of abuse on our platform have gone unchecked because our policies and product have not appropriately recognized the scope and extent of harm inflicted by abusive behavior,” said Twitter general counsel Vijaya Gadde in a recent op-ed.
To combat the problem, Twitter has taken some small steps as of late. It tripled the team that handles reports of abuse and harassment, and began asking suspended users to verify a phone number and delete offending tweets before reinstatement. It updated its policies to specifically ban revenge porn and other content posted without a user’s consent. It made reporting threats to the police a little easier. It implemented a new notifications filtering option.
The most recent, and arguably most substantial, move Twitter has made is to deploy a new algorithm to try to rein in abuse on the site. Twitter is currently working on a way to automatically detect abusive tweets, using indicators like account age and a tweet’s similarity to previously flagged tweets.
“We have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you’ve explicitly sought out, such as Tweets from accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular,” says Twitter.
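Twitter hasn’t said how the feature works under the hood, but the two signals it names publicly (account age and similarity to previously flagged content) suggest a simple scoring approach. Here’s a minimal sketch of the idea in Python. The weights, the bag-of-words similarity measure, and the threshold are all invented for illustration, not anything Twitter has confirmed:

```python
# Hypothetical sketch only: Twitter has not published its actual model.
# Scores a tweet on two signals Twitter names publicly: account age and
# similarity to tweets its safety team has already deemed abusive.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two tweets."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def abuse_score(tweet: str, account_age_days: int,
                known_abusive: list[str]) -> float:
    """Combine the two signals into a 0-1 score (weights are made up)."""
    # Treat newer accounts as riskier; the signal fades after ~30 days.
    newness = max(0.0, 1.0 - account_age_days / 30.0)
    similarity = max((cosine_similarity(tweet, t) for t in known_abusive),
                     default=0.0)
    return 0.4 * newness + 0.6 * similarity

def should_limit_reach(tweet, account_age_days, known_abusive,
                       threshold=0.5):
    """Per Twitter's description, a hit limits the tweet's reach (e.g. it
    is hidden from the target's notifications) rather than deleting it."""
    return abuse_score(tweet, account_age_days, known_abusive) > threshold
```

Note that, consistent with Twitter’s statement, nothing in the sketch looks at whether content is controversial or unpopular – only at behavioral signals.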
Once again, Twitter is clearly trying to walk that line.
Twitter hasn’t been too forthcoming about the details, but the likely upshot is that Twitter won’t simply yank a tweet when the system detects it; instead, it will hide the tweet from the mentioned users’ notifications. Twitter CEO Dick Costolo has hinted at this sort of approach in the past, saying that Twitter users have a right to free speech – but Twitter doesn’t always have to be a megaphone for that speech.
Long story short, Twitter is trying. Some may say it’s not enough, and that’s fair, but the company is at least taking steps.
But just how big is Twitter’s problem? Do these new protocols really have a chance of succeeding in the fight against harassment?
In November of 2014, months before Twitter made some of the moves mentioned above, the company partnered with Women, Action, & the Media (WAM!), a nonprofit “dedicated to building a robust, effective, inclusive movement for gender justice in media”, to study the online harassment of women and Twitter’s reaction to it.
Today, that report is ready.
One caveat worth keeping in mind going forward: the report is based on just over 800 reported instances of harassment on the site. All of these instances were reported to WAM! either by the person being harassed (43%) or by someone who witnessed the harassment taking place (57%).
What WAM! found was that the most common type of reported abuse was hate speech (sexist, racist, and homophobic slurs) at 27% of total reports, followed by doxxing (revealing a user’s private information) at 22%. Violent threats made up 12% of all reports, and impersonation tallied 4%.
WAM! says it “escalated” 43% of the instances of harassment it received – meaning it passed them along to Twitter.
Here’s how Twitter responded in those cases:
WAM! collected data on the process and outcome of all 161 tickets opened with Twitter in the three week monitoring period. In 55% of cases, Twitter took action to delete, suspend, or warn the alleged harassing account. Most of Twitter’s actions against alleged harassers were associated with reports of hate speech, threats of violence, and nonconsensual photography.
Was Twitter more likely to take action on some kinds of harassment and not others? In a logistic regression model, the probability of Twitter taking action on reports of doxxing was 20 percentage points lower than tickets involving threats of violence, in cases where WAM! recorded an assessment risk, an odds ratio of 0.32. This is likely due to the common practice of ‘tweet and delete,’ in which harassers temporarily post private, personal information and remove the content before it can be reported and investigated by Twitter.
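To unpack the statistics in that passage: an odds ratio compares odds, p / (1 − p), rather than raw probabilities, so the 20-point gap and the 0.32 odds ratio are two views of the same disparity. A quick illustration with hypothetical action rates (the report doesn’t publish the underlying probabilities, so the 85% baseline below is invented to show one pair of rates consistent with both figures):

```python
def odds(p: float) -> float:
    """Convert a probability into odds."""
    return p / (1 - p)

p_threats = 0.85              # hypothetical action rate on violent threats
p_doxxing = p_threats - 0.20  # 20 percentage points lower, per the report

print(round(odds(p_doxxing) / odds(p_threats), 2))  # 0.33, close to 0.32
```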
The practice of “tweet and delete,” as WAM! calls it, is clearly an obstacle for Twitter when it comes to users self-reporting abuse. Sometimes a harasser will leave a tweet up just long enough for the target to see it. Since Twitter’s reporting mechanisms require providing Twitter with a link to the tweet(s) in question, it’s easy to see why this is a big problem.
“Twitter currently requires URLs and rejects screenshots as evidence; consequently, Twitter’s review process doesn’t address ‘tweet and delete’ harassment, which often involves doxxing. While Twitter updated its reporting system in February 2015 to accept reports of doxxing, there have been no public changes with regard to the evidence it accepts for harassment reports,” says WAM!’s report. “Twitter’s default URL requirement makes it complicated to report harassment that is not associated with a URL, such as exposure to violent or pornographic profile images or usernames via follower/favorite notifications.”
Another issue? What WAM! calls “dogpiling” – where victims are inundated with a barrage of harassing tweets from various accounts. Without a way to report all of those tweets together, the deluge is hard to manage.
WAM! also identified other issues with Twitter’s reporting system – “false flaggers” and “report trolls”.
False flaggers attempt to use Twitter’s mechanisms against the victim.
“This person falsely reports an account for harassment. This person intentionally tries to use Twitter’s policies and the complexity of determining harassment to silence an account. This person may also report accounts falsely to draw reviewers’ attention to themselves and their stance on issues under contention, often as an act of intimidation or warning. They may provide inaccurate contact details.”
Report trolls, meanwhile, try to clog the system with fabricated reports.
“This person performs a character, pretending to have been harassed. Their reports are marked by reductive narratives and stereotypical expressions, and often contain internal indicators such as word play, name choices, etc., that point to the performance aspect. They may provide functioning contact details under their character’s persona in order to lengthen the performance.”
In the end, WAM! concluded that additional policies are needed to combat abuse on Twitter. It offered these suggestions:
More broadly and clearly define what constitutes online harassment and abuse, beyond “direct, specific threats of violence against others” or “threats of violence against others, or promoting threats of violence against others” to increase accountability for more kinds of threatening harassment and behavior. 19% of reports were defined as “harassment that was too complex to enter in a single radio button.” See “Summary of Findings” and page 15 of the report.
Update the abuse reporting interface, using researched and tested trauma-response design methods. Twitter should acknowledge the potential trauma that targets may experience; additionally, connecting users to support resources would go a long way toward inspiring constructive discourse and structural changes. See page 34 of the report.
Develop new policies which recognize and address current methods that harassers use to manipulate and evade Twitter’s evidence requirements. These policies should focus particularly on the “tweet and delete” technique, where harassers share, but quickly delete, abusive comments and information. The problem of evidence currently prevents comprehensive resolution, acknowledgement, and validation for all reports. See page 35 of the report.
Expand the ability for users to filter out abusive mentions that contain “threats, offensive or abusive language, duplicate content, or are sent from suspicious accounts,” to counter the effect of a harassment tactic known as dogpiling – where dozens, hundreds, or sometimes even thousands of Twitter users converge on one target to overwhelm their mentions and notifications. This kind of filtering would be opt-in only, enabling users to decide whether to use it or not. (A rough sketch of what such a filter might look like follows this list.) See “Summary of Findings: Dogpiling” in the report.
Hold online abusers accountable for the gravity of their actions: suspensions for harassment or abuse are currently indistinguishable from suspensions for spam, trademark infringement, etc. This needs to change. Ongoing harassment was a concern in 29% of reports, where reporters mentioned that harassment started more than three weeks before the report. See page 15 of the report.
Diversify Twitter’s leadership. Twitter’s own 2014 report reveals that its company leadership is 79% male and 72% white. Systemic changes in the hiring and retention of diverse leaders will likely expand internal perspectives about harassment, since women – and women of color especially, who are disturbingly absent from that leadership – are disproportionately targeted online.
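As promised above, here’s a rough sketch of the kind of opt-in mention filter WAM! describes. The specific criteria below – a placeholder word list, duplicate detection to blunt dogpiling, and a crude throwaway-account heuristic – are invented for illustration, since neither WAM! nor Twitter has specified the signals:

```python
# Hypothetical opt-in mention filter; the criteria are illustrative
# stand-ins, not Twitter's actual signals.
from dataclasses import dataclass

OFFENSIVE_TERMS = {"example_slur", "example_threat"}  # placeholder list

@dataclass
class Mention:
    text: str
    sender_age_days: int
    sender_followers: int

def keep_mention(m: Mention, seen_texts: set) -> bool:
    """Return False to hide the mention from the user's notifications."""
    lowered = m.text.lower()
    if any(term in lowered for term in OFFENSIVE_TERMS):
        return False  # offensive or abusive language
    if lowered in seen_texts:
        return False  # duplicate content, the signature of dogpiling
    if m.sender_age_days < 7 and m.sender_followers == 0:
        return False  # suspicious throwaway account
    seen_texts.add(lowered)
    return True
```

Because such a filter would be opt-in, a user who wants the unfiltered firehose would simply never enable it.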
Twitter has already begun work on some of these suggestions – for instance, the aforementioned new filtering tools and the algorithm that limits certain tweets’ reach.
But it’s clear that Twitter trolls can game Twitter’s current anti-abuse policies – and quite easily at that.
You can check out the full report here.
How can Twitter curb abuse on its platform? Is there any point at which Twitter would cross the line and stifle free speech – something the company says it’s committed to protecting? Let us know in the comments.
Image via Rosaura Ochoa, Flickr Creative Commons