Nearly everyone has had the experience of saying something they don’t mean, and Twitter is working to help prevent it.
Twitter has struggled with an increasingly toxic environment as trolls and abusive individuals have hijacked the social media platform. It’s not uncommon for well-known, highly visible individuals to take breaks from the platform as a result of the vitriol they experience. Over the last couple of years, Twitter has experimented with a number of options in an effort to combat the problem.
The latest endeavor is a warning system that prompts users when they are about to publish a tweet containing harmful language, giving them the option to edit it first.
The measure was announced via a tweet from the Twitter Support account:
When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
Judging by the replies, many of which contain exactly the kind of language the feature is designed to weed out, it’s safe to say this measure is going to be fairly controversial.