Twitter has announced a tool that prompts users to revise replies containing what it describes as "harmful" language before they are published.
The company said in a tweet from its support account on Tuesday that the new feature would first be introduced as a "limited" experiment. After hitting send, users will be alerted if their reply contains language similar to that in posts that have previously been reported, and given the option to revise the message before it is published.
"When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful", Twitter said.
While the test is part of a broader effort by Twitter to combat hateful posts on the platform, some users did not take well to the announcement, going so far as to describe it as "thought policing".
Let me stick out my hands and you can slap them with a ruler okay
— crafter00 (@crafternut) May 5, 2020
Gab, a self-professed "free-speech" alternative to Twitter, used the opportunity to promote its own platform.
Gab is not, nor will we ever, do this.
— 🔛Gab (@getongab) May 5, 2020
Speak freely: https://t.co/J3RftnOEqv
Others called for the introduction of an edit button. Currently, to edit a tweet, users must delete it and post it again.
Edit feature would be a lot better..........
— K 🌸 🦋 (@klalaaaaaaa13) May 5, 2020
Also, delete messages like WhatsApp.
Just give us an edit button already.
— apaulcalypse how (@wxsniper) May 5, 2020
Y'all need an EDIT button! pic.twitter.com/CO5Ks6iUk2
— Indy ☮️ (@Raisingirl_Indy) May 5, 2020
There was some support for the measure, however, as well as calls for the site to counter "fake news".
I really appreciate that. But what about those ppl who are spreading fake news and haterade on twitter, any action on them?
— Shridhar Appa Barkol (@ShridharBarkol) May 5, 2020
In an interview with Reuters, a Twitter representative said the policy is designed to get users to "rethink" comments before posting, to ensure they are in line with existing guidelines.
“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret”, said Sunita Saligram, Twitter’s global head of site policy for trust and safety.
Twitter's policies do not allow users to post slurs, racist or sexist tropes, or degrading content. Until now, however, monitoring has been done by netizens themselves, who report offensive posts, as well as through the company's own screening technology.