Twitter to Test Warning Prompt on Replies Containing Offensive Messages
Twitter is set to begin an experimental phase in which users will receive a warning prompt before publishing a reply containing harmful or offensive messaging.
The company has said that it will run the “limited experiment” on iOS. The Twitter Support account published a tweet outlining how the experiment will work for iOS users. Twitter hopes the prompt will help users “rethink” their messaging when “things get heated”.
When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
Twitter Support’s messaging doesn’t exactly clarify what will count as offensive or harmful language. However, Twitter does have hate speech policies in place, as well as policies against abusive behaviour.
From the messaging, it doesn’t appear that Twitter is aiming to remove offensive language from its platform entirely. This measure seems more in line with giving users an extra nudge to reconsider their wording before publishing a reply to another tweet.
Instagram has rolled out a similar warning for users posting what could be considered harmful messages. Upon publishing, a pop-up window appears with a message stating that the caption “looks similar to others that have been reported.”
There’s no word on when this experiment will begin or whether it will eventually come to Android. Assuming Twitter sees positive results from the limited experiment, a wider release may be considered.