Twitter to Test Warning Prompt on Replies Containing Offensive Messages

Twitter is set to begin an experiment in which users will receive a warning prompt before they publish a reply containing harmful or offensive language.

The company has said that it will run the “limited experiment” on iOS, and the Twitter Support account published a tweet outlining how it will work. Twitter hopes the prompt will help users “rethink” their replies when “things get heated”.

Twitter Support’s messaging doesn’t clarify exactly what will count as offensive or harmful language. However, Twitter does have hate speech policies in place, as well as policies against abusive behaviour.

From the messaging, it doesn’t appear that Twitter intends to remove offensive language from its platform entirely. Rather, this measure seems aimed at giving users an extra nudge to reconsider their wording before replying to another tweet.

Instagram has rolled out a similar warning for users posting what could be considered harmful messages. Upon publishing, a pop-up window appears with a message stating that the caption “looks similar to others that have been reported.”

There’s no word on when this experiment will begin or whether it will eventually come to Android. Assuming Twitter sees positive results from the limited experiment, a wider release may follow.
