Twitter Strips ‘Misgendering’ and ‘Deadnaming’ From Hateful Conduct Policy

The Twitter logo on a phone in this photo illustration in Washington on July 10, 2019. (Alastair Pike/AFP via Getty Images)

Twitter has modified its “hateful conduct” guidelines, removing a provision that classified the targeted “misgendering” or “deadnaming” of transgender individuals as prohibited harassment.

In 2018, the platform enacted a policy that labeled it a form of harassment when users referred to a transgender person by their biological sex or birth name, practices known as misgendering and deadnaming, respectively.

Prior to the move, Twitter’s guidelines stated the following under the “Slurs and Tropes” subsection: “We prohibit targeting others with repeated slurs, tropes or other content that intends to degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.”

In the updated version, the microblogging service’s guidelines read exactly the same but omit the final sentence specifically mentioning transgender users.

Both the previous and the updated versions of Twitter’s guidelines remain available in online archives.

The updated version still defines “targeted harassment” as behavior that is repeated, unreciprocated, and intended to humiliate or degrade an individual or group, noting that this includes targeting people based on gender, race, religious affiliation, age, disability, or serious disease, among other categories.

Although the Elon Musk-owned company modified the guidelines’ language, the change in wording does not necessarily amount to a change in the policy itself.

Twitter did not immediately respond to a request for comment on Wednesday.

However, the move did stir some controversy among LGBTQ activist organizations, with one group claiming the platform could now become an “unsafe” space for transgender users.

According to GLAAD, an LGBTQ advocacy group that spotted the change this week, the sentence mentioning transgender individuals was still present when Twitter announced its updated guidelines on April 7 but was removed a day later, archives captured by the Wayback Machine show.

Sarah Kate Ellis, the advocacy group’s president and CEO, responded to Twitter’s decision to “covertly roll back” its policy, calling it an example of “how unsafe the company is for users and advertisers alike.”

“This decision to roll back LGBTQ safety pulls Twitter even more out of step with TikTok, Pinterest, and Meta, which all maintain similar policies to protect their transgender users at a time when anti-transgender rhetoric online is leading to real world discrimination and violence,” she said.

Content Moderation

Since Musk, who recently reclaimed his position as the world’s richest man, bought the tech giant for $44 billion last year, the platform has made numerous changes regarding user censorship, account verification, and the restoration of previously banned accounts.

In a series of tweets on April 17, Twitter announced its latest changes to the consequences users face for breaking its rules, a move designed to bring more transparency to enforcement actions for everyone on the platform.

The company said it will start applying content warning labels to some posts that potentially violate the service’s hateful conduct policy. In the past, such posts would have been removed.

“These actions will be taken at a tweet level only and will not affect a user’s account. Restricting the reach of Tweets helps reduce binary ‘leave up versus take down’ content moderation decisions and supports our freedom of speech vs freedom of reach approach,” the company said in its latest content moderation update.

“We may get it wrong occasionally, so authors will be able to submit feedback on the label if they think we incorrectly limited their content’s visibility,” it added. “In the future, we plan to allow authors to appeal our decision to limit a Tweet’s visibility.”

According to its policies, Twitter considers hateful content to be media or text that incites violence against individuals or groups, harasses with references to genocides or lynchings, repeatedly uses slurs and tropes, or dehumanizes people on the basis of religion, gender identity, or race, among other categories.

“We will continue to remove illegal content and suspend bad actors from our platform,” the company said on Monday. “We’re committed to increasing transparency around our moderation actions, and we’ll continue to share updates on our progress.”
