TikTok bans misgendering, deadnaming from its content
TikTok is updating its community guidelines to ban deadnaming, misgendering and misogyny.
The changes, announced Tuesday, are part of a broader update designed to promote safety and security on the platform. The app will also remove content that promotes disordered eating and further restrict content related to dangerous acts.
Last year, a report by GLAAD said TikTok and other top social media sites are all “effectively unsafe for LGBTQ users.”
TikTok said its new guidelines are intended to “further support the well-being of our community and the integrity of our platform. Transparency with our community is important to us, and these updates clarify or expand upon the types of behavior and content we will remove from our platform …”
The updated community guidelines clarify the platform's rules on hateful ideologies, explicitly banning content that targets transgender or nonbinary people "through misgendering or deadnaming," according to the guidelines. Deadnaming refers to calling a transgender person by a name they no longer use.
Media that supports conversion therapy will also not be tolerated on TikTok.
Such content had already been prohibited, TikTok said, but “we’ve heard from creators and civil society organizations that it’s important to be explicit in our Community Guidelines.” The app also recently added a feature allowing users to add pronouns to their profiles.
Similarly, the social media platform said it was already removing content that promoted eating disorders, but the adjusted guidelines now explicitly cover disordered eating as well.
The decision to remove content promoting disordered eating grew out of conversations with experts and reflects the recognition that individuals can experience and engage in "unhealthy eating patterns without having an eating disorder diagnosis."
Along with the new guidelines, TikTok published its most recent quarterly Community Guidelines Enforcement Report. More than 91 million videos — about 1% of all uploaded videos — were removed during the third quarter of 2021 because they violated the guidelines.
Of all videos removed from July to September 2021, about 1.5% were taken down for hateful behavior, which includes hate speech on the basis of race, sexual orientation and gender, among other attributes.