LGBTQ safety continues to suffer across major social media platforms, with most companies receiving their lowest scores ever in an annual assessment of policies affecting LGBTQ users, according to a new report from GLAAD.
The sixth annual Social Media Safety Index (SMSI) evaluates six platforms on their LGBTQ safety, privacy, and expression policies: TikTok, YouTube, X, Facebook, Instagram, and Threads. The 2026 findings show a significant decline overall, with TikTok the only platform whose score held steady compared to the previous year.
Researchers say the results show a widening gap between platforms’ stated community guidelines and the actual protections afforded to LGBTQ users.
Scores hit record lows on most platforms
According to the report, X again ranks last with a score of 29 out of 100, reflecting continued concerns over hate speech and harassment. YouTube follows with a score of 30, down 11 points from last year and the steepest decline of any platform measured.
Meta’s platforms also declined. Instagram scored 41, Facebook 40, and Threads 39, all down from their 2025 levels. TikTok posted the index’s highest score at 56 but showed no improvement.
GLAAD researchers say this decline reflects policy regression, less transparency, and weakening protections for LGBTQ users, especially transgender and gender nonconforming people.
Policy changes raise concerns
The report notes that several recent changes at large tech companies have contributed to the decline in safety scores.
Meta has come under fire for changing its hate speech rules in a way that critics say allows more anti-LGBTQ speech on its platform. The company also made changes to its approach to content moderation, including scaling back its diversity, equity, and inclusion efforts and ending its fact-checking program in the United States.
Meanwhile, YouTube removed gender identity from the list of characteristics protected by its hate speech policy, which GLAAD argues puts LGBTQ users at greater risk of harassment and abuse.
The report claims that both companies are moving away from previously established best practices when it comes to online safety.
Key findings point to broader risks
The report raises concerns about how artificial intelligence is shaping content moderation beyond platform-specific policies. The group warns that automated systems can unfairly suppress LGBTQ voices while failing to consistently remove harmful content.
The researchers also raised concerns about data privacy, noting that major platforms are increasingly using user-generated content to train AI systems, often without a clear consent framework.
The index also highlights a lack of transparency, including limited reporting on moderation practices and workforce diversity data.
GLAAD says these trends make it difficult to assess whether platforms are adequately protecting vulnerable communities.
Offline harm is reflected online
The report connects online safety concerns to broader real-world trends, citing more than 1,000 anti-LGBTQ incidents reported in 2025. It also cites FBI data showing that anti-LGBTQ bias accounted for more than 20% of reported hate crimes in 2024, the third consecutive year at that level.
Researchers argue that online harassment and misinformation often lead to offline harm, especially when extremist content spreads across digital platforms.
Advocacy requires accountability
Sarah Kate Ellis, president and CEO of GLAAD, said major platforms fail to meet basic standards of safety and transparency.
She called on advertisers and users to reconsider their relationships with platforms that do not adequately protect the LGBTQ community.
“Social media companies are failing to meet basic best practices such as content moderation, transparency, data privacy, and workforce diversity,” Ellis said in a statement included in the report. “They continue to prioritize profit over safety.”
Ellis added that LGBTQ creators and users often have to deal with harassment, threats, and misinformation without meaningful support from the platforms.
What happens next
The SMSI recommends strengthening content moderation systems, increasing transparency around enforcement, and making new investments in diversity and inclusion programs. It also calls on platforms to better protect LGBTQ users from targeted harassment while avoiding the suppression of LGBTQ content and expression.
As debates over online safety, regulation, and freedom of expression continue, the report suggests that LGBTQ users continue to be disproportionately affected by policy changes from big tech companies.
So far, TikTok stands out as the only platform that has maintained its previous score, while other platforms continue to decline, raising new questions about how social media companies balance growth, moderation, and user safety in an increasingly polarized digital landscape.
Source: Gayety – gayety.com
