A new survey of Facebook, Instagram, and Threads users reveals widespread concern about safety on Meta’s platforms following recent changes to its moderation policies. Conducted by the advocacy organizations UltraViolet, All Out, and GLAAD, the survey gathered responses from more than 7,000 people who belong to groups that Meta’s own policies classify as protected, including those targeted on the basis of race, ethnicity, gender identity, sexual orientation, disability, religion, or serious illness.
Survey Results Reflect a Decline in Perceived Safety
The survey asked respondents how they perceived Meta’s current efforts to protect users from harmful content. A majority reported feeling less safe since CEO Mark Zuckerberg relaxed enforcement around hate speech and misinformation in January 2025. More than 90 percent said they were concerned about the increased prevalence of harmful or targeted content, and many reported direct exposure to attacks.
One in six participants said they had been targeted with gender-based or sexual violence on a Meta platform. Sixty-six percent reported witnessing harmful content, which the report defined as attacks based on a protected characteristic. Users described encountering slurs, offensive stereotypes, and AI-generated false stories aimed at stirring hostility toward marginalized communities.
Respondents expressed frustration not only with the content itself but also with the changing nature of their feeds. Some said they were being served fewer posts from friends and more from unknown accounts or low-quality sources. One user wrote that they had shifted to alternative platforms like Bluesky and Substack to escape what they described as “obscene faked sexual images” and “commercial ads for products that are crap.”
Internal Discontent at Meta Over Policy Shift
The changes to Meta’s hateful conduct policy have not only affected users but also sparked internal disagreement within the company. In January, 404 Media reported that some Meta employees expressed anger over the relaxed standards. Leaked internal conversations revealed concern from public policy staff who believed the shift enabled content that previously violated platform guidelines.
According to messages reviewed by 404 Media, employees objected to specific examples of content that would now be permitted, such as calling LGBTQ+ individuals “mentally ill” or describing immigrants as “trash.” Not all staff objected, however: one policy team member described the rollback as an attempt to curb “mission creep” and re-center free expression. “These changes allow for counterspeech and more open dialogue,” the employee wrote, while acknowledging the move could be offensive to many.
Despite internal resistance, Meta leadership has continued to frame the changes as a move toward transparency and free speech. In a January post, Zuckerberg said the company would eliminate what he called “out-of-touch” restrictions on topics like immigration and gender. However, leaked documents and employee feedback suggest the practical outcome has been an increase in hateful and inflammatory content across Meta’s platforms.
Advocacy Groups Call for Action, Meta Remains Silent
In response to the survey results, the report’s authors are calling on Meta to take corrective action. They recommend that the company hire an independent third party to evaluate the impact of the January policy changes and reinstate the moderation protocols that were previously in place. The goal, according to the report, is to prevent further harm to users who already face disproportionate risks online.
Several respondents described how the policy changes have led to personal harassment. One woman reported being told she needed to be “properly corrected by a real man” because of her support for gender equality and LGBTQ+ rights. Another said she was stalked and targeted with explicit messages due to her political views. These accounts underscore how digital harassment can escalate into real-world harm.
Meta has not issued a formal response to the report or addressed the survey’s findings. The company has also not indicated whether it plans to review or reverse any of the policy changes made earlier this year. In the meantime, users and advocacy organizations continue to raise concerns that the current environment on Meta’s platforms is not only unwelcoming but increasingly unsafe for vulnerable communities.