...

Meta Finds New Way to Embarrass Users by Sharing Their AI Chats

Meta has drawn criticism over a feature in its AI assistant app that publicly displays users’ chatbot conversations. The “Discover” tab, part of Meta’s AI interface, has allowed anyone to view prompts and conversations submitted by users, including potentially sensitive and personal information. After public scrutiny and media coverage, Meta has begun making changes to clarify how chats are shared, though many user conversations remain publicly accessible.

Public Display of Private Conversations

The Discover tab in Meta’s AI assistant app surfaced a range of user-submitted content, including both AI-generated images and full-text conversations. While some of the content was relatively benign, such as creative prompts and art requests, other conversations revealed deeply personal information. These included chats about medical conditions, mental health, legal concerns, and personal relationships.

Because Meta’s AI tool is linked to a user’s Facebook or Instagram account, conversations could in many cases be tied back to identifiable individuals. Journalists and researchers verified that users were sharing private details without realizing they would be made public. Some chats appeared to include full names, references to relatives or employers, and specific geographic locations.

Security researchers, including Rachel Tobac, CEO of SocialProof Security, raised alarms about the implications of this public feed. Tobac reviewed multiple posts involving confidential topics, including health diagnoses, financial information, and even discussions of potential criminal liability. “When a tool doesn’t work the way a person expects, there can be massive personal security consequences,” she said, adding that users appeared unaware their prompts were being published.

Meta Begins Rolling Back Visibility of Sensitive Content

Following coverage from multiple news outlets and commentary from privacy advocates, Meta appears to have limited the types of content displayed in the Discover tab. By Monday, the majority of visible posts involved image generation, with fewer full-text conversations being shown. However, by the following day, text prompts had reappeared, along with audio clips and conversations about serious legal and medical topics.

One example included a user asking for legal advice about domestic violence charges in Indiana. Another detailed a person’s experience with depression, including the comment, “just life hitting me all the wrong ways daily.” Some users left comments expressing surprise that their messages had been made public. One asked, “Was this posted somewhere because I would be horrified?”

Meta has not issued a detailed public statement on the incident. However, it has reportedly made changes to better notify users when their content may be displayed in the Discover feed. Critics argue that this does not go far enough and are calling for the feed to be disabled entirely or made opt-in by default. Tobac emphasized that AI tools must be designed with a clear expectation of privacy and warned that failing to do so can lead to unintended exposure of sensitive data.

Broader Criticism of Meta’s AI Strategy

The controversy comes amid broader criticism of Meta’s approach to AI integration across its platforms. In recent months, Meta has faced scrutiny for allowing AI-generated content to flood its apps, including misinformation, scam advertisements, and generative tools with little oversight. Some experts argue that the company has prioritized AI engagement over user safety and content moderation.

Meta’s AI rollout is part of a larger strategy to embed generative tools into Facebook, Instagram, and WhatsApp. The company has promoted its AI assistant as a way to boost creativity and communication, but observers point to a lack of guardrails. As user-generated prompts are shared across public feeds, the potential for privacy violations grows—particularly when users do not clearly understand how their data is being used.

While Meta has adjusted the Discover tab in response to public backlash, the episode underscores the risks of deploying AI features without sufficient transparency. The integration of chatbot prompts into public-facing features raises questions about informed consent, default settings, and user control. As generative AI becomes more deeply embedded in social platforms, privacy advocates are urging companies to take greater responsibility for how these tools are designed and how user data is managed.