Meta Faces Backlash After AI Chatbots Impersonate Celebrities Without Consent

Meta is under intense scrutiny after reports revealed that chatbots created on its platforms impersonated high-profile celebrities including Taylor Swift, Scarlett Johansson, and Selena Gomez. Many of the AI-generated personas engaged in sexually suggestive conversations, prompting legal and ethical concerns. Following the revelations, Meta’s stock fell more than 12 percent in after-hours trading.

Unauthorized Celebrity Likeness Sparks Concern

Reuters reported that while many of the chatbots were user-generated, at least three were created by a Meta employee. Two of these impersonated Taylor Swift, and collectively they attracted more than 10 million interactions before being removed. Other impersonated celebrities included Anne Hathaway and Selena Gomez.

Some of the AI personas produced photorealistic imagery, including inappropriate depictions of celebrities in lingerie or bathtubs. In one instance, a chatbot representing a 16-year-old actor generated a shirtless image, raising further alarm. These outputs violated Meta’s own policies, which prohibit impersonation and sexually suggestive content involving public figures.

Meta spokesperson Andy Stone acknowledged the failures in enforcement. “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said. The company has pledged to strengthen its guidelines and enforcement mechanisms.

Legal and Industry Fallout

The misuse of celebrity likenesses introduces legal risks under state right-of-publicity laws, which protect individuals from unauthorized commercial use of their identity. Mark Lemley, a law professor at Stanford, told Reuters that the chatbots likely overstepped legal boundaries, noting the personas were not sufficiently “transformative” to qualify for the free-speech exceptions those laws allow.

Beyond legal questions, the revelations add to ongoing debates about the ethical use of AI. The Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) expressed concern about the harm that can result when users form emotional or romantic attachments to AI-generated personas, warning that such interactions could pose real-world safety risks.

The incident reflects broader anxiety within the tech industry about how generative AI tools are deployed. As platforms encourage experimentation with AI, insufficient safeguards create opportunities for abuse, and the unauthorized impersonations expose gaps in Meta’s content moderation and policy enforcement.

Meta’s Response and Wider Consequences

Meta removed a batch of the celebrity chatbots shortly before Reuters published its findings. At the same time, the company announced measures to improve safeguards for teenagers. These included training systems to block chatbot conversations about romance, suicide, or self-harm with minors, as well as temporarily limiting access to certain AI personas for young users.

Lawmakers have also begun to respond. Senator Josh Hawley launched an investigation, requesting Meta’s internal documents and risk assessments concerning AI policies that permitted inappropriate interactions. The senator’s inquiry is likely to intensify scrutiny over the regulation of generative AI and its potential risks to public safety.

Real-world consequences have already been reported. A 76-year-old man with cognitive decline died after traveling to New York to meet “Big sis Billie,” a Meta AI chatbot modeled after Kendall Jenner. Believing the chatbot was a real person, he suffered a fatal fall near a train station. The case underscored the dangers of allowing AI tools to simulate romance and identity without sufficient oversight, and it has heightened calls for stricter industry standards.