In a move that has ignited fierce debate within the tech industry, Meta Platforms Inc. has come under scrutiny for reportedly permitting unauthorized AI chatbots mimicking celebrities on its platforms, including Facebook, Instagram, and WhatsApp. These chatbots, created by users leveraging Meta’s AI tools, featured the names, images, and personas of stars like Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, all without the celebrities’ consent. The issue came to light through an investigative report by Reuters, which revealed that dozens of such bots engaged in flirty or sexually suggestive interactions, raising alarms about privacy, consent, and platform governance.
The chatbots were not mere novelties; some generated explicit content, including lingerie-clad images and intimate conversations that blurred the lines between entertainment and exploitation. In one particularly troubling case, a bot impersonating a 16-year-old actor included shirtless depictions, prompting immediate concerns over child safety. Meta’s own guidelines prohibit sexual content and unauthorized impersonations, yet these bots proliferated until the company intervened, removing them after the Reuters exposé.
The Ethical Quagmire of AI Impersonation
Industry experts argue this incident exposes deeper flaws in how tech giants handle generative AI. Legal scholars point to potential violations of the right of publicity, a doctrine protecting individuals from unauthorized commercial use of their identity. “This isn’t just about fun interactions; it’s about exploiting likenesses for engagement metrics,” noted a source familiar with AI ethics, echoing sentiments in a detailed analysis by AIC. The fallout could invite lawsuits, similar to past cases where celebrities sued over deepfakes.
Moreover, the episode underscores Meta’s uneven track record with AI moderation. Last year, the company scrapped an earlier experiment with authorized celebrity chatbots featuring influencers like MrBeast and Paris Hilton, as reported by The Information, due to lackluster user interest. The subsequent pivot to user-generated bots, however, lacked the consent safeguards built into that program, allowing unauthorized versions to flourish.
Meta’s Response and Broader Implications
In response, Meta spokesperson Andy Stone acknowledged the lapses, stating the company acted swiftly to remove the offending bots. Yet critics, including those cited in a Variety article, question why proactive monitoring failed. The incident has drawn regulatory eyes, with potential oversight from bodies like the Federal Trade Commission, which has ramped up scrutiny of AI-driven misinformation and privacy breaches.
For industry insiders, this saga highlights the perils of democratizing AI tools without robust ethical frameworks. As generative technologies advance, platforms like Meta must balance innovation with accountability, or risk eroding user trust. Posts on X (formerly Twitter) reflect public outrage, with users decrying the unauthorized use of likenesses as a privacy invasion, though such sentiments remain anecdotal amid calls for stricter laws.
Looking Ahead: Regulatory and Technological Fixes
Experts suggest Meta could implement detection algorithms that flag unauthorized AI personas preemptively, before they reach users rather than after press reports. Comparisons to similar controversies, such as unauthorized deepfakes on other platforms, indicate a growing need for industry-wide standards. A report from The Verge on Meta’s prior chatbot shutdowns underscores how quickly such features can backfire.
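To make that suggestion concrete, the sketch below shows one simple form such screening could take: fuzzy-matching a new chatbot’s display name against a watchlist of public figures and routing near-matches to human review before the bot goes live. This is a minimal illustration, not Meta’s actual system; the watchlist, threshold, and function names are assumptions, and a production pipeline would also compare avatar images and bio text against known likenesses.

```python
# Minimal sketch of preemptive persona screening (illustrative only).
from difflib import SequenceMatcher

# Hypothetical watchlist; a real deployment would draw on a maintained
# database of public figures and registered rights holders.
PUBLIC_FIGURES = [
    "Taylor Swift",
    "Scarlett Johansson",
    "Anne Hathaway",
    "Selena Gomez",
]

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in the range [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_persona(display_name: str, threshold: float = 0.85) -> list[str]:
    """Return watchlist names the proposed display name resembles.

    Any non-empty result would send the persona to human review
    before the chatbot is published, instead of after user reports.
    """
    return [
        name for name in PUBLIC_FIGURES
        if similarity(display_name, name) >= threshold
    ]

# Lightly obfuscated names still trip the filter.
print(flag_persona("Tayl0r Swift"))  # ['Taylor Swift']
print(flag_persona("Jane Doe"))      # []
```

Even a crude filter like this catches character substitutions that exact-match rules miss, which is why experts frame preemptive flagging, paired with human review, as a low-cost first line of defense.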
Ultimately, this controversy may accelerate discussions on AI governance, pushing companies to prioritize consent and transparency. As one analyst put it, the real cost isn’t just legal—it’s the potential damage to Meta’s reputation in an era where digital authenticity is paramount.

