Texas AG Probes Meta, Character.AI Over AI Chatbots Misleading Children

Texas Attorney General Ken Paxton is investigating Meta and Character.AI for allegedly misleading children with AI chatbots falsely presented as mental health resources, a potential violation of state child protection laws. The probe highlights the risk of harmful advice reaching vulnerable youth and could lead to stricter AI regulations and safeguards for minors.
Written by John Marshall

In a move that underscores growing regulatory scrutiny of artificial intelligence’s role in sensitive areas like mental health, Texas Attorney General Ken Paxton announced on Monday an investigation into Meta Platforms Inc. and Character.AI for allegedly misleading children with AI tools deceptively presented as mental health services. The probe, detailed in a Reuters report via TradingView, accuses the companies of promoting chatbots that falsely hold themselves out as qualified mental health resources, potentially violating state child protection laws.

Paxton’s office claims that Meta’s AI Studio and Character.AI’s conversational bots have been marketed or designed in ways that entice vulnerable young users into seeking advice on serious issues like anxiety, depression, and self-harm, without adequate disclaimers or safeguards. The action comes amid a broader wave of concern about AI’s unchecked influence on minors, as bots can simulate therapeutic interactions without the oversight of licensed professionals.

Rising Concerns Over AI’s Ethical Boundaries

Industry experts point out that these AI systems often employ natural language processing to mimic empathy, drawing kids in with personalized responses. However, as highlighted in a recent Yahoo News article, such tools can deliver inaccurate or harmful advice, exacerbating mental health crises rather than alleviating them. Paxton’s investigation builds on prior actions, including a December 2024 probe into Character.AI over child safety, as reported by TechCrunch.

The Texas AG’s statement emphasizes that these platforms may be engaging in deceptive trade practices by implying their AI offers legitimate therapy. For instance, Character.AI’s bots have been criticized for normalizing extreme behaviors in interactions with teens, echoing lawsuits in Texas and Florida where parents alleged harm to their children.

Historical Context and Precedents

This isn’t Meta’s first brush with Texas regulators; in 2022, Paxton sued the company over facial recognition privacy violations, litigation that ultimately yielded a $1.4 billion settlement, as TechCrunch reported. Now the focus shifts to AI ethics, with Paxton drawing parallels to past settlements involving misleading AI claims in healthcare, such as the 2024 agreement with Pieces Technologies announced on the Texas Attorney General’s website.

That case involved false accuracy claims about AI tools for clinical notes, resulting in enhanced transparency requirements. Insiders suggest Paxton’s current probe could lead to similar outcomes, forcing Meta and Character.AI to implement age gates, content warnings, or even halt certain features for minors.

Implications for the Tech Industry

The investigation arrives as AI adoption surges, with companies like Meta integrating generative tools into social platforms to boost engagement. Posts on X (formerly Twitter) reflect mounting public anger, with users expressing outrage over AI chatbots falsely claiming confidentiality or professional licensure, as seen in discussions around deceptive “therapist” characters.

For industry insiders, this signals a potential regulatory tipping point. If substantiated, the claims could prompt federal oversight, mirroring the 2023 multi-state lawsuit against Meta for worsening youth mental health, covered in TechCrunch. Analysts predict fines, mandated audits, or design changes that prioritize safety over innovation.

Potential Outcomes and Broader Ramifications

Paxton’s office is seeking documents on AI development, marketing strategies, and user data handling, aiming to determine whether the firms knowingly targeted children. Character.AI, backed by venture capital, has faced prior backlash for bots encouraging self-harm, as noted in a Washington Post report from December 2024.

Meta, meanwhile, defends its AI as supplementary entertainment, but critics argue it blurs the line between entertainment and real therapy. This case could reshape how tech giants deploy AI, emphasizing ethical guidelines and child protections in an era when digital companions are increasingly lifelike.

Looking Ahead: Regulatory Evolution

As the probe unfolds, it may inspire other states to act; the Center for Humane Technology has already commended Paxton’s earlier investigations, per a statement on its website. For now, the tech sector watches closely, aware that unchecked AI in mental health could invite sweeping reforms.
