BEIJING—China’s regulators are drawing a hard line on artificial intelligence companions, proposing rules that ban chatbots from delving into suicide, gambling or other topics deemed to unduly sway users’ emotions. The draft measures, unveiled by the Cyberspace Administration of China (CAC), target services mimicking human interaction amid a surge in AI-driven emotional bonds that officials view as risky.
The rules, open for public comment until January 25, require providers to implement safeguards against overuse, monitor for addiction and ensure content aligns with national security and socialist values. They come as startups like Minimax and Z.ai, creators of popular apps such as Talkie and Xingye, pursue Hong Kong IPOs, spotlighting the sector’s rapid growth and regulatory scrutiny. CNBC reports the proposals explicitly prohibit AI from generating content that induces negative emotions or promotes self-harm.
Providers must label AI interactions as artificial, set daily usage limits and conduct security assessments before launch. Minors face stricter protections, including parental controls and bans on addictive features. The CAC’s move reflects broader concerns over AI’s psychological impact, echoing past crackdowns on gaming and social media.
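To make those obligations concrete, the sketch below shows how a provider might wire up two of them: labeling every reply as machine-generated, and gating minors behind parental controls and a daily usage cap. This is a minimal, hypothetical illustration; the class and field names are assumptions, not terms from the CAC draft.

```python
from dataclasses import dataclass
from datetime import date

AI_DISCLOSURE = "[This reply was generated by an AI, not a human.]"

@dataclass
class UserProfile:
    user_id: str
    is_minor: bool
    parental_controls_enabled: bool = False
    daily_limit_minutes: int = 120      # illustrative default, not a CAC figure
    minutes_used_today: int = 0
    usage_date: date = date.today()

def can_chat(user: UserProfile) -> bool:
    """Gate access: minors need parental controls on; nobody exceeds the daily cap."""
    if user.usage_date != date.today():          # reset the meter each day
        user.usage_date, user.minutes_used_today = date.today(), 0
    if user.is_minor and not user.parental_controls_enabled:
        return False
    return user.minutes_used_today < user.daily_limit_minutes

def label_reply(model_output: str) -> str:
    """Prepend the mandated disclosure so interactions are marked as artificial."""
    return f"{AI_DISCLOSURE}\n{model_output}"
```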
Emotional Safeguards Take Center Stage
The draft distinguishes between ‘anthropomorphic’ AI (chatbots with human-like personalities) and general models, applying to public-facing services that form ‘emotional attachments.’ Firms like Zhipu AI, whose models power some companions, must prevent outputs encouraging suicide or gambling. ‘AI products should not induce users to generate or disseminate content harmful to the physical and mental health of users,’ the rules state, per Reuters.
Regulators demand real-time monitoring and rapid response to risky interactions, with data protection spanning the full product lifecycle. Violations could trigger app store removals or fines, building on China’s 2023 interim AI rules mandating ‘core socialist values.’ Posts on X highlight public debate, with users noting the rules’ focus on ‘emotional influence’ as a novel frontier in tech oversight.
This builds on prior efforts: In 2023, Beijing required AI generators to censor sensitive topics like Tiananmen Square, enforcing political alignment. Now, emotional realms enter the fold, with bans on content ‘endangering national security’ or promoting violence.
IPO Ambitions Meet Regulatory Headwinds
Minimax, valued at over $2.5 billion, filed for a Hong Kong listing this month, touting Talkie, a hit AI companion with millions of users. Z.ai, behind Xingye, followed suit. Both rely on advanced large language models to simulate empathy and relationships. Yet the timing underscores a tension: prosperity amid tightening controls. Bloomberg notes the rules aim for ‘ethical, secure and transparent’ services.
Industry insiders see parallels to gaming regulations, where playtime caps curbed youth addiction. AI firms must now deploy algorithms detecting emotional dependency, prompting breaks or referrals to professionals. ‘Firms shall establish mechanisms to prevent addiction risks,’ the draft mandates, as covered by The Economic Times.
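The draft does not prescribe how dependency detection should work. One plausible approach, sketched here with hypothetical usage statistics and thresholds of my own invention, is a heuristic that watches session counts and late-night activity, then prompts a break or surfaces a referral to professional help.

```python
from dataclasses import dataclass

@dataclass
class UsageStats:
    sessions_today: int
    minutes_today: int
    late_night_sessions: int   # sessions between midnight and 5 a.m.

# Illustrative thresholds; a real system would tune these empirically.
MAX_SESSIONS, MAX_MINUTES, MAX_LATE_NIGHT = 10, 180, 2

def dependency_action(stats: UsageStats) -> str | None:
    """Return an intervention message when usage suggests emotional dependency."""
    if stats.late_night_sessions > MAX_LATE_NIGHT:
        return ("You've been chatting late at night. If you're struggling, "
                "consider speaking with a counselor or someone you trust.")
    if stats.sessions_today > MAX_SESSIONS or stats.minutes_today > MAX_MINUTES:
        return "You've been chatting for a while. How about a short break?"
    return None   # no intervention needed
```

A production system would likely pair such usage heuristics with model-based classifiers over conversation content, but the intervention pattern, detect and then interrupt, would look much the same.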
X discussions reveal mixed sentiment: some praise protections for vulnerable users, while others decry overreach. One post from Evelyn Cheng of CNBC flagged the limits on emotional manipulation, amplifying industry buzz.
Broader Censorship Continuum
These proposals extend China’s AI governance framework, first outlined in 2023 with requirements for watermarking generated content and bias audits. Emotional AI now joins politically sensitive areas under strict review. The CAC’s nine prior generative AI rules targeted deepfakes and misinformation; this iteration addresses interpersonal dynamics.
For providers, compliance means redesigning models—training data scrubbed of taboo topics, outputs filtered in real time. Global players like ByteDance’s Doubao face similar mandates. H2S Media details requirements for mental health safeguards and value-aligned training.
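What ‘outputs filtered in real time’ looks like in practice is left to providers. A common pattern, sketched below with stand-in keyword rules, is to screen each generated reply before it reaches the user and substitute a safe refusal plus a referral when a banned topic surfaces; an actual deployment would use trained classifiers with far broader coverage, not a regex list.

```python
import re

# Illustrative banned-topic patterns; real systems must also catch
# euphemisms, misspellings and homophones.
BANNED_PATTERNS = {
    "self_harm": re.compile(r"\b(suicide|self[- ]harm)\b", re.IGNORECASE),
    "gambling": re.compile(r"\b(gambling|casino|betting)\b", re.IGNORECASE),
}

SAFE_REPLY = ("I can't discuss that topic. If you're going through a hard time, "
              "please reach out to a mental-health professional or crisis line.")

def filter_output(model_reply: str) -> tuple[str, str | None]:
    """Screen a model reply; return (text to show, flagged topic or None)."""
    for topic, pattern in BANNED_PATTERNS.items():
        if pattern.search(model_reply):
            return SAFE_REPLY, topic      # block and record the topic for auditing
    return model_reply, None              # pass through unmodified
```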
Enforcement looms large: Past violators like Baidu’s Ernie Bot endured scrutiny for hallucinated facts. Startups risk delays in monetization, with IPO filings demanding regulatory disclosures.
Global Ripples and Industry Responses
Abroad, the rules draw comparisons to U.S. debates on AI safety, though China’s approach emphasizes state control. The EU AI Act similarly tiers risks but lacks clauses specific to emotional AI. On X, Kyle TrainEmoji cited China’s mental health provisions as more thorough than U.S. efforts.
Chinese developers express cautious optimism. A Zhipu spokesperson told Reuters the firm supports ‘healthy development.’ Minimax has invested in safety features, per its prospectus. Yet analysts warn of innovation chills, echoing gaming sector slowdowns post-2021 caps.
The draft’s 30-day comment period invites stakeholder input, potentially softening edges. Still, Beijing’s track record suggests firm implementation by mid-2025.
Implications for AI Companions’ Future
Companion apps like Talkie, blending romance simulation with therapy-like chats, exploded in popularity post-ChatGPT. Revenue models hinge on subscriptions and virtual gifts; regulations could cap engagement time, hitting profits. MobileAppDaily details the draft’s suicide-related safeguards and protections for minors.
Tech giants adapt swiftly: Alibaba’s Qwen and Tencent’s Hunyuan already incorporate censorship layers. Smaller players may consolidate, favoring those with state ties. X users speculate on black-market workarounds, but VPN crackdowns limit feasibility.
Longer term, this positions China as a pacesetter in emotional AI governance, influencing global standards. As CAC chair Zhuang Rongwen stated in prior speeches, AI must serve ‘humanity’s common values’—filtered through national priorities.
Stakeholder Voices and Path Forward
Psychiatric Times warns of chatbot risks to children, aligning with the CAC’s minor-focused rules. An open letter there urges global regulation, citing the harms of emotional dependency. In China, state media frames the draft as user protection, not censorship.
For insiders, key watchpoints include the final rule text after the January comment deadline, enforcement pilots and the impact on Minimax and Z.ai IPO valuations. Compliance costs, spanning audits and monitoring technology, could run into the millions annually for at-scale players.
Beijing’s strategy balances innovation with control, nurturing AI leadership while mitigating social risks. As one X post quipped, ‘China says no to suicide and gambling AI chatbots’—a stark reminder of tech’s subservience to policy.