China Acts to Limit AI Chatbots' Impact on Suicide, Gambling, and Abuse
China is preparing new regulations to prevent artificial intelligence-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration of China released the proposals in draft form on Saturday. The regulations target what authorities call “human-like interactive AI services”: AI systems that mimic human personality traits and form emotional connections with users through text, images, audio, or video. The public can submit feedback on the draft regulations until January 25. Once finalized, the measures will apply to AI products and services available to the public in China.
Legal experts say the proposal marks a significant shift in how AI is regulated, with Winston Ma stating that the rules would be the world’s first attempt to regulate AI with human or anthropomorphic characteristics. Unlike the generative AI regulations China implemented in 2023, Ma said, the new draft “highlights a leap from content safety to emotional safety.” The move comes as Chinese companies race to build AI companions, digital celebrities, and chatbots designed to form emotional bonds with users.
The proposed regulations set strict limits on what AI chatbots may say and do. According to the draft:
- AI chatbots are prohibited from generating content that promotes suicide or self-harm, or that uses verbal violence.
- If a user explicitly mentions suicide, the company must ensure a human takes over the conversation and promptly contacts the user’s guardian or a designated individual.
- AI systems are prohibited from generating content related to gambling, obscenity, or violence.
- Minors will need guardian consent to use AI for emotional companionship, and time limits must be placed on such use.
- Platforms must be able to determine whether a user is a minor, even if the user does not disclose their age.
The document also includes further safeguards. AI services must remind users after two hours of continuous interaction. Platforms with more than one million registered users or over 100,000 monthly active users will be required to undergo security assessments. At the same time, the draft encourages the use of human-like AI in areas such as “cultural dissemination and elderly companionship,” indicating that officials still see benefits in the technology when it is used responsibly. The proposal follows recent Hong Kong IPO filings by two prominent Chinese AI chatbot startups, Z.ai and Minimax.
Concerns about AI’s influence on human behavior are growing worldwide. In September, OpenAI CEO Sam Altman said that handling conversations about suicide is one of the company’s most difficult challenges. Earlier this year, a US family sued OpenAI following the suicide of their teenage son. OpenAI recently announced it is seeking a “Head of Preparedness” to study AI risks, including mental health impacts.