China issues draft rules to regulate AI with human-like interaction
China's draft rules require AI chatbots to monitor emotional states, warn against excessive use, and intervene in addiction cases to reduce psychological harm, affecting over 515 million users.
- On Saturday, China's Cyberspace Administration proposed draft rules to regulate human-like interactive AI services offered publicly in China, targeting AI products simulating human personality via text, images, audio or video.
- Industry observers flagged the rapid growth of AI companions and digital celebrities developed by Chinese firms, with Z.ai and Minimax filing for Hong Kong IPOs this month.
- Specific provisions ban a wide range of harmful content, require tech providers to have humans intervene and notify guardians if suicide is mentioned, and mandate guardian consent with usage limits for minors.
- Regulators set a Jan. 25 deadline for public comments on the draft, which requires two-hour usage reminders and security assessments for chatbots with more than 1 million registered users or over 100,000 monthly active users.
- Seen as a global first, the draft targets emotional risks rather than content risks alone; expert Winston Ma said it marks the world's first attempt to regulate AI with anthropomorphic traits.
66 Articles
China is preparing rules to limit the actions of artificial intelligence (AI) chatbots capable of influencing emotions in ways that lead to suicide, self-harm or other harmful behavior.
China drafts stricter rules to regulate AI for emotional interaction
China's cyber regulator proposed draft rules to oversee AI simulating human personalities and emotional interaction, requiring user safety measures, addiction monitoring, data protection, algorithm review, and banning harmful content or behavior.
Coverage Details
Bias Distribution
- 46% of the sources are Center