Artificial Intelligence: China Plans Rules to Protect Children and Tackle Suicide Risks
9 Articles
China Tightens Rules On AI Firms To Shield Children
China is moving to put some firm guardrails around artificial intelligence. And this time, the focus is clear: children’s safety and harmful chatbot advice. With AI chatbots popping up everywhere, regulators are asking a simple question: what happens when machines start influencing emotions, decisions, and mental health? Under draft rules released by the Cyberspace Administration of China (CAC), AI systems would be barred from offering content lin…
AI regulation: China proposes new rules over child safety fears, suicide risks
In the fast-moving age of artificial intelligence, China has come forward with new rules for AI firms, aiming to protect young minds from online abuse and from chatbots that promote self-harm and violence...
Artificial intelligence has been a breakthrough in many respects, allowing users to increase their productivity. However, the technology is far from perfect and sometimes responds by encouraging suicide or violence. China wants to tackle this at the root and is the first country to draft rules to prevent AI from emotionally manipulating users. AI becomes a problem when it encourages users to harm those around them, to self-harm, or even to take their own lives. I…
China is tackling head-on the mental-health risks posed by AI chatbots. Authorities have proposed a series of very restrictive rules that could severely limit the capabilities of ChatGPT and other bots.
China Proposes New AI Rules To Protect Children And Restrict Harmful Chatbot Advice
China has proposed new regulations for artificial intelligence that would require safeguards for children and prohibit chatbots from offering advice related to self-harm or violence, as authorities move to address safety concerns tied to the rapid growth of AI services. The draft rules were published over the weekend by the Cyberspace Administration of China and would apply to AI products and services operating in China once finalised. The propo…
Coverage Details
Bias Distribution
- 67% of the sources are Center