China proposes new rules for governing artificial intelligence ethics

China has published draft rules to strengthen the ethical oversight of artificial intelligence. The proposal would require universities, companies, and research institutions to set up ethics committees to review AI projects, focusing on risks such as algorithmic discrimination, data misuse, and threats to safety.

China has released draft rules that would set up a national system for managing the ethics of AI. The proposal, called the ‘Artificial Intelligence Science and Technology Ethics Management Service Measures (Trial)’, is now open for public consultation.

The draft is meant to address concerns that AI systems—such as advanced algorithms, chatbots, or autonomous decision-making tools—could create risks for people’s health, dignity, safety, and social stability. It aims to make sure AI is developed and used in a way that is fair, transparent, and responsible.

Under the new rules, universities, research institutes, companies, and hospitals that work with AI would be required to create their own ethics committees. These committees would review projects before they start, checking for issues like discrimination in algorithms, misuse of data, or risks to public safety. If an institution cannot set up its own committee, it could turn to special ‘ethics service centers’ established by local governments.

The proposal lays out several levels of review depending on the risk of a project. Low-risk projects could go through a simplified process, while high-risk ones, such as AI systems that can influence public opinion, affect human emotions, or make autonomous decisions in safety-critical areas, would need expert-level reviews organised by government departments. In emergencies, reviews would be fast-tracked to ensure that urgent projects can still move forward responsibly.

To keep oversight consistent, institutions would also be required to register their ethics reviews and annual reports in a national database. The Ministry of Science and Technology would lead the system at the national level, with the Ministry of Industry and Information Technology and local governments sharing responsibility for enforcement.

The rules also encourage the development of standards for AI ethics, training programs to build expertise in the field, and public education campaigns to raise awareness. Violations of the rules could result in penalties under existing Chinese laws on science and technology.

If adopted, this framework would mark one of China’s most comprehensive attempts to manage the ethical risks of AI, focusing on both technical safeguards and social impacts.