China introduces trial ethics rules for AI research and development
China has issued trial measures establishing ethics review procedures for AI-related scientific and technological activities, covering risk assessment, oversight, and institutional responsibilities.
China’s Ministry of Industry and Information Technology, together with nine other government departments, has issued trial measures setting out how ethics reviews should be conducted for AI research and development.
The rules apply to a wide range of AI-related activities, including scientific research and technology development, where there may be risks to areas such as human rights, public safety, health, or the environment.
The measures require that ethical considerations be integrated throughout the lifecycle of AI activities. They outline principles such as respect for human dignity, fairness, transparency, privacy protection, and risk control.
Organisations carrying out AI-related work, including universities, companies, and research institutes, are required to establish internal ethics review mechanisms. This includes setting up dedicated ethics committees responsible for assessing projects.
The framework also allows local authorities to create specialised ethics review centres. These centres may provide services such as evaluations, training, and advisory support, but may not conduct both the initial review and any subsequent re-review of the same project.
The document defines several review procedures, including standard, simplified, and emergency processes. Reviews are generally expected to be completed within 30 days, with shorter timelines for urgent cases.
Certain categories of AI systems are subject to additional scrutiny. These include systems that influence human behaviour, affect public opinion, or involve high levels of automation in safety-critical contexts.
The measures also include provisions for monitoring risks, developing standards, supporting smaller enterprises, and promoting training and public awareness in AI ethics.
According to the document, violations of these rules may be addressed under existing laws, including those related to cybersecurity, data protection, and scientific research. The measures take effect upon issuance.
