China moves to tighten rules on using chat logs to train AI systems

Chinese regulators have proposed new restrictions on how AI companies collect and use chat log data for model training, signalling a stronger emphasis on user consent, data protection, and safeguards for minors while continuing to support the development of interactive AI services.

China is considering new regulatory measures that would significantly limit how artificial intelligence systems are trained using user chat logs. The draft rules, published by the Cyberspace Administration of China (CAC) for public consultation, would require AI platforms to obtain explicit user consent before using conversational data to improve their models or sharing such data with third parties.

According to the proposal, providers of “human-like” interactive AI services, including chatbots and virtual companions, would be required to clearly inform users when they are interacting with an AI system. Users would also gain the right to access and delete their chat histories. Any use of conversation data for training would be conditional on affirmative consent, marking a shift away from treating user interactions as training material by default.

The draft introduces additional protections for children. For minors, AI providers would need consent from a parent or guardian before sharing chat data with third parties, and guardians would have the right to request deletion of a child’s conversation history. The measures are open for public feedback until late January 2026.

Chinese authorities framed the proposal as part of a broader effort to balance innovation with safety and governance. While encouraging the development of interactive AI, the CAC said it intends to apply “prudent, tiered supervision” to prevent misuse and loss of control over sensitive data.

Industry analysts note that the restrictions could slow certain forms of AI development, particularly techniques that rely heavily on real-time human feedback from user conversations. At the same time, they argue that the move reflects Beijing’s wider focus on national security and the collective public interest, signalling that some categories of user data are too sensitive to be freely repurposed for training. Others interpret the draft less as a brake on innovation and more as a directional signal, encouraging AI development within clearer boundaries around transparency, consent, and user protection.
