Chinese court rules AI developer not automatically liable for hallucination errors
A court in Hangzhou has ruled that an AI developer is not automatically responsible for false information generated by its system, unless a user can prove fault and actual harm.
The ruling was delivered by the Hangzhou Internet Court and is described as the first decision in China to address civil liability for so-called 'hallucinations': fabricated information generated by an AI system.
The case concerned an AI system that generated false information about a nonexistent campus of a real Chinese university. When challenged, the system told the user it would compensate him if the information proved incorrect. The user then sued for nearly 10,000 yuan in damages, arguing that he had been misled and that the AI had promised compensation.
The court dismissed the claim. It held that AI-generated content constitutes a service rather than a product in such cases, and that an AI system lacks legal personhood and therefore cannot independently make binding legal commitments. The court also found that the developer had not authorised the AI to make statements on its behalf, and that the plaintiff had failed to demonstrate actual harm resulting from the incorrect information.
In its reasoning, the court stated that AI-generated content generally does not constitute high-risk activity and that developers have limited control over specific outputs. It warned that imposing strict liability for hallucinations could hinder technological development.
Under current Chinese regulations, AI service providers must review and remove illegal or harmful content but are not required to guarantee the accuracy of all generated information. The ruling suggests that courts will assess liability based on whether a provider breached a duty of care and whether the claimant can demonstrate measurable harm.
