Rights groups warn court of risks from military use of AI in Anthropic case
Human rights organisations have filed a legal brief in a US court case involving Anthropic, arguing that the use of AI in military operations raises serious concerns under international humanitarian law.
A coalition of human rights and legal organisations has filed an amicus brief in the US case Anthropic v. Department of War (DOW), urging the court to consider the broader risks of AI in military use.
The brief, submitted on 13 March 2026 in a federal court in California, does not support either party but focuses on the implications of AI-assisted warfare. The case centres on whether the US Department of War can penalise Anthropic for restricting the use of its AI system, Claude, in lethal autonomous weapons and surveillance.
The organisations argue that even semi-autonomous use of AI in military operations can pose significant risks to civilians. According to the brief, AI tools can accelerate wartime decision-making, including target identification, leaving less time for human assessment.
This compression of the so-called ‘kill chain’, from intelligence gathering to targeting decisions, may undermine key principles of international humanitarian law, including distinction, proportionality and necessity.
The filing also raises concerns that AI systems are being used to process large volumes of surveillance data and generate targeting recommendations at speeds that limit meaningful human oversight.
The organisations warn that such uses of AI could expose both technology companies and government actors to legal risks, including potential liability under domestic and international law.
