US administration orders federal agencies to stop using Anthropic AI systems
The US administration has directed federal agencies to phase out the use of Anthropic’s AI technology following a dispute over restrictions on military applications of the company’s models.
US President Donald Trump has ordered all federal agencies to stop using AI systems developed by Anthropic, escalating the standoff between the government and the company.
The directive was announced shortly before a Pentagon deadline requiring Anthropic to remove restrictions on how the US military could use its Claude AI model. The company had maintained safeguards prohibiting the use of its systems for domestic mass surveillance and fully autonomous weapons.
The administration argued that such limitations could undermine national security and military operations. Under the order, federal agencies have six months to phase out Anthropic's technology.
Anthropic stated that it would challenge any attempt to classify the company as a government supply-chain risk, arguing that its safeguards are intended to prevent misuse of AI systems.
On the same day, rival company OpenAI announced an agreement to deploy AI technology within the Pentagon's classified network. According to the company, the arrangement also includes safeguards, such as limits on domestic surveillance and requirements for human oversight of autonomous weapons systems.
The dispute highlights growing tensions between technology companies and governments over how AI should be used in security and military contexts.
