Trump is pushing for AI in the military. Anthropic refused two uses
Which uses were at issue?
OpenAI has reached an agreement with the Pentagon (not yet finalized) under which its artificial intelligence tools will be used by the U.S. military in its own military systems.
All of this follows the Pentagon's break with Anthropic, the creator of the Claude models; the Trump administration terminated the cooperation with immediate effect. As the AI maker explains:
The War Department has stated that it will only contract with AI companies that agree to "use (AI) for any purpose within the letter of the law" and remove any AI safeguards at its request. It threatened to remove us from its systems if we maintained these safeguards; it also threatened to label us a "supply chain risk" — a label reserved for hostile powers and never before applied to a U.S. company — and to invoke the Defense Production Act to force the removal of these safeguards.
Which uses and safeguards were at issue? Mass surveillance and autonomous weapons control. Claude's creators did not permit such uses of their systems, which shipped with built-in restrictions.
Now Sam Altman, the head of OpenAI, has signaled to the world through the agreement with the Pentagon that his models may become surveillance tools and support autonomous weapon systems. This sparked an avalanche of controversy and gave rise to a grassroots "QuitGPT" movement calling for a boycott of ChatGPT.
Altman maintains that OpenAI has committed not to use its models for surveillance or autonomous weapons. However, it remains unclear how OpenAI's contract with the Pentagon differs from the one Anthropic refused.
Meanwhile, in the App Store, Apple's digital store, Anthropic's Claude AI app has jumped to second place among the most downloaded apps.
