Pentagon Requests Fewer Moral Settings As Anthropic Politely Disagrees With Trump-Era AI Vision
Pentagon officials clash with AI start-up Anthropic over limits on military use of its Claude models as CEO Dario Amodei keeps his distance from Trump’s regulatory vision. A $200 million contract meets ethical boundaries.
The Pentagon has reportedly discovered that artificial intelligence sometimes arrives with boundaries.
Anthropic, the AI company behind the assistant Claude, is among several firms awarded a $200 million contract in summer 2025 to provide advanced AI tools to the US military. The arrangement, however, encountered turbulence when Anthropic insisted that its technology not be used for mass surveillance of populations or to develop fully autonomous weapons systems.
In other words, the software came with settings.
According to reporting by The Wall Street Journal and Reuters, disagreements emerged over the restrictions the company placed on how its models could be deployed. The Pentagon sought broader operational flexibility. Anthropic maintained guardrails.
The tension reflects a wider divergence between the Trump administration’s approach to technological deployment and Anthropic’s more cautious posture. CEO Dario Amodei, who supported Democratic candidate Kamala Harris during the 2024 election, has avoided direct confrontation with President Donald Trump but has consistently emphasized the need for careful regulation of artificial intelligence.
In a US tech landscape where many industry leaders have aligned publicly with the White House, Anthropic’s stance stands out less as rebellion and more as deviation. Leaders at rival firms, including OpenAI’s Sam Altman and xAI’s Elon Musk, have struck more collaborative tones toward the administration’s broader innovation agenda.
Anthropic’s position does not reject military collaboration outright — the $200 million contract remains intact — but it does suggest that not all AI developers interpret modernization as synonymous with unlimited deployment.
The company appears determined to define its models as advanced tools rather than unrestricted instruments. The Pentagon, accustomed to flexibility, is reportedly adjusting to the concept of built-in hesitation.
The disagreement underscores a larger debate unfolding in Washington:
Should artificial intelligence be integrated into defense systems with minimal constraint, or should the private sector impose ethical limitations before federal agencies request them?
For now, the contract stands.
The guardrails remain.
And one AI start-up has demonstrated that in an era of accelerated innovation, saying “not that feature” still qualifies as defiance.