Anthropic, the AI company behind Claude, signed a Pentagon contract worth up to $200 million in July 2025. The military used Claude through Palantir Technologies during the operation to capture Venezuelan President Nicolás Maduro in January 2026. The Pentagon then demanded that Anthropic remove its safety restrictions to allow unrestricted military use, wielding the threat of the Defense Production Act (an emergency wartime law) as leverage. Anthropic refused on two specific points: fully autonomous weapons and mass domestic surveillance of Americans. Defense Secretary Pete Hegseth met with CEO Dario Amodei on February 24, called Anthropic's stance "woke AI," and set a Friday 5:01 PM deadline. Anthropic rejected the final offer on Wednesday.

1. Hold the Line (Civil Liberties Advocates, Common Cause, Lawfare)

A private company refusing to build autonomous kill systems and mass surveillance tools is not "woke" — it's the bare minimum.

Removing Anthropic's guardrails would enable mass surveillance of Americans, raising Fourth Amendment concerns, to say nothing of catastrophic-risk concerns. Autonomous weapons without human oversight and domestic surveillance infrastructure are not edge cases. They are the two applications most likely to cause catastrophic, irreversible harm. Anthropic drew a line at the two things worth drawing a line at.

The Defense Production Act threat is "without precedent" and legally dubious. The DPA has never been used to compel a company to produce a product it considers unsafe, or to dictate terms of service. If the government can force an AI company to remove safety restrictions, the precedent extends far beyond Anthropic.

2. Trust Your Military (Pentagon, Defense Hawks, Trump Administration)

Civilian leaders and the military chain of command, not a private contractor, decide how national security technology gets used. That's how civilian control of the military works.

A Pentagon official told CBS News: "You have to trust your military to do the right thing." The argument is that Anthropic signed a defense contract and is now trying to dictate how the military uses tools the Pentagon paid for. The Pentagon's position is that it, not a Silicon Valley company, determines what counts as appropriate use of defense technology. Hegseth framed the safety restrictions as ideological posturing that limits national security capabilities.

The Pentagon has escalation options and alternatives. It could cancel the $200 million contract, designate Anthropic a "supply chain risk" (a label normally reserved for adversarial foreign companies like Huawei), or invoke the Defense Production Act to compel compliance. Nvidia CEO Jensen Huang downplayed the standoff entirely, calling it "not the end of the world" and implying the Pentagon has alternatives and Anthropic is replaceable.

3. The Precedent Matters More Than the Players (Lawfare, Tech Policy Press, The Hill)

Whether Anthropic or Hegseth is right about autonomous weapons is almost secondary to what happens when the government can force a company to remove its own safety standards.

The escalation pattern is clear: warnings, an ultimatum, a meeting, a deadline, and finally the threat of the Defense Production Act. The DPA was designed to compel manufacturing during wartime: steel, ammunition, medical supplies. Using it to override a software company's terms of service would redefine the relationship between the government and the entire tech industry.

If the DPA works here, it works everywhere. Any future administration could invoke it to force any AI company to remove any restriction the government dislikes. The question is not whether you trust this Pentagon with this AI. It is whether you want any Pentagon to have this power over any company, permanently.

Where This Lands

If the Pentagon follows through on its threats, we're in uncharted legal territory; the Defense Production Act has never been used this way. If it backs down, Anthropic will have proved that a company can say no to the Department of Defense and survive. Either way, the question that outlasts this standoff is the one neither side wants to answer directly: who gets to decide what AI refuses to do?

Sources