
The Department of War’s attack on free enterprise

Why Anthropic was right to say ‘no’ to the Pentagon

By MILES BARRY — mabarry@ucdavis.edu

On Feb. 27, United States Defense Secretary Pete Hegseth directed the Pentagon to designate Anthropic as a “supply chain risk to national security” after the company refused to allow unrestricted military use of its artificial intelligence (AI) model, Claude. A supply chain risk designation is a legal mechanism under federal procurement law designed to protect U.S. military and government systems from infiltration or sabotage by adversarial actors. The designation empowers the government to exclude a vendor from defense contracts and, more consequentially, to prohibit any contractor, supplier or partner doing business with the military from conducting commercial activity with the designated entity.

This is an unprecedented step from the Department of Defense (now the Department of War) — the supply chain risk designation has never been used against an American company, let alone one of America’s leading AI companies. It also appears that Anthropic received the designation for ideological reasons; Chief Executive Officer Dario Amodei stated that the Pentagon took issue with Anthropic (in my view, sensibly) barring the use of Claude for two specific purposes: domestic surveillance and firing autonomous weaponry. Now, because Anthropic maintained these two red lines, the company will lose military contracts (Lockheed Martin, a defense contractor, has already begun to purge Claude from its supply chain) and, depending on how courts interpret the decision, may be unable to work with other big-name companies in contact with the military. Anthropic will also be forced to spend money challenging the designation in court and will suffer major reputational damage — whether the courts side with it or not.

While the federal government using AI for domestic surveillance and firing autonomous weapons sounds terrifying and dystopian, giving a billionaire corporate executive the power to regulate the military also sounds horrible. Most objections to Anthropic’s red lines sound something like this: Why should Anthropic, a privately owned, profit-seeking company, make policy decisions about how the (theoretically) publicly accountable government uses its technology?

To answer this question: Anthropic has the right to make the government’s use of its technology conditional because we live in a free enterprise system. While the American ideological left is (for good reason) highly critical of large corporations, these businesses are made up of individuals. In Anthropic’s case, many of these individuals left other AI companies specifically because they wanted to build technology responsibly — Hegseth is essentially compelling these engineers to build something they believe to be dangerous. This should trouble anyone who values individual liberty.

Anthropic’s second red line — prohibiting its models’ use in firing autonomous weaponry — isn’t even a significant deviation from other operational limitations in military contracts. A military contract may require that a fighter jet be used only under certain conditions, according to Dean W. Ball, who wrote AI policy in the opening months of the second Trump administration; use outside of those limits would void its warranty. Anthropic follows this pattern: the company believes its product cannot safely perform a task (in this case, firing a weapon without a human in the loop), so it prohibits the government from carrying out that task under the terms of its contract.

However, the implications of its first red line — preventing Claude’s use in domestic surveillance — are more contentious. The Pentagon argued that this restriction was unnecessary given that existing law already constrains domestic surveillance: the Fourth Amendment, the Foreign Intelligence Surveillance Act (FISA) and Department of Defense (DoD) directives all impose limits on intelligence collection targeting Americans.

Anthropic’s position is that those laws aren’t strict enough. The company stated that “under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant.” Yet, in theory, this gap should be rectified by the passage of a new law, not by a private company’s military contract.

I believe that Anthropic is doing something very important by maintaining these red lines and acknowledging that our lives are no longer constrained only by humans. Any free enterprise argument (like the one made above) presupposes a tension between two types of actors: governments and firms. Firms constrain human behavior through prices, and governments constrain human behavior through laws (enforced by the threat of violence). The integration of AI into policy decisions has introduced a third actor: algorithms.

When a human intelligence analyst recommends a surveillance target, there is a chain of reasoning that can be scrutinized, challenged and held accountable. When an AI system does the same thing at scale, that chain dissolves. 

Philosopher John Danaher called this phenomenon “algocracy”: governance by algorithms we cannot fully understand but are compelled to obey. Anthropic's red lines are, in effect, an acknowledgment that existing legal frameworks were designed to constrain human decision-makers, not algorithmic ones. Until the law catches up to this reality, we should be grateful for these “red lines” drawn by private actors.

Written by: Miles Barry — mabarry@ucdavis.edu

Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.