The Anthropic and US Government conflict is larger than you think

While Anthropic and the US government fight over surveillance of American citizens, the world is getting militarised.

Key points
  • The Anthropic–US DoD/DoW conflict is currently about surveillance and whether it is acceptable
  • We must expand this conversation beyond Anthropic: what about other AI firms, e.g. Google, OpenAI, Amazon and others?
  • We must look beyond AI: how does tech companies' provision of compute, data and storage further enable government powers?
[Image: a digital collage based on Goya's etching "The sleep of reason produces monsters", showing a person asleep at a desk, with small drones and video surveillance cameras replacing the original owls and bats; a label over the word "reason" reads "AUTONOM-IA".]

Daniela Zampieri / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Anthropic and the US Government are locked in conflict over surveillance and war. Part of that conflict is over the use of AI models for surveillance, but only surveillance of Americans.

The U.S. Department of War (DoW) is considering cutting business ties with the firm, designating it a 'supply chain risk' or even invoking the Defense Production Act to alter its contracts. The conflict is currently over whether the DoW can use Anthropic's models for mass domestic surveillance or for autonomous kinetic operations.

As immense as those stakes sound, our take is that even more is at issue: the militarisation of tech is larger still, and the Anthropic battle is just one facet of it.

Conflict details

The exact nature of this conflict remains unclear, even as it must be settled by Friday, 27 February 2026.

The clash is, according to The Verge, over Anthropic's enforcement of its 'acceptable use' policy.

The conflict appears to arise from the DoW's use of Palantir's services, which include the use of Anthropic's AI models. According to the New York Times, Pentagon officials were notified by Palantir after employees of the two firms discussed the issue.

Anthropic argues that it must ensure its models are used in line with what they can "reliably and responsibly do". The Pentagon claims that AI contracts must allow the military to use models for any lawful purpose.

Industrial scale

This has been a confusing period for the AI industry. Firms originally founded on noble principles of serving humanity have all begun shifting to supporting militaries.

In January 2024, OpenAI deleted language expressly prohibiting activities with a 'high risk of physical harm', including weapons development and military and warfare applications, and shortly thereafter its technology was being used by US Africa Command. In February 2025, Google removed its commitments not to apply its technology to weapons or surveillance. After a springtime of denials, in September 2025 Microsoft admitted that its cloud services had been used by the Israeli Ministry of Defence in ways that conflicted with its values, and terminated elements of its contract. According to reports, Google and Amazon also have contracts with the Israeli Government for cloud storage and AI services.

The larger picture, therefore, is how tech firms are enabling militaries around the world to acquire immense powers to monitor and target civilian populations, in wartime and peacetime alike.

Without cloud compute and advanced processing capacities, these governments could not shift towards being 'AI-first' or deploying 'human-machine teams'.

Rules matter

It's clear that the tech industry's initial noble promises and red lines were, at the time, convenient tools for recruitment, investment, and marketing. They are now inconvenient for expansive investment, growth and domination. In our experience, we can't rely on companies' sense of morality to counteract those pressures.

An entire industry has now made the shift to war. Now that the shift goes well beyond a single actor and spans an industry, this is exactly when laws and rules are required.

As it is, there are, in theory, rules that limit the US Department of War from undertaking domestic surveillance; but militaries now have immense capacity, supplied by industry, to collect and manage vast amounts of data on people across the world. (We also think it's important to remind everyone that these now-militarised corporate systems were built initially on consumer data, subscriptions and consumer-market oriented investments.)

Militaries now have access to unprecedented capacities. Early automation led to early conventions, but rules are still forming around the use of data, third parties, AI, and automation in targeting. Meanwhile, we need governments to state very clearly how they are complying with existing rules and conventions. Current standards should not be ignored when convenient, when new tools or toys become available, or during the worryingly long pause until more specific conventions arrive.

In this gap, companies need to consider carefully their obligations in conflict. Militarisation is not an addendum to their business model; it likely shapes their very existence.

To respond to this trend of militarisation, we are arguing for:

  • international standards on autonomous weapons systems;
  • protection of people’s data in both military and civilian contexts;
  • an end to blanket military and national security exemptions from legal frameworks;
  • transparency and due diligence in procurement for both military and civilian contexts.

We must see this ‘conflict’ between titans as a call to push harder on these pressing challenges. There are many more contracts and governments that are balancing our lives and our rights on the tip of their arsenals. War and conflict must not be the determinant of our rights today and tomorrow.
