San Francisco, United States: A California judge has set the stage for a potential victory for Anthropic in its push for regulation of weapons powered by artificial intelligence, a setback for the administration of United States President Donald Trump that brings the company a step closer to preserving billions of dollars in government contracts.
The Trump administration had designated Anthropic a “supply chain risk” for its stance on increased regulation, a move that would block the company from certain military contracts.
The United States Department of Defense may be illegally trying to punish Anthropic for attempting to restrict the use of its artificial intelligence (AI) models for weapons without human supervision or for mass surveillance, a district judge has said.
“It looks like an attempt to cripple Anthropic,” Judge Rita Lin of the Northern California district court said on Tuesday.
Legal analysts say this could pave the way for a preliminary injunction blocking the Defense Department from labelling Anthropic a supply chain risk.
“Their stated objectives are not completely backed by the Department of War,” Charlie Bullock, senior research fellow at the Institute for Law and AI, a Boston-based think tank, said about the Defense Department’s designation of Anthropic as a supply chain risk.
This is the first time a US company has been designated as such, and the designation would entail cancelling the company’s government contracts as well as its contracts with government contractors.
On March 17, the Defense Department told the court that Anthropic’s stance that its products not be used for AI-powered weapons without human oversight or for domestic surveillance would undercut its “ability to control its own lawful operations”.
Anthropic’s lawsuit to remove the designation has grown into a broader contest over the extent of AI’s capabilities, how they could shape life and whether they will be regulated.
“This case is a kind of moment when to reflect on what kind of relations we want between the government and companies and what rights citizens have,” says Robert Trager, co-director of Oxford University’s Oxford Martin AI Governance Initiative.
Alison Taylor, clinical associate professor of business and society at New York University’s Stern School of Business, said, “In the US, technology is moving ahead like a freight train and any idea of human oversight is getting harder. But people are concerned about AI-related job losses, data centres, surveillance and weapons. This has meant public opinion is shifting away from AI.”
Over the last two weeks, a range of tech companies, think tanks and legal groups filed court briefs in support of Anthropic’s stance, asking for oversight and regulation of AI for weapons and mass surveillance. That support ranges from Microsoft and employees of Anthropic’s competitors OpenAI and Google, to Catholic moral theologians and ethicists, among others.
In their brief, engineers from OpenAI and Google DeepMind, filing in their personal capacities, said the case is of “seismic importance for our industry” and that regulation is crucial since AI models’ “chain of reasoning is often hidden from their operators, and their internal workings are opaque even to their developers. And the decisions they make in lethal contexts are irreversible.”
Against the backdrop of such concerns, NYU’s Taylor said, “Anthropic is making a risky but good bet that positioning itself as an ethical AI company will give it a hand in shaping regulation when it does happen.”
Hallucinations and other problems
Anthropic has worked on Pentagon contracts extensively and its Claude Gov models have been integrated into Palantir’s Project Maven, which helps with data analysis, target selection and other such tasks, reportedly including in the ongoing US-Israel war against Iran.
While AI-powered weapons are not currently used without human supervision, Anthropic has asked for continued human oversight in its contract with the Defense Department because, it says, AI models can hallucinate and are not yet completely reliable. While hallucination is a concern in all AI models, the potential harm from use in weapons can be on a huge scale.
Mary Cummings, a professor of civil engineering at the George Mason University College of Engineering and Computing and director of the Mason Autonomy and Robotic Center, found that half of all accidents involving self-driving cars in San Francisco, where most such cars are deployed, were caused by a car wrongly perceiving an object ahead of it and braking, leading the car behind to crash into it.
“We call this phantom braking and it is caused by hallucination,” she told Al Jazeera.
In a February paper, she warned that, “The incorporation of AI into weapons will face similar reliability issues as self-driving cars, including hallucinations.”
Annika Schoene, an assistant professor who researches the impact of AI on health systems at the Bouve College of Health Sciences at Northeastern University, says, “Hallucination is not the only concern. Models like these can have different workflows, data biases or model biases. We don’t yet know how safe they are from foreign manipulation. There are so many pieces to this and we have not yet agreed on what we deem as safe and what we don’t.”
Given that AI models, including Claude Gov, are not built by the military, the military needs to test how reliable they are while integrating them into its systems, says Aalok Mehta, director of the Wadhwani AI Center at the Washington, DC-based think tank, the Center for Strategic and International Studies.
“Evaluations and benchmarks testing can be lagging. Models saturate the testing systems we have.”
Others say it is not as much the technology as the way it is used that could lead to errors.
“I remember, in the [early] 2020s there was a hope that with such tools, civilian deaths would come down,” says Andrew Reddie, associate research professor at University of California, Berkeley’s Goldman School of Public Policy and founder of the Berkeley Risk and Security Lab.
“But that has not really happened because it depends on the data you feed. The challenge is not the AI-ness, but that, what is a legitimate target,” he says, referring to how military personnel select targets from options generated by such tools.
On domestic mass surveillance, too, while it is unclear whether the Pentagon is currently using AI for that, OpenAI and Google researchers have underscored concerns over this in their court submissions.
More than 70 million cameras, credit card transaction histories and other such data can be collated to monitor the entire US population, they say. “Even the awareness that such capability exists creates a chilling effect on democratic participation.”
‘Public relations triumph’
Before the court case and the public acrimony that has accompanied it, Anthropic was said to have had a deeper relationship with the Pentagon than many of its competitors, one that benefitted both sides.
“The Pentagon thinks Anthropic has the best product for military use so it is applying pressure on the company” to continue using it, says CSIS’s Mehta.
As for Anthropic, “the economics are very challenging for the AI industry. So you do need a robust public sector business with its billions of dollars of contracts,” he says.
OpenAI stepped in to work with the Pentagon soon after Anthropic’s contract was terminated. But Anthropic seems to have scored “a public relations triumph if not one on substance,” says NYU’s Taylor.
Its positioning as an ethical AI company may have won it public popularity. Downloads of Claude increased sharply in the weeks after the cancelled contract.
But a company having to draw lines is indicative of the failure of the government to do so, says Brianna Rosen, executive director of the Oxford Programme for Cyber and Technology Policy.
“For the first time, the United States is using AI to generate targets in large-scale combat operations in Iran,” she says. “And lawmakers are still debating whether to draw red lines on fully autonomous weapons. The absence of governance is itself a national security risk.”
The debate on the regulation of AI weapons only widens the gap between public concern and policymakers’ reluctance to overregulate AI innovation in other fields. Polls have shown that Americans are concerned about potential job losses and climate change impacts from AI. An April 2025 poll by Quinnipiac University found that 69 percent of Americans thought the government could do more to regulate AI.
This rift has led the AI industry to emerge as a major donor in the 2026 midterm elections. Leading The Future, a super PAC which has received more than $100m from OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale and others, has funded advertisements against Alex Bores, a New York State Assembly member running for Congress. Bores sponsored the RAISE Act, which would require AI developers to disclose safety protocols and incidents.
In February, Anthropic announced a $20m donation to Public First Action, a PAC that will support candidates in favour of AI regulation, including Bores.
While AI companies are looking to develop industry standards for testing and evaluation of their models, Anthropic is pushing for regulation because bad actors can violate such non-binding standards, says the Institute for Law and AI’s Bullock.
Experts say the court decision in Anthropic’s case and the upcoming midterm elections could determine the course of AI regulation.
“It could create space for more deliberate policy development,” says Oxford’s Rosen.