The US has officially deemed artificial intelligence (AI) firm Anthropic a supply chain risk, setting the stage for an unprecedented legal fight.
The Pentagon's designation is the first time a US company has been labelled a supply chain risk, meaning the government considers Anthropic not secure enough for its use.
It has led Anthropic, which has refused to give defence agencies unfettered access to its AI tools over concerns of mass surveillance and autonomous weapons, to move towards challenging the decision in court.
"We do not believe this action is legally sound, and we see no choice but to challenge it in court," chief executive Dario Amodei wrote on Thursday evening.
In a statement earlier on Thursday, a senior Pentagon official said the supply chain risk designation was "effective immediately."
Amodei wrote that Anthropic had received a letter from the defence department the previous day designating it a risk, noting that the designation "has a narrow scope."
"The law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain", he wrote.
"Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts."
The AI developer had been in talks with the Department of Defence in recent days.
Those talks did not prove fruitful, according to a person familiar with Anthropic who asked not to be identified, in part because of how President Donald Trump and other members of his administration had publicly berated the company.
Leadership at Anthropic had thought last week the two sides were near a resolution, after weeks of back and forth. Then Trump posted on his Truth Social platform that he was directing all federal agencies to stop using Anthropic, the person familiar added.
"We don't need it, we don't want it, and will not do business with them again!" Trump wrote in the Friday post.
US Defence Secretary Pete Hegseth followed up with a post on X, writing that Anthropic would be "immediately" designated a supply chain risk, prohibiting any business working with the military from "any commercial activity with Anthropic".
Anthropic said it had received no communication from the White House or the Pentagon that these statements were coming.
According to a person familiar with discussions, the feeling inside Anthropic is that it is disliked by some in the Trump administration, as its chief executive has not been among the tech leaders to donate large sums to Trump or publicly praise him.
On Thursday, tech giant Microsoft said it would continue to embed Anthropic technology into its products for clients, except for the US Department of Defence.
"Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers," it told the BBC in a statement.
"We can continue to work with Anthropic on non-defence related projects," it continued.
The Department of War is a secondary name Trump gave to the Department of Defence.
A Pentagon official said Thursday: "From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes."
"The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and putting our warfighters at risk."
Anthropic's tools had been used by the US government and military since 2024, and it was the first advanced AI company to have its technology deployed in government agencies doing classified work.
However, as its relationship with the US military has soured, its rival OpenAI has stepped in.
Sam Altman, chief executive and co-founder of OpenAI, has said his new contract with the defence department has "more guardrails than any previous agreement for classified AI deployments, including Anthropic's".
Senator Kirsten Gillibrand said Thursday that designating Anthropic as a supply chain risk was "shortsighted, self-destructive, and a gift to our adversaries."
"The government openly attacking an American company for refusing to compromise its own safety measures is something we expect from China, not the United States," Gillibrand added.
Anthropic's AI app, Claude, remains popular despite the firm's public fallout with the US government.
Claude is the most downloaded AI app in several countries. Anthropic's chief product officer said on Thursday that "more than a million people" are signing up for Claude every day.