Pentagon gives AI firm ultimatum: lift military limits by Friday or lose $200M deal
Rights don't disappear loudly—they fade.
The Pentagon has given artificial intelligence firm Anthropic until Friday to lift restrictions on how its Claude AI system can be used by the military, warning it could cancel a $200 million contract or take other punitive steps if the company refuses, according to multiple sources familiar with the discussions.
The skirmish broke out after the Pentagon said Anthropic had asked whether its product was used in the January military operation to capture Venezuelan leader Nicolás Maduro, phrasing the inquiry in a way that suggested the company might disapprove if it had been. The Pentagon insists AI companies must allow their products to be used for all lawful military use cases, without company oversight or approval.
Anthropic has suggested its red lines are the use of its products in fully autonomous weapons and in mass surveillance of Americans.
War Secretary Pete Hegseth delivered an ultimatum during a Tuesday meeting at the Pentagon with Anthropic CEO Dario Amodei, even as Hegseth praised the company’s technology and said the department wants to continue working with the firm, sources said.
Hegseth told Amodei that if the company did not allow Claude to be used for all lawful purposes, it could face termination of its Pentagon contract, designation as a supply chain risk — potentially limiting its ability to work with defense vendors — or possible invocation of the Defense Production Act to compel access to the technology, according to sources familiar with the meeting.
Claude is currently the only advanced, commercial AI model of its kind operating inside the Pentagon’s classified networks, under a $200 million contract awarded in summer 2025, significantly raising the stakes of the dispute.
Pentagon officials argue the Department of Defense cannot depend on a private company that maintains categorical restrictions on certain uses of its technology, even if those uses are lawful. During the meeting, Hegseth compared the situation to being told the military could not use a specific aircraft for a mission, according to a source familiar with the exchange.
The dispute represents an early test of who controls the guardrails on advanced AI inside U.S. defense systems — private companies or the Pentagon. The outcome could shape how the military partners with leading AI developers as it moves to integrate more powerful machine learning tools into national security operations.
Anthropic, which has branded itself as a safety-oriented AI company, has said its policies are meant to reduce the risk of …