Society / March 4, 2026

Garbage In, Carnage Out

The harrowing lessons of the Pentagon’s recently dissolved partnership with Anthropic.

David Futrelle


Anthropic touts its alliance with the American imperium in happier times for the company.
(Photo illustration by Li Hongbo / VCG via Getty Images)

It’s been a dizzying few weeks for the AI firm Anthropic. After a barrage of MAGA-led tantrums, the company lost its $200 million contract with the Pentagon by refusing to suspend key safeguards built into its models that protect them from manipulation by bad actors; in terminating the deal, Secretary of Defense Pete Hegseth claimed that the AI lab posed a “supply chain risk to national security.”

But it appears that risk was short-lived, at least when it comes to a new intervention in the Middle East. As the Trump administration launched its invasion of Iran, the military reportedly relied on Anthropic’s AI technology to identify targets and coordinate bombing attacks. The whole episode speaks volumes about our failure to reckon with the true scale and implications of the AI sector’s growing dominance over all facets of American life—including the fateful life-and-death decisions entrusted to the country’s military-industrial complex.

As the MAGA war complex and the Silicon Valley elite battle over the finer points of Anthropic’s role in modern war-making, the larger story remains unchanged: AI overseers are enthusiastic partners in a morally disastrous campaign to insulate the most destructive decisions that military commanders make from their actual consequences. And as usual, the casualties often marked for elimination in our emerging post-human war-making regime are powerless civilians on the ground.

None of this has entered into the high-profile spat between Anthropic and the Department of Defense. When news of the company’s breach with the Pentagon broke, AI boosters and tech analysts embarked on a fervid round of wishcasting, depicting Anthropic and company CEO Dario Amodei as swashbuckling defenders of responsible data collection against the forces of government surveillance and repression. “Dario Amodei lost his tender with the Pentagon but the Anthropic CEO held onto his beliefs and cemented his reputation as a man of courage,” Russian dissident and former chess grandmaster Garry Kasparov wrote on his Substack, having convinced himself that the contretemps was “a story bigger than Iran.” Meanwhile, Anthropic’s AI chatbot app, Claude, shot to the top of the charts on the App Store and Google Play.

It didn’t hurt Anthropic’s case that its opponents seemed to be doing their best impressions of monologuing …