March 19, 2026

AI Goes to War

Automated targeting, autonomous weapons, and nuclear decision-making.

Michael T. Klare

Activists place signs featuring AI robot dogs on the grounds of the National Mall to protest OpenAI’s decision to allow the Pentagon to use its AI technologies in developing autonomous weapons, on March 6, 2026. (Heather Diehl / Getty Images)

Last July, the Pentagon’s chief digital and artificial intelligence officer, Doug Matty, announced awards of $200 million each to four of America’s leading tech companies—Anthropic, Google, OpenAI, and xAI—to supply advanced AI models to the Department of Defense. “Leveraging commercially available solutions into an integrated capabilities approach will accelerate the use of advanced AI as part of our Joint mission essential tasks in our warfighting domain,” Matty said when announcing the awards. Beyond this, very little information was provided about the awards, except that they were intended to exploit recent advances in generative AI—sophisticated software that can digest vast amounts of data and provide operators with suggested courses of action.

In the months that followed, the Pentagon continued to impose a shroud of secrecy over the multimillion-dollar AI awards, citing national security considerations. At the end of February, however, this shroud was lifted, at least in part, when Anthropic insisted on imposing certain limits on the military use of Claude, its premier AI model. “I believe deeply in the existential importance of using AI to defend the United States and other democracies,” Anthropic CEO Dario Amodei affirmed on February 26. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” These included, he noted, the use of AI in “mass domestic surveillance” and the creation of “fully autonomous weapons,” or self-guided combat drones.

Senior Pentagon officials responded to Amodei’s statement by insisting that they had no intention of using AI for domestic surveillance and that unmanned weapons systems would always remain under human oversight. They maintained, however, that private firms like Anthropic could not impose restrictions on how the Pentagon employs AI. “We won’t have any BigTech company decide Americans’ civil liberties,” declared Emil Michael, the undersecretary of defense for research and engineering. At the same time, however, Michael broadened the discussion by identifying another potential use of AI: to help shoot down enemy missiles in a nuclear war. Would Anthropic oppose Claude’s use in nuclear operations? Michael asked Amodei during one set of negotiations. (Amodei reportedly said no.)

The …