Anthropic’s Lawsuit Should Absolutely Destroy the Pentagon in Court
Rights don't disappear loudly—they fade.

Politics / March 11, 2026


But make no mistake: The company is not one of the good guys.

Elie Mystal

Anthropic CEO Dario Amodei, Chief Product Officer Mike Krieger and Head of Communications Sasha de Marigny give a press conference on May 22, 2025.
(Julie Jammot / AFP via Getty Images)

Anthropic, maker of the “Claude” AI model, has filed two separate lawsuits against the Department of Defense, including one alleging that the government is violating the company’s First Amendment rights. The conflict arose last week when the Trump administration labeled the company a “supply chain risk” and banned government agencies, and any entity working with the US military, from using the Claude system. The Trump administration now calls Claude a national security risk. (The second lawsuit takes issue with this designation, which, until now, has never been used against a US company.)

The blacklisting followed months of fighting between Anthropic and the government. Anthropic wants to keep “safeguards” on Claude that prevent the system from being used to power autonomous weapons—basically, killing machines that can conduct military operations without human involvement—and to engage in widespread surveillance of Americans. The Trump administration wants the company to loosen those safeguards. Evidently, Secretary of War Crimes Pete Hegseth wants the killer robots now, and he doesn’t like Anthropic getting in his way.

The government repeatedly threatened Anthropic with consequences if it didn’t remove its safety restrictions. It would appear the supply chain risk designation and associated blacklisting are those consequences.

All of this should make Anthropic’s lawsuit a slam dunk, at least the First Amendment part, assuming there are still judges and justices willing to hold the Trump administration accountable to the Constitution, even in the realm of national security. Anthropic’s complaint makes a pretty clear-cut case for a First Amendment violation. (I’m less knowledgeable about the other claim, though my assumption, based on prior history, is that the Trump administration is indeed in violation of every law it’s accused of violating.)

The simple facts are these: The government wanted Anthropic to make its AI do something. Anthropic didn’t want to make its AI do it, because of its beliefs, and those beliefs are protected under the First Amendment. The government punished Anthropic with an adverse national security designation, because the company wouldn’t do what the government wanted. That is a free speech violation.

It …