The Pentagon is violating Anthropic's First Amendment rights
Last night, FIRE filed a friend-of-the-court brief with the U.S. District Court for the Northern District of California arguing that the Department of War's retaliatory designation of Anthropic as a supply chain risk violates the First Amendment. The Electronic Frontier Foundation, the Cato Institute, Chamber of Progress, and the First Amendment Lawyers Association joined FIRE on the filing. FIRE thanks Sopen B. Shah, Addison W. Bennett, and Sarah Grant of Perkins Coie LLP for their work on the amici brief.
The Pentagon designated Anthropic a supply-chain risk because, in the Pentagon's view, Anthropic is insufficiently “patriotic” and “fundamentally incompatible with American principles.” The reason? Anthropic refused to remove guardrails from its artificial intelligence tools to help the Pentagon develop fully autonomous weapons or conduct mass domestic surveillance. Anthropic believes “deeply in the existential importance of using AI to defend the United States and other democracies,” but it also believes there are “[s]ome uses” that are “simply outside the bounds of what today’s technology can safely and reliably do.” That is why Anthropic has a Usage Policy. Consistent with that viewpoint, Anthropic designed Claude, its “next-generation AI assistant,” to embrace a broad set of values and guardrails — in Anthropic’s words, “Claude’s Constitution.” In line with these values, Anthropic’s existing contracts with the Pentagon do not permit Claude to support fully autonomous weapons or mass domestic surveillance.
But the Pentagon changed its mind. It demanded that Anthropic allow its technology to be used for any purportedly “lawful purpose,” which the Pentagon believes includes fully autonomous weapons and mass domestic surveillance. When Anthropic refused to change Claude’s safeguards to let slip the dogs of war without limitation, the Pentagon designated Anthropic a “supply chain risk” — a designation that purports to give the government unprecedented authority to strangle an American company for supposedly posing a risk to national security interests. See 10 U.S.C. § 3252. This potentially ruinous sanction threatens not only Anthropic’s business but also that of its partners and customers. If left in place, the sanction imposes a culture of coercion, complicity, and silence, in which the public understands that the government will use any means at its disposal to punish those who dare to disagree.
The Pentagon’s temper tantrum is a textbook violation of Anthropic’s First Amendment rights. Claude is not designed specifically for a military purpose, nor is it a static product that can simply be sold to the Pentagon and used as its leaders direct, such as a missile or an airplane. Claude is a constantly evolving AI model that humans at Anthropic designed by making expressive choices about its outputs in response to user inputs. Humans created Anthropic’s Usage Policy and its government-specific addendum. And unlike weapons or airplanes, Claude is fundamentally expressive. It is a “system that talks, explains, summarizes, argues, and refuses — a system that sits in the middle of human inquiry.” The Pentagon’s demand that Anthropic remove safeguards on that system — to change what Claude must and may say, analyze, argue, and refuse — demands that Anthropic trade away a core freedom of expression. To contract with the government, and to avoid the supply chain risk designation that would undermine its ability to contract and engage expressively with third parties, Anthropic must change its point of view and espouse agreement with Department of Defense policy. Anthropic must agree both to refrain from speaking and to utter compelled speech by acceding to the government’s demand to change Claude’s permitted outputs and remove any safeguards — chosen and designed by humans — that would restrict how the Pentagon can use Claude and how Claude could respond to the Pentagon’s requests.
By bullying Anthropic, the Pentagon is violating the First Amendment. Here’s why.
The Pentagon admits its sanction is retaliation against Anthropic’s speech and an effort to threaten or coerce it into compliance — an independent violation of the First Amendment. The Secretary of Defense and other government officials involved in the designation have not even tried to hide that they are trying to put Anthropic out of business merely for its dissent, not for any actual supply chain risk. (Indeed, as Anthropic pointed out, why would the Pentagon so desperately try to contract with a “security risk”?) The officials have shamelessly bragged that the purpose of sanctioning Anthropic is to punish it for its “ideolog[y]” and to make room for more “patriotic” businesses. The First Amendment does not allow the government to put someone out of business because he or she does not pass the governing party’s ideological litmus test. “On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana.” Moody v. NetChoice, LLC, 603 U.S. 707, 741–42 (2024).
This Court must enjoin the Pentagon’s actions. Permitting the government to dictate Anthropic’s speech would violate the First Amendment, chill the rights not just of Anthropic but of business leaders, innovators, and entrepreneurs across the country, and stifle the marketplace of ideas about AI as a transformative technology, how to harness it responsibly, and the privacy and civil liberties risks associated with its use by the national security apparatus.