The quiet push to control AI speech

Recent reports suggest the Trump administration is considering new oversight for advanced AI models like ChatGPT, Claude, and Gemini. Few details have been finalized, but officials are reportedly discussing an executive order to create a government–industry working group. Another idea under consideration is a process for reviewing models before or around their release.

As these talks move forward, they risk setting a troubling precedent for free expression.

Some AI companies, such as xAI, have already agreed to provide early access to their models. One report states that a group has already been formed within the Department of Commerce to review these models — sometimes testing them with fewer safety limits — to assess what risks they might pose, especially to cybersecurity and national security.

This isn’t entirely new; OpenAI and Anthropic have done something similar before. But now the Trump administration is considering making this kind of review much more formal.

AI is an expressive tool — one that people use to learn, ask questions, and engage with ideas. The design of an AI system also reflects a series of choices about what information to prioritize, how to reason, what boundaries to draw, and how to respond. Those choices embed values and assumptions about knowledge, truth, and human interaction.

People who build and use AI tools do not shed their constitutional rights to freedom of expression at the prompt window. That includes the right to speak without being pressured by the government to first seek approval or give it a look under the hood. What starts as “just a review” can quickly become pressure to change what tools the public is allowed to have and what information users are allowed to see. Informal oversight has a way of turning into coercion.

Imagine an AI company being told its approval depends on how its model handles controversial issues — whether it reflects the government’s preferred stance on tariffs, energy or climate policy, or how it discusses election integrity ahead of an upcoming vote. Even without an explicit mandate, that kind of signal would pressure developers to shape outputs to satisfy officials rather than reflect independent judgment.

We’ve seen this before. When Biden administration officials approached social media platforms and repeatedly pressured them to remove or downgrade COVID-related content, it meant the public discourse about the most important issue of the day was being covertly shaped by government officials. In the current Administration, we’ve observed how the Federal Communications Commission’s authority to regulate the broadcast spectrum — which sounds fairly technical and benign in theory — has been weaponized to pressure news outlets to reshape their coverage at the government’s behest. 

In those cases, pressure largely operated after content was posted. This new arrangement is more dangerous because it moves that pressure upstream, before an AI model is released to the public. Giving the government a role in reviewing speech or expressive tools at that stage creates a point of leverage it can use to ensure certain expression never sees the light of day. In First Amendment terms, if the government can give thumbs-up-or-down approval before release, that amounts to a prior restraint — blocking speech before it’s communicated rather than punishing it afterward. That’s why courts presume prior restraints unconstitutional.

Officials often invoke national security to justify expanding their reach. But national security isn’t a blank check. The public should be wary of arrangements that give government officials of either party a foothold over what AI systems generate.

Once the government assumes a gatekeeping role over emerging forms of expression, the line between oversight and censorship blurs fast.
