Licensed to speak? How NY’s AI bill gets it wrong.
A casual exchange with a chatbot can help someone understand a lease, think through a medical question, or navigate a personal issue. It can become specific and personal, even though people understand it isn’t a licensed professional.
Yet even basic, exploratory conversations risk being labeled professional advice under a New York bill introduced this session.
Senate Bill 7263 would prevent AI chatbots — defined broadly as any system that simulates “human-like conversation” and provides information or services — from generating responses that would amount to the unlicensed practice of a profession like law, medicine, finance, or mental health, if that profession is normally provided by a human. The bill also authorizes lawsuits against chatbot companies that generate such responses, and makes clear that disclaimers alone are not enough to avoid liability.
It would be one thing if the bill were limited to unlicensed professional conduct. The state can restrict AI-powered robots from performing surgery or prescribing medicine. But some activities restricted by the bill affect only speech. And the legal context is especially murky.
New York Judiciary Law § 478 prohibits practicing law without a license, and courts have interpreted it to include providing legal advice or opinions tailored to a specific individual. When applied to human speakers, this existing law is already quite broad and raises serious free speech concerns.
Outside professional settings where someone holds themselves out as a lawyer, this law can reach ordinary, situation-specific guidance when it crosses into applying legal rules to someone's circumstances. A casual conversation with a friend about negotiating rent gets into tenant rights and strategy. A suggestion on how to dispute a credit card charge turns into guidance on asserting rights under the card's terms and conditions. At that point, a friend's advice can start to look like regulated legal advice: tailored, actionable guidance about someone's legal rights or obligations.
That risk isn’t hypothetical. New York has taken the position that a nonprofit program that trained non-lawyers to help low-income people fill out debt collection answer forms could violate unauthorized practice of law rules. In other words, even structured, form-based guidance tailored to a person’s situation in responding to a lawsuit can be unauthorized practice of law.
Given how unclear those lines already are, SB 7263 risks capturing not only chatbots pretending to be lawyers, but also ordinary back-and-forth exchanges between a person and a chatbot. If someone asks whether crossing 34th Street outside a crosswalk on their daily commute is illegal, that personalized question could be treated as legal advice, creating pressure for chatbots not to answer.
Again, it would be one thing if a developer falsely claimed its chatbot was a licensed New York attorney, and it offered to write someone’s will. Laws targeting that kind of deception fit comfortably within existing fraud and consumer protection frameworks. But this bill goes further, reaching chatbots that make no such claims.
As a result, the approach taken in SB 7263 targets pure speech. AI systems generate responses to user questions, often in a conversational, tailored way. But treating those outputs as the unauthorized practice of law makes liability turn on whether the response is deemed "legal advice." What is permitted or prohibited thus depends solely on what the AI system says, regardless of whether it reflects ordinary, back-and-forth exchanges of information. In doing so, the law blurs the line between protected expression and professional conduct.
If we’re purely talking about speech, whether it’s a person speaking or a developer’s chatbot providing a response, it should be free from government interference unless it falls into one of the narrow categories of unprotected speech like fraud. But states have long regulated professional conduct, and courts have upheld those rules. In Upsolve, Inc. v. James, a federal court recognized New York’s authority to enforce its unauthorized-practice-of-law statutes. However, FIRE is concerned about laws restricting the unauthorized practice of professions to the extent they reach pure speech. Such regulations raise serious constitutional concerns and warrant the highest level of judicial scrutiny, as they risk chilling valuable information sharing.
Another problem is that, in order to avoid costly liability, AI companies will predictably limit outputs, restrict features, or avoid certain topics altogether. Unless you can afford a lawyer for every law-related question you might have, the result is likely to be reduced access to information. For those representing themselves in court, that means fewer tools to understand the law, prepare filings, and navigate the process.
Even at this early stage, AI tools are already helping people navigate complex legal issues where traditional institutions fall short. New research released in March documents a sharp rise in people representing themselves in court alongside growing evidence of AI use. Notably, this increase was tracked in federal courts, which impose more demanding procedural and jurisdictional requirements than state courts, making the trend all the more striking. Separately, Stanford’s Justice AI Co-Pilots helps legal aid organizations navigate complex rules, draft filings, and manage high-volume cases, making a strong case that AI has a place in legal problem-solving.
As AI becomes a central tool for how people learn, ask questions, and engage with ideas, the legal frameworks that apply to it matter. But so does the distinction between professional conduct and free expression. And that balance will only become more important as these technologies continue to evolve.