So to Speak Podcast Transcript: Social media = cigarettes?
Note: This is an unedited rush transcript. Please check any quotations against the audio recording.
Nico Perrino: I do want to ask, and I’m derelict in my responsibilities here as a host that it took me 45 minutes to get to this question, but what about the comparison of social media to cigarettes? That’s the one I’m hearing all the time as I’m debating this online. What do you make of this?
Mike Masnick: Yeah.
Video: “Somewhere I read of the freedom of speech.” You’re listening to So to Speak: The Free Speech Podcast, brought to you by FIRE, the Foundation for Individual Rights and Expression.
Nico Perrino: Welcome back to So to Speak: The Free Speech Podcast, where every other week we take an uncensored look at the world of free expression through the law, philosophy, and stories that define your right to free speech. I’m your host Nico Perrino. Last week, juries in California and New Mexico delivered seminal verdicts holding Meta and YouTube liable for failing to protect young users from harm.
In California, a 20-year-old plaintiff known by her initials, KGM, argued that it wasn’t social media content that contributed to her mental health issues but rather how the platforms were designed. She testified that using social media as a teen became an addiction that worsened her mental health problems. The jury agreed, awarding her $6 million in damages.
In New Mexico, the state secured nearly $400 million in damages after accusing Meta of failing to protect kids from child predators. Both verdicts found that the companies were negligent in the design or operation of their platforms and that each company knew their platforms could be dangerous when used by a minor.
Absent from these verdicts was a consideration of the First Amendment’s free speech protections as well as Section 230 of the Communications Decency Act, which is the 1996 law that provides a liability shield to interactive computer services that host third party content. The courts found that the design elements of the platforms could be separated from the content hosted on the platforms, thus removing the need to consider the First Amendment or Section 230.
Joining me to break down the rulings and their possible free speech implications is Mike Masnick, CEO and founder of Techdirt and the Copia Institute. Mike, welcome back onto the show. It’s been a couple of years.
Mike Masnick: Yeah, it’s been a while. I was trying to remember how long it’s been. I’m not even sure. But I’m glad to be here.
Nico Perrino: I think you were one of my early guests and this podcast has been going on now – I think it’ll be 10 years, actually, in April. So, it was probably, I don’t know, 2017, 2018, or 2019 that I had you on.
Mike Masnick: Wow. Okay.
Nico Perrino: It’s been a beat.
Mike Masnick: Yeah. Yeah.
Nico Perrino: Well, your name came up in my newsfeed on social media recently because you have an article that’s circulating among free speech circles and that we circulated within FIRE. The headline for that article is “Everyone Cheering the Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For.” I think for a lot of people in our community it was really clarifying because there was this debate over the verdicts and what free speech implications, if any, they had. Your article does a really good job of laying that out.
But you begin this article by pulling no punches against Meta. You said, “Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust and safety.” You don’t really dive into that too much. You have a bunch of links that I’m assuming give you the context there. But for our listeners, what problems do you have with Meta? We’ll talk about that before we get into the problems with these verdicts as you see them.
Mike Masnick: Yeah. How much time do we have? I tend to think it’s not a very good company, that it is not run thoughtfully. I don’t think it is run in the interest of the wider internet. I am at heart a believer in having a wide open internet that enables more speech and enables more empowerment of different users online, and I find not all but most of Meta’s products to be antithetical to that. Some of the decisions that they’ve made about how they exist and how they contribute to the open internet and how they interact with other players have done a lot of damage to the open internet.
I think that in their pursuit of pure growth without thinking about the long-term consequences of what they’re doing, I think they’re undermining their own ability to – they rely on the open internet. They’re built on and based on the open internet. They’ve made a bunch of decisions that I think are really damaging to that, both in terms of how they roll out their products but also their policy and legal decisions. So, I’m no fan of the company. I think, in an ideal world, its products would become much smaller. I don’t think they need to go away, but I think that there should be much broader competition against them and better services that put more power in the hands of users.
I just don’t really think very highly of them, but that doesn’t mean that I think any lawsuit that goes against them is there for a good verdict or a good result.
Nico Perrino: The problem I have with the company is it doesn’t seem to have any fixed principles.
Mike Masnick: Yes.
Nico Perrino: So, you have Mark Zuckerberg going Joe Rogan and apologize for content moderation policies over the past couple of years, which were in keeping with the general climate of social media content moderation over the past couple of years. Then, Trump gets elected, there’s a vibe shift, and Mark Zuckerberg apologizes and also blames the government for some of that content moderation.
In the free speech world, we tend to most admire those people who stand on their principles when they’re the most challenging to defend, and often that’s an environment where you are being subjected to threats of regulation or punishment one way or another. It’s very easy to take a principled stand on something when everyone agrees with it.
Mike Masnick: Yeah. Yeah. You’re exactly right, that I think there are very few principles at work there. The funny thing is, and this just came out in the last few days, literally 24 days – I counted this – after Zuckerberg went on Rogan to talk about how he had learned his lesson and he wasn’t going to take pressure from the government to remove any content –
Video: We got to this point where there were these things that you just couldn’t say which were mainstream discourse. So, Pete Hegseth is going to probably be defending his nomination for Secretary of Defense on the Senate floor. I think one of the points that he’s made is that he thinks that women shouldn’t be able to be in certain combat roles. Until we updated our policies, that wouldn’t have been a thing that you could’ve said on our platforms because it would call for the exclusion of a protected category of people.
Mike Masnick: He texted Elon Musk, who was then a powerful government official, saying, “You just let us know if anyone says anything bad about DOGE and we’ll take it down.” It’s one thing to --
[Crosstalk]
Nico Perrino: I did see that. I did see that. I didn’t realize it was right after he went on Rogan, though.
Mike Masnick: It was 24 days. I counted.
Nico Perrino: Wow.
Mike Masnick: And so, the idea that he has a principled stance I think is seriously undermined by just that one example, and I can come up with many more, of his lack of direct, real principled support for free speech like that.
Nico Perrino: The challenge with being a free speech advocate is you’re often defending unpopular speech and unpopular speakers. Both of those are at issue in these two verdicts. I should say we often spend our time here at FIRE defending the free speech rights of people who, if they were in power, would limit the free speech rights of others. This goes back to the seminal Skokie case when the ACLU defended the Nazis. A lot of the communists in the 20th Century wanted to limit free speech rights.
I interviewed David Cole, the former legal director of the ACLU, who talked about when he was litigating the Texas v. Johnson case and representing the Revolutionary Communist Party and its appeal to the Supreme Court to protect flag burning as protected speech. He said that the Revolutionary Communist Party kind of muted his rhetoric around the First Amendment because they didn’t totally buy into everything that the First Amendment protected. So, it’s not just those who defend free speech who get free speech rights and sometimes us who are defending free speech rights need to recognize that.
Nevertheless, you write in the second paragraph of this article here that if you care about free speech online these two verdicts should scare the hell out of you. Why does it scare the hell out of you here?
Mike Masnick: Yeah, I mean, it’s really looking at what it means. I, again, don’t really care what it means for Meta, but the rulings apply to lots of other sites. They’re verdicts. They’re jury verdicts, but what they imply and what they will enable will impact every website, or every situation where users might figure out how they wanna display someone else’s content. If you put up your own website, even in some cases sharing email, it could come into play with these kinds of things.
The impact here is basically saying the way that content is displayed suddenly could make you liable for any downstream harm that might come from that content, even if it’s a fairly tenuous connection. That opens up a field day for lawyers to sue all sorts of people who had nothing directly to do with the harm if there’s any harm downstream. People have lots of different opinions about plaintiffs’ lawyers and tort law and what kinds of things are good for it and where it might go overboard.
But this really throws open the floodgates and says if there’s any harm find any kind of online service that this person used that you can somehow tie, even very loosely, to that harm from content that they had nothing to do with, they were just displaying. They were hosting, or showing, or recommending in some way and you can argue that it was a design fault and that you’re not suing over speech because the speech is First Amendment protected. But the design choice itself is somehow a way to route around the First Amendment.
Nico Perrino: Yeah. But, let’s break that down a little bit.
Mike Masnick: Sure.
Nico Perrino: I think if you’re someone who’s just reading the headlines here, you’re not steeped in First Amendment doctrine. You’re not familiar with Section 230. You might say, “Okay, I do feel like social media is addicting and I do feel like some of the design elements of social media are addicting.” That could be separated out from the content of the speech, right? I actually saw a pretty good summary of this argument from Brandon Gorrell, who works for the tech news show TBPN, who summarized the plaintiff’s arguments in the California case – two separate cases.
He summarized them this way. “In the case, the plaintiff’s lawyer Mark Lanier argued that Meta and YouTube built digital casinos that use neurobiological techniques similar to those employed by slot machines. The jury found that specific features of Meta and YouTube are designed to be addictive: infinite scroll creates an environment where there are no natural stopping points; algorithmic recommendations feed users highly engaging content; autoplay removes users’ agency in choosing whether to watch the next video; notifications pull users back in by exploiting their need for validation.
“Instagram beauty filters contributed to the plaintiff’s body dysmorphia; features like the Like button exploit users’ biological need for social approval,” etc., etc., etc. You can see there that there’s not a lot of talk about the specific content that those design elements are working for, but just the kind of design of how this content gets fed to you. That’s what the plaintiff’s lawyer is arguing can be regulated and can be separated out from the speech considerations of the First Amendment and the liability shield of Section 230. How do you respond to that?
Mike Masnick: Yeah. I mean, there’s a few different things. The biggest one, and I know this got some play because of the article, is that I talked about, if you try and separate the content from the design, see what happens if you use those same product design features with content that is not particularly interesting. The example I used was paint drying. If all the videos on Instagram were paint drying, do we think that the infinite scroll would matter? Do we think that the notifications would matter? Do we think the autoplay would drive people to just keep going back?
It is not those features that are doing it; it is the underlying content. And recommending or encouraging people to see content that they find appealing and interesting – it’s tough to say that that violates the law, because everyone does that. You walk into a bookstore and there are books on the shelves in front of you that have special table displays set up. Is that trying to addict you to a book? I don’t think so.
And you can say, “Well, that’s different. That’s a one-off thing.” But, over and over again, as you try and break these things down, what you’re really coming back to is the underlying content is the thing that is driving interest. There are concerns. I mean, everybody has this feeling of sometimes I use my phone too much and I would like to put it down. But there’s a difference between doing too much of something and having a habit that is not healthy for you and being addicted. This is actually a really important distinction.
There was a study that came out – I think it was in November, just a few months ago – that really tried to look into this whole question of social media addiction. They actually asked people how many people thought they were addicted to social media. I don’t have the exact numbers in front of me. I think it was 39%-40% of people said they were addicted to social media. And then, they went through this process to actually test that and there are ways that I don’t understand. This is not my field, but there are ways to test if you’re showing signs of addiction.
I think it includes things like withdrawal symptoms if it is taken away from you. Do you have symptoms of withdrawal that are damaging and harmful? In that case, they found that it was a very, very small percentage. I think it was under four percent. Maybe 10% of the people who actually thought that they were addicted to social media actually showed signs that they really were. They were saying for most of the other people it’s maybe just a bad habit. Lots of us have bad habits, but you treat habits, or how to change habits, very differently than you treat addiction.
There are lots of tips, tricks, and ways to change the habits that you engage in if you have bad habits. Lots of us do. But there are ways to deal with that that are very different than addiction treatment. The concern in this study, which I thought was really interesting, was that if you call something that is just a bad habit an addiction, it actually makes it harder because you sort of give up and you’re like, “Well, there’s nothing I can do. It has overpowered me.” Addiction is something that is, by definition, overpowering for you.
Or if it’s just having a bad habit, there are lots of things you can do to fix those habits. But if you think it’s an addiction rather than a habit, it leads to bad places. In fact, part of what that study said was that the media discussion of social media addiction was causing more people to think they were addicted and therefore not making an effort to fix bad habits. In some ways, the media discussion of it was probably making the problem way worse than any actual features of social media addiction or bad habit forming.
Nico Perrino: Well, I have this bad habit where, when I’m watching a really good television show and an episode ends on a cliffhanger, and you’re watching Netflix and they dump all the episodes at the same time, and you can keep watching through to the end of the season. I have this bad habit of staying up late into the night and not being able to pull myself away from that show. Now, I made this argument, and many other people have made this argument as well on social media. People don’t seem to buy it. There’s always this desire among content creators and people and institutions that host content, be they newspapers or Netflix, to try and make their content more appealing.
The way you lay out a newspaper is often done to keep people reading. Writing, even. That is a technology. Language is a technology that, over the years, we learned how to make more appealing to get people to keep reading on. When I go to Rome, for example, and I see Latin, it used to be that you would have these big texts of words in one long paragraph with no spaces between the words, with no punctuation, with no paragraph breaks. That was a technology that was invented over the years in order to make it easier for people to write.
Axios has revolutionized this in a certain way. They have their Smart Brevity tool to make content more easily digestible. If you’re saying that the way content is delivered is separate from the content itself, you could regulate grammar and spelling. I’m taking this to absurd lengths. You could regulate Smart Brevity. You can regulate, according to the judge who dismissed some of these arguments, the way the content’s delivered. Is that an extreme, stupid argument?
Mike Masnick: No, it’s not. I’ve certainly gone through the experience of reading a book where each chapter ends on a cliffhanger. I’m like, “Oh, just one more. Just one more,” and staying up late into the night. But there is this feeling that people have that it’s different once it’s on the computer. I think it’s the newness of being on the computer or on the internet that sets people off. As soon as you start to go down this path, it’s an incredibly slippery slope of saying, “If you’re getting people to stick around and consume more of that content, that is now a design choice and has nothing to do with the content and therefore is not protected by the First Amendment.”
I think it opens up all sorts of really, really dangerous downstream effects. It’s like every website, not just Meta, Google, and whoever else, is looking for ways to have people spend a little bit more time to enjoy the knitting forum that you’re on, or the Discord server that you’re talking to your friends in.
Nico Perrino: There are whole companies that are set up to help look at a website’s user experience and help improve it so people stay on the site longer. It’s often done with the intention of people finding content that they want to find. I think that’s one of the things people miss about these recommendation algorithms in particular. They are taking your past interests, the things that you’ve looked at in the past, the content you spend the most time with, and trying to find other content posted by other users that you might be interested in that match those previously demonstrated interests.
These algorithms, as you say, wouldn’t work if it weren’t for the content and assessing your interests. Those are all First Amendment free speech considerations.
Mike Masnick: Yeah. Absolutely. I understand where there is this internal, instinctual fear that a lot of people have, which I fully understand. A lot of these, especially companies like Meta and Google, are very big companies that have very powerful teams that spend a lot of time trying to figure out how do we do that better than anyone else. For an individual, I can see the feeling of powerlessness that you have. You say, “How can I push back against this, when we have this big company that is doing all these things?”
But that’s also the society that we live in. We’ve always had magazines and TV shows that try to keep you watching. That is part of culture, and it’s the kind of thing where I actually think that is where we should be thinking in terms of education: educating people how to use these things better, how to push back, how to have more control over their own digital environment, and how to use tools that give them more control. There are tools out there that will help you. There are tools that tell you how much screen time you’re using or even cut you off.
The one I know a lot of people really like – I haven’t tried this yet – is one that turns your phone’s screen into grayscale. Somehow, that gets people to maybe not spend as much time. There are a number of different factors here that we can deal with the potential downside issues that some people have in having trouble putting down their devices or shutting their laptop or whatever it might be, that don’t involve regulating speech. These lawsuits in particular were really about regulating speech.
Nico Perrino: Mike, what do you say of the internal documents that were revealed that Meta knew about these harms, particularly to minors? Those didn’t look good. That’s not a great image to sell.
Mike Masnick: Yeah. And those were incredible headlines, and people were talking about a smoking gun. You had these reports where people internally at Meta in particular were saying, “Oh, this is going to be dangerous. There’s going to be harm.” The narrative is really easy, especially for the media, saying, “Oh, Meta knew that this was harmful and they did nothing.” The reality is pretty different when you begin to look at it, which is that all of these companies, especially the big companies, do have trust and safety teams.
I know that in some circles, the whole phrase trust and safety is really scary. People think, “Oh, it’s the censorship team.” No, it really is about how we build a safer product that people trust and feel comfortable with. Part of what those people do -- and sometimes they’re researchers, sometimes they’re lawyers, communications people, and all different kinds of roles – is to think through the potential downsides of this. And then, the company has a discussion and they weigh the different tradeoffs.
So, what was quoted in these things were just bits and pieces of concerns that people have raised, which is what you should want. You should want companies like Meta to have people internally who will raise concerns that allow the company to then weigh those concerns against concerns of others. And if you only hear one set of concerns, then it can look really bad. So, as an example, some of the concerns were saying, “This kind of content sucks people in and they --
[Crosstalk]
Nico Perrino: Keep consuming it.
Mike Masnick: Keep consuming it. But what you don’t see are other concerns that people raised, which is what happens if we’re cutting off certain kinds of content, and there’s now an increasing number of studies on that. The ones that I’ve seen a lot of are on eating disorder content. This is an interesting example, where eating disorder content, I think, most people agree is probably not very good. It’s not healthy. You don’t want to encourage people to have eating disorders. It’s a real problem, especially among younger people; often younger women have real problems with eating disorder content.
So, there were all these studies that were done. Meta, in particular, cut off various accounts that were promoting eating disorder content. But when the studies followed up on it, they found that it was more of a demand side problem than a supply side problem, which was that there were people who were really desperately looking for that content. And when Meta cut off the accounts that had that content, they went elsewhere that were less well protected, that had less helpful resources, and they went deeper into these subcultures and problematic behavior.
Whereas, when that content was actually on Instagram – which was mainly where it was – there were lots of people in there providing help resources, including Instagram, if they saw people were going to eating disorder content. It was pointing them to resources to help and popping up little things. If you have a problem with an eating disorder, go here.
Even more importantly, there were other users who had recovered from eating disorders who were going into the comments and saying, “Hey, this is unhealthy. You should look at this. You should think about this. Here’s a process. If you want to contact me, I can help you.” There’s an indication that, if you jump in and say get rid of eating disorder content, it can actually lead to worse results, because the helpful bits of that – the people who were helping, the tools that were helping, and the resources that they were being driven to – go away, and the users then go to more problematic websites with even fewer controls and less help.
So, there are really tricky tradeoffs to all this stuff. Those discussions are happening inside companies like Meta. If you just pick and choose one side of that and you say, “Oh, these researchers raised these concerns,” and you don’t show the other side, which is saying, “Well, if we cut that off or if we change that, then other people are going to have these other problems.” Then, you have to weigh the decisions.
I’m not saying that Meta made all the decisions that I would make in those circumstances if I had to weigh the evidence. I might’ve come down differently. I probably would have, as I indicated earlier. I don’t trust Mark Zuckerberg. I don’t trust his decision-making. But the fact that you take one bit of these internal concerns out of context like that means in the long run that now every company in Silicon Valley is going to stop having those internal discussions because those internal discussions themselves, when taken out of context, are going to be used against them in court and you’re going to get these same kinds of headlines.
You want companies to have people inside to say, “Hey, this might be a problem,” even if you later decide, “Well, yes, but if we don’t do it then there’ll be these other problems. When we weigh the two things against each other, this is the better approach. We can put in some changes and maybe make it less of a problem.” There’s all different ways of approaching it. Everything about this is tradeoffs. Doing nothing is a tradeoff. Doing something is a tradeoff. Each thing has tradeoffs. What these companies have to do is weigh the different tradeoffs.
We can all disagree, and I think everybody would disagree – no two people have the same sense of what is the right level of these tradeoffs. Saying that just because someone raises a concern and the company still does some of it doesn’t take into account all of the other things and all of the other people and the harms that might come from making a different decision. One of the examples in the New Mexico case in particular was the issue of encryption. That was where Meta for a while was trying to offer end-to-end encryption in its messaging services, which many of us think is a really important security and privacy device that protects millions of people and their communications.
It’s incredibly important in keeping people safe. Lots of people are using encrypted communications to escape domestic violence, to protect themselves, to blow the whistle on things. It’s an incredibly important tool. But in the New Mexico case, the fact that Meta offered end-to-end encryption and somebody internally had raised a concern that this could be used for sharing of child sexual abuse material, that was used as an indicator that they had known about the harm and still went forward with it.
But again, there are tradeoffs here. If you don’t have end-to-end encryption, you’re putting millions of users at risk, their messages are at risk, they can’t be secure, they can’t be private. You have to weigh these things and make different decisions but in this particular case, what New Mexico was saying was that just the decision to use encryption and have somebody raise a warning about it is evidence of them being negligent or careless. That’s really, really worrisome.
Nico Perrino: Yeah, you write in your piece, “If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements.” I thought I read somewhere that Meta is actually removing optional end-to-end encryption for Instagram direct messages.
Mike Masnick: They did. Yeah.
Nico Perrino: I think it’s going to take effect May 8th. I don’t know if that was a result of this case or seeing the writing on the wall, but, yeah. If it can be used against you in a court and result in a $400 million judgment, that would seem to be a problem. But separate from weighing the tradeoff of harms, if you’re looking at this from a First Amendment perspective, we don’t have this catch-all category of unprotected speech for harmful speech. I mean, we have defamation, we have speech integral to criminal conduct, we have incitement to imminent lawless action, we have fraud. All of this stuff can constitute unprotected categories of speech.
We have a separate category which applies just to minors, which is content that is obscene as to minors. And that generally is porn. But beyond that, we don’t have other categories of content that just apply as unprotected categories of free speech to minors that don’t apply to adults. Actually, the Supreme Court had considered expanding the categories of unprotected speech for minors back in a 2011 case where they assessed the harms or alleged harms of violent video games.
The court ended up saying no, we’re not expanding the categories of unprotected speech. So, just the idea that even some speech might be harmful to some users does not mean that it is exempted from First Amendment protection. I think one of the challenging things here, Mike – and I’d like to hear your opinion on this – is that in both of these cases, if I understand correctly, both were in state court, and the judges during the motion to dismiss – or in California, I think it’s called a demurrer stage – said, “No, jury. You can’t consider the First Amendment. We’re throwing out First Amendment protections.
“We’re throwing out Section 230 liability shields when you’re considering the company’s actions here.” So, the juries couldn’t even consider that the First Amendment might protect these companies from being held liable for these alleged harms. I mean, do you think that was a right decision by the judges? Can you help us explain – we’ve talked a little bit about the First Amendment here. But there’s this other protection that these companies have when distributing content, and that is Section 230 of the Communications Decency Act of 1996.
Mike Masnick: Yeah. I mean, there’s a lot to unpack there. The difference between the judge and the jury is that the judge is supposed to handle issues of law and the juries are supposed to handle issues of fact. So, usually what happens in a normal case, if there are such things as normal cases, is that these sorts of lawsuits get thrown out before they ever get to a jury, because the judge will look at it and say, as a matter of law, these are really attacking the First Amendment or should be blocked by Section 230. Just as a quick primer on Section 230 – I’m assuming people who listen to your podcast probably have heard --
[Crosstalk]
Nico Perrino: I’ve had Chris Cox, one of the authors of Section 230, on the podcast. They should be generally familiar. But we might have some new listeners coming in.
Mike Masnick: Sure. Just as a really quick refresher, the simplest way to think about Section 230 is about where the liability falls – some people say, “Oh, it just gets rid of all liability.” That’s not true. It’s just saying that the liability is placed on whoever created the violation in the first place. Whoever spoke the speech that is violative in some --
[Crosstalk]
Nico Perrino: So, if I go on Instagram and I defame you, Mike Masnick --
Mike Masnick: Yes.
Nico Perrino: -- I am liable as the speaker. Instagram isn’t liable as the host or publisher.
Mike Masnick: Exactly.
Nico Perrino: Why is that? Why did we create the special statutory protection?
Mike Masnick: Because without that, then you would have no company willing to be an intermediary to host your speech. There is so much speech that flows through the internet. I should be very, very clear here because there’s a frequent confusion over Section 230 – people think it means that platforms need to be neutral conduits of speech. That is the exact opposite. The law was designed explicitly to allow companies to design their service however they wanted to cultivate the types of communities that they wanted. The example that I’ve heard Chris Cox and Ron Wyden both give – those are the two authors. You mentioned Chris already.
Nico Perrino: Yeah.
Mike Masnick: It’s that you wanted people to be able to create a gardening community and to say to people, “No politics here because this is just about gardening.” Section 230 specifically and explicitly allows that because it’s saying you as the website holder, owner, or intermediary service can make your own rules and we’re not gonna hold you liable for the speech that then takes place within those confines.
[Crosstalk]
Nico Perrino: So, the speech that you host and decide not to moderate and the speech that you host and do decide to moderate.
Mike Masnick: Exactly. Exactly. So, it covers all of that. The idea is to encourage more entities to be willing to host that speech, promote it, and share it. Some people now are saying “Oh, well, it shouldn’t apply to algorithmic recommended speech.” But that, again, would go against the very principles of 230, which is that it is trying to encourage different services to decide how they want to share content, what content they wanna host, what content they want to promote, how they want to display it, which was what was at issue in these cases.
An important part of Section 230 is that it is very clear that no state law can override Section 230. There’s what’s called a preemption clause. These are state cases, so in theory that clause should have killed them at the very earliest stage. The real benefit of Section 230 is that it gets these cases kicked out at the earliest stage, when it is not that expensive.
I think it’s important for people to understand this. Generally, what people say – and I’ve spoken to a bunch of lawyers about this over many, many years – is that getting a case kicked out on Section 230 grounds costs in the range of $50,000 to $100,000. It’s a lot of money for me, probably a lot of money for you, not so much money for a big company. You can get the case kicked out at that stage. If you have to go much further, if you have to go beyond that initial motion to dismiss stage, as we just saw with these cases, to summary judgment or really all the way through a trial, it’s many, many millions of dollars.
That’s an orders-of-magnitude difference. Even though a company like Meta or Google can probably handle those expenses, there’s a very short list of companies who can handle multiple of those lawsuits.
Nico Perrino: This is a point you often make in your writing, that this really precludes small companies rising up and competing in this marketplace. If you remove Section 230, Meta, Google, or Alphabet – whatever you wanna call it – TikTok. These companies can fight this. But upstarts can’t. So, if you’re worried about the monopolistic practices of these big tech giants, then you really gotta like Section 230.
Mike Masnick: That is the key thing. Again, I would like to see Meta not be so powerful. I would like to see there be lots and lots of competitors. I would like to see Meta have way fewer users for their platforms. But if we don’t have the full protections of Section 230 working, we get the opposite of that. Meta can handle these lawsuits. Google can handle these lawsuits. Almost every other provider, depending on size – some of the more midsize ones maybe can handle one of these lawsuits. But smaller companies can’t handle these lawsuits at all, which means either they don’t get into this business at all – they don’t exist in the first place – or they’re susceptible to any threat of a lawsuit.
If anyone comes and threatens a lawsuit, you immediately take down the content and the sites themselves no longer have the authority to decide what kind of site that they want to have and what sorts of decisions they wanna make. And that’s not a good world.
Nico Perrino: Yeah. There are two arguments that I hear in response to this in the broader discourse surrounding the verdicts. One is why should social media or other interactive computer services companies get this special protection when newspapers don’t? For example, someone I was debating on X, Claire Lehman, who publishes Quillette, doesn’t understand why she could be held liable for defamation that she publishes on her website but social media companies can’t be held liable.
Mike Masnick: Yeah. Well, I don’t read Quillette enough to --
[Crosstalk]
Nico Perrino: You have to know a little bit about her website to know whether it gets 230 protections.
Mike Masnick: Yeah. If she has comments or is reposting other people’s work in any way, she would have 230 protections, just as newspapers with comment sections also get 230 protections. But also, it is important to recognize that Section 230 protects users as well. Everybody focuses on the platform side, the ICS – interactive computer service – part of it. But it also says users. That’s been useful. There have been cases where people have forwarded an email and the email itself was defamatory. But because they merely forwarded it, they were protected by 230. There are cases --
[Crosstalk]
Nico Perrino: Retweeting.
Mike Masnick: Exactly. That was the other one I was about to do. If you retweet something – which everyone does all the time – you’re scrolling through your feed, addictive as it might be, and you see things and you’re retweeting them. If you retweeted something that was later held to be defamatory, should you be held liable for just clicking that retweet button? Section 230 says no. The person who spoke it can still be held liable and they can be sued and have to go through the process of that. But merely for retweeting, you shouldn’t.
So, Section 230 is not just a special carveout for platforms. It’s a special carveout for users of the internet, too. We now have places where we can speak and places where we can do things like send emails or retweets or any of this stuff. That exists because we have the protection of Section 230, which makes it possible for intermediaries to offer these services to us.
Nico Perrino: Yeah, if you talk to Chris Cox, and I’m assuming Ron Wyden, who I’ve not spoken with about Section 230, they’ll say, “We were just recognizing the new nature of the internet and how it democratizes speech in a way that just didn’t exist before, and it needed some statutory protections in order to become what it became.” I don’t know about you, Mike. I’d like to hear your opinion on this. Do you think social media could exist without Section 230?
Mike Masnick: I think it would be very, very different. People want to communicate. People want to form communities. It’s a natural part of human society. Some people will raise the point that Section 230 only exists in the US, and yet we have social media in other places – there’s a discussion to be had there of why that is and how that works. But without Section 230, or something akin to Section 230, what you will most likely have is something that is much more like a broadcast kind of system where things are heavily locked down.
The content is very heavily gatekept. Only limited speech is allowed. It has to be reviewed and approved. It becomes more like television than an internet where you have this wide open world where people can discuss and share expertise and argue and debate and all of these things.
Nico Perrino: Which is interesting because you have a lot of conservatives coming down on Section 230 in Congress right now. I’m like, haven’t you guys spent the past five years criticizing social media censorship? If you get rid of Section 230, it’s just gonna supercharge it.
Mike Masnick: Yes. It will almost certainly go in the opposite direction. I don’t understand why there’s so much confusion about Section 230. I know that it is frustrating to both Chris Cox and Ron Wyden. They’ve tried to explain it. Chris has obviously been out of Congress for a long time. Senator Wyden is still a senator and I know that he talks to his colleagues all the time on both sides of the aisle. There is some sort of very odd blind spot. I think it’s the same blind spot that caused a lot of people to cheer on these particular cases, which is that they don’t like the companies. Again, I don’t like the companies either, but --
[Crosstalk]
Nico Perrino: You’re more charitable than I am. I actually think a lot of members of Congress know what would happen if you got rid of Section 230. It’s just an easy punching bag because, as Derek Thompson wrote in his recent article, “The Smart Phone Theory of Everything,” everyone wants to blame all of society’s ills on the smartphone and social media and all that.
Mike Masnick: Yeah. I mean, there’s also the political context. This is a very cynical take, but I’ve heard it from other people, and the longer I’ve watched these things, the more it feels right: as a politician, especially in Congress, one of the best ways to make sure that your campaign coffers are continually refilled is to pit big industries against big industries and set off big fights. Suddenly, lobbyists show up and donations show up and everyone is trying to pick off everybody on either side. It’s incredibly cynical and it’s kind of depressing if you want a good representative democracy working for the people.
Nico Perrino: It’s kind of funny. I was seeing something on X as I was scrolling in my addicted way today, and someone said that Congress has just become a social media operation and not a legislative operation. It would be interesting to see what would actually happen to members of Congress if these social media companies could be held liable for some of their posts.
Mike Masnick: Oh, yeah.
Nico Perrino: I mean, we’ll put some of that to the side now because we’ve talked about Section 230 on a lot of previous episodes. But I do wanna ask – and I’m derelict in my responsibilities here as a host that it took me 45 minutes to get to this question – but what about the comparison of social media to cigarettes? That’s the one I’m hearing all the time as I’m debating this online. What do you make of this?
Mike Masnick: Yeah. I would say it’s ridiculous. Again, it’s an emotional comparison. But cigarettes are not speech. The simplest version of it: cigarettes are delivering a chemical into your body that we know is harmful. There are very clear long-term studies showing the direct, clear, causal harm of nicotine and smoke in your body. That is not true with speech. We don’t have that, and there are many, many uses – I would argue many more cases where speech is helpful, useful, community building, and valuable than there are examples of speech that is in some way harmful, or painful, or problematic.
I think the comparisons are ridiculous. I understand the emotional appeal of them. There’s an emotional appeal because speech can paint pictures and make things emotional. But the comparisons I always hear are cigarettes and lead paint. Both of those are poisonous material, literal poisons, delivered into your body that you consume physically. Speech is not that. The comparison to me makes no sense and is extremely frustrating because it’s --
[Crosstalk]
Nico Perrino: Well, people will grant you that, Mike. I’ve seen that on social media. They’ll grant you that, but they’ll say, “But you can’t advertise cigarettes to minors.” To that, I’d say it’s a fundamental misunderstanding of the case law. It’s not clear from the case law that you can’t market cigarettes to minors. In fact, the reason these companies don’t is because there have been a lot of settlements where they’ve committed to not doing that.
Mike Masnick: Yeah. Honestly, I think part of it too is just the public narrative. Marketing cigarettes to children looks bad. One of the things that I always try and remind --
[Crosstalk]
Nico Perrino: Yeah, it’s not good marketing.
Mike Masnick: Yeah. One of the things that I really try to remind people of – and this is unsatisfactory for some people – is that incentives are not just law. There are all sorts of incentives out there that go beyond the legal mandates. You want the public to like and support you, in lots of cases, depending on the business. In the social media context, we always think about advertisers, the business model. They have a lot of influence on how social media works. Advertisers often will push the companies to change policies.
Nico Perrino: You might get sued by Elon Musk, but nevertheless.
Mike Masnick: Well, that’s a different podcast. But there are other incentives out there. Users. This is one of the things – I know people who dislike social media, again, for potentially good reasons, go on and on about how all the platforms want to do is create engagement bait and enrage you and get you all pissed off and all these kinds of things. But there have been studies showing that that is not a good long-term business strategy for any of these platforms, because at some point you will put down your phone and the next day you’ll say, “Gosh, I didn’t enjoy that. I don’t think I’m going back anymore.”
There were studies done on Meta and on YouTube and on a few other platforms that found that when they were just promoting the most rage-inducing kinds of content, while it may have kept people on their devices in the short term, after a period of time those people just abandoned the platform altogether. A lot of the platforms have adjusted their algorithms to say, “Don’t just show people the most rage-inducing content, because that’s not a good long-term strategy.”
The users themselves pushed back on these kinds of things when the companies were doing bad things – and sometimes it takes a while. Sometimes, you might disagree with where or how long they do things or what tradeoffs they decide to make. Yes, obviously these are big, corporate entities in a system of free market capitalism – or as free market as we are – and they want to make money, a lot of money. But there are all sorts of incentives over how these things play out. To think that the only solution is just to pass a law that says ban this or punish these guys is, I think, a very short-sighted take.
Nico Perrino: You wrote an article in 2023 titled “Yet Another Massive Study Says There’s No Evidence That Social Media Is Inherently Harmful to Teens.” You have a bulleted list of all the studies where it says there might be some correlation, if that, but there is no direct causal evidence that social media is harmful to kids. Where does the literature stand now? Are you keeping up with it?
Mike Masnick: Yeah. Yeah. There have been more studies. In fact, there were just two different very large scale studies of teenagers in particular. One was done in the UK and one was done in Australia. I’m forgetting the exact numbers. I think the Australia one was 125,000 teenagers and I think the UK one was either 25,000 or 50,000, and came to the same results, which was that there is very little evidence of causal harm.
I try to be very clear because the different studies all have slightly different methods of teasing this out, and they have slightly different results. What the evidence really suggests is that there are some people, some percentage of people – again, from most of the research, it seems to be in the five percent range – who have real trouble: significant problematic behavior associated with their use of this technology. That is a real concern, and we should be looking at ways to identify and help those people. There are a number of different interventions that I think might be useful.
For the vast majority of people, it is not harmful. For another percentage – again, this depends on what you’re looking at and where and all sorts of things – it is extremely helpful. People being able to find community, being able to find useful information. But the evidence of inherent harm is completely lacking.
Nico Perrino: It’s actually interesting that you bring up the people for whom it is helpful. Those people were absent from these two jury verdicts.
Mike Masnick: Yes.
Nico Perrino: They didn’t have a role in this case where they could explain to the court and they could explain to the juries, “Oh, this is the way social media helped me escape the trauma of my youth, helped me find supportive communities, helped me escape the limitations of my school and find a community built around physics, or sports.” For me, it was heavy metal music. I didn’t have much of that in my school. I was on PureVolume and Myspace finding all these local heavy metal bands. It is a way to connect.
That’s one of the challenges, also, in this case out of California. It seemed like this plaintiff had a very traumatic childhood. It’s tough to talk about this but it seemed like her mother was abusive, potentially physically, also psychologically. Her father had left her. Her sister, I heard, had a suicide attempt. And actually, if you look at some of the evidence, it seemed like social media was a way for this plaintiff to escape.
Mike Masnick: In fact, there were indications of that. Again, every individual is different and you don’t want to generalize from any particular individual. Obviously, she had a very, very troubling history and a lot of real issues that need real help. But it did appear that some of her use of social media was her escape from those real-world, in-person trauma events that were occurring. In fact, she was using it as a tool. This gets to one of the points – there have been studies, though this is again not conclusive, suggesting that among the group of people where there is a correlation between very high social media usage and significant mental health challenges, the direction of causation might be in reverse.
Which means they’re having trauma, mental health issues, or other troubling situations and they’re unable to have the kind of resources or mental health help and support that they need and therefore they turn to social media. It’s not that the social media is causing the problem but that is where they turn.
We can argue that that’s not the best place for them to turn and I think that’s probably true. But in the absence of a system to actually get them the help that they need, it’s very possible that for many of them it is more helpful to be able to go to social media and maybe find someone who can help or a community that can help, rather than cutting them off entirely and leaving them with no help at all, feeling alone and unsupported. It’s very, very tricky to just say it’s harmful and we have to cut it off entirely.
Nico Perrino: One of the challenges for me in interpreting these jury verdicts is it seems to have implications beyond just minors. If the design of social media is addictive and there’s this vibe that it’s addictive for everyone – we’ve referenced the way some of us have a sense that it might be – then it could be regulated for adults as well. We’ve already talked about how, even if you put that to the side, these verdicts are going to implicate adults through removing encryption on social media apps. There’s also a suggestion from the New Mexico verdict that they need to age gate as well.
If you can’t deliver certain content to minors, the only way you know a minor is a minor is through some sort of age or identity verification. I had mentioned Derek Thompson. Derek Thompson was a writer for The Atlantic for many years. Now he’s on Substack. In his article “The Social Media Theory of Everything,” he said it’s not clear, for example, that social media design is what’s creating this addiction. He thinks it’s just the IV drip that smart phones and other access to the internet can create for people. The IV drip of information.
There are some studies that suggest that the more access to information you have the more anxiety and stress that you have. So, I think, as you note in your piece, it’s hard to isolate various variables here. Is it just the design or is it the fact that the design feeds you additional information? Information, of course, is protected by the First Amendment and its free speech protections. We don’t want to regulate that. Nevertheless, it’s just not clear to me that it’s social media and not something else that you can’t disentangle from it.
Mike Masnick: Yeah. It’s very difficult to figure this out. To me, a lot of it is that this is a new information environment that lots of us are dealing with, and we’re not used to this. We’re all learning our way through it. We’ve seen this in the past with the introduction of the printing press, for example. It took a century or so for people to come to terms with that.
Nico Perrino: Wars in Europe, millions dead, as a result of the ideas it inspired and circulated. Mike, what do you make of the role of parents here? We have just two or three minutes left. David French had an op-ed in The New York Times where he wrote, “One of my great parenting regrets is naively giving my two older kids phones when they were quite young.” He said, “They’re doing fine and they’re great kids but it was still a mistake.” He said, “I did not know that I did not know. My youngest child, however, had a substantially different experience. We learned, we changed, and so has virtually every parent I know.
“She didn’t get a phone until she was 16, and she could not take it into her room. Even then, we limited access to social media apps. Every year, she took a monthlong sabbatical. Other parents ask that their kids sign digital contracts regarding phone use, or they block all social media, or they regularly review their kids’ social media accounts.”
I mean, these are things that David French has done. He’s referencing things other parents have done. Jonathan Haidt talks about a collective action problem, though. You get your kids constantly nagging you to give them a phone because all of their friends have phones. I believe that there is some role for parents to play here. You can’t get a smartphone that’s connected to satellite or telecommunications networks without getting a data plan, which requires a credit check.
There are ways that parents can police this. It’s not easy, but then again putting my three kids to bed at night is not easy. They constantly nag me to stay up later. So, what do you see as the role of parents?
Mike Masnick: Yeah. I think it definitely matters a lot. I often go back to – there’s a researcher named Danah Boyd, a professor, who has written about the difference between risks and harms. You don’t send a child who is not prepared to run downtown and buy stuff for you. You teach them how to walk with you. As a parent, you teach them how to cross the street. But as they learn how to do those things, then maybe as they get older you can say, “Okay, you can go down to the park by yourself,” or different things like that.
You teach them the difference between harms and risks. Risky situations – you teach them how to deal with them. You don’t just assume, “Oh, crossing the street is dangerous. My child will never cross the street until they’re 18 years old,” and then have them suddenly go wild in the world. I think the role of parents is to teach their kids the differences. These are tools, and sometimes they’re useful and sometimes you can use them too much. We can introduce them in gentle, careful ways and say, “Okay, you can use this and this is restricted.”
As you grow and show that you have responsibility, we can teach you how to use these things responsibly. I think that’s a really important role. But part of it is slow and boring and involves education and careful, thoughtful approaches. Nobody likes that. Nobody writes bestselling books about that.
Nico Perrino: I had seen on Fox News that Meta’s Chief Legal Officer said that they are gonna appeal the verdicts. So, that’s coming up and I’m sure that the Section 230 and First Amendment questions will get addressed on appeal. Nevertheless, last question here. Thirty seconds remaining. There are thousands of similar lawsuits, as you know. Maybe 1,600, some of which are consolidated. There are a lot of lawsuits that are seeking the same outcome that we got from these two verdicts. Is this an existential crisis for social media companies?
Mike Masnick: Yeah. I think so. If these lawsuits are allowed to go on – and I’m sure thousands more are being filed as we speak based on these results – it will change social media in very dramatic ways. In ways we can’t quite predict, but ways I don’t think we will be happy with. I think it really is an existential moment for the idea of an open internet where free speech is enabled and encouraged.
Nico Perrino: Mike Masnick, thanks for coming back on the show.
Mike Masnick: Yeah, thanks for having me. This was fun.
Nico Perrino: I’m Nico Perrino and this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Bruce Jones, Ronald Baez, Jackson Flegal, and Scott Rogers. This podcast was produced by Emily Beaman. To learn more about So to Speak, you can subscribe to our YouTube channel, our Substack page, and we also have video versions of this podcast on those two pages as well as X. You can follow us on X by searching for the handle @freespeechtalk.
You can email us feedback at sotospeak@fire.org. Again, that is sotospeak@fire.org. If you enjoyed this episode, leave us a review wherever you get your podcasts. They help us attract new listeners to the show. Until next time, thanks again for listening. The Foundation for Individual Rights and Expression, FIRE, and the flame logo are registered trademarks of the Foundation for Individual Rights and Expression.