If you don’t already know what Section 230 is, the EFF has a great primer, and Eric Goldman’s blog provides leading analysis of it. Here’s a quick summary: Section 230 prevents civil claims against internet companies based on user-generated content uploaded by a third party.
There are carve-outs for federal criminal law, state laws on sex trafficking, communications privacy laws, and intellectual property law. As a result, the protection is not absolute, but it is still very broad.
So far it sounds great: You can’t sue a social media company because somebody else said something on their platform that you found offensive. That’s pretty much what free speech is about. If we all agree on something, it doesn’t need protection. The kind of speech that requires protection is unpopular or controversial speech.
Section 230 also looks out for the free speech rights of internet service providers: “**No provider** or user **of an interactive computer service shall be held liable on account of** (A) **any action voluntarily taken in good faith to restrict access to or availability of material that the provider** or user **considers to be** obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise **objectionable**, whether or not such material is constitutionally protected”. Read the boldfaced portions together:
“No provider of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material the provider considers to be objectionable”.
If you post something objectionable as a comment to my blog and I delete it, you can’t sue me. The same is true of Instagram, X and even PornHub. They can’t be compelled to host content they find objectionable.
All of the foregoing was critical in providing the legal security the internet needed to grow into the greatest information platform ever created.
Since Section 230 was created in 1996 (OMG it’s almost 30 years old), an entire generation has grown up in a world where service providers are allowed to arbitrarily delete content without worry. Internet OGs with a legal background understand that this is simply companies exercising their own free speech rights. But in a world where the global town square is effectively controlled by a handful of companies, for non-lawyers the line between governmental action and action by that handful of companies is blurry. And that’s where the problem lies.
Generation Z (~1997 – 2010) and the older part of Generation Alpha (~2010 – 2024) are used to deplatforming. They’re used to seeing a video they love on YouTube one day, and having it disappear the next. They’ve learned to self-censor to avoid “shadow banning” or straight up being banned. They live in a world of self-censorship.
I grew up in a world of three-network TV news, print newspapers, and magazines, and free speech was easy to understand: the government couldn’t stop me from saying something. I published my own magazine, Blitz, and knew that the government wasn’t allowed to knock on my door and confiscate my copies so I couldn’t distribute them.
In today’s world, it isn’t the government doing the confiscating — it is big tech companies (although sometimes operating under government pressure). And those born in the post-Section 230 era just accept that their speech can be stopped if it is offensive or even for no reason. Sure, if your YouTube channel is deleted, you can create a website to say the same thing, but guess who controls whether internet users can find the website with search? Yup, the same Google that controls YouTube.
This is in no way a criticism of Section 230, the statute. It is, instead, an observation that Section 230 has had the unforeseen side effect of acclimating people to a world where free speech can be impaired without consequence. Instead of fighting back (for example by uploading videos to another service rather than bleeping out words like “sexual assault” when uploading to YouTube – even when creating a video pointing rape survivors to resources that would help them), people have simply capitulated.
This acclimation to censorship has devalued free speech. Kids worry about getting “cancelled” for offending somebody rather than living in a world where they cannot criticize the powerful without risk.
That’s the law of unintended consequences. Section 230 set out to create a free speech paradise, but it also did nothing to stop (and arguably encouraged) the establishment of a handful of companies as gatekeepers of speech. The world didn’t stop turning; it just experienced a dulling of public discourse.
Do we need to change Section 230? I don’t think so. It remains important that I can delete comments on this blog that offend me, and that I can upload something to YouTube without their taking it down because they fear being sued for libel. But we can require companies to clearly set out what their policies are.
Consider a simple law: “Any Internet Service Provider subject to the protections of Section 230 of the Communications Decency Act must clearly and conspicuously inform any person uploading content as to the conditions under which the content may be deleted. If the full reach of Section 230 is claimed, the disclaimer shall read ‘Your content may be deleted at any time and for any reason should we find it offensive in any way.’ If less than the full reach is claimed, the conditions under which content may be deleted shall be disclosed in plain English. These disclosures must be made upon the first upload and thereafter not less than every six months.”
What changes? Creators will easily be able to compare the relative security of relying on a service provider for their platform and, in many cases, their livelihood. Providers will start to compete with each other on the basis of protecting speech by having contracts limiting their use of Section 230’s “takedown for any reason” rights. And perhaps most importantly, the issue of free speech will be introduced to a generation that grew up, as a practical matter, without it.