PDA

View Full Version : Section 230 and Online Speech


Maru
11-02-2026, 06:51 PM
What Congress' Section 230 Debate Means for the Future of Online Speech
Full Article: https://www.consumerreports.org/federal-laws-regulations/what-is-section-230-communications-decency-act-a3205342497/

Twenty-five years ago, Congress passed a little-noticed law that shielded online platforms from liability for the content posted by users.

In the decades since, Section 230 of the Communications Decency Act, signed into law by President Bill Clinton on Feb. 8, 1996, has paved the way for the internet as we know it.

For the better: by enabling everything from unfiltered opinion in the comments sections of news sites to the phenomenon of social media, as well as giving platforms the option to moderate that online content.

And for the worse: by facilitating the mass distribution of disinformation, hate speech, and other objectionable content.

“It affects every aspect of the internet from online safety to online shopping,” says Laurel Lehman, policy analyst for Consumer Reports.

And now, as it celebrates its silver anniversary, Section 230 finds itself under attack from across the political spectrum, including legislators and others ready to revise the law and with it the digital lives of millions of U.S. consumers.

Here’s what you need to know about this important provision and its uncertain future.

What Is Section 230?

At the heart of Section 230, you’ll find 26 simple words. “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

At the time it was drafted, the law effectively shielded services such as AOL, Prodigy, and CompuServe from liability for comments posted by members on their message boards. That protection led to the kind of open exchange of information and opinion on those forums and on Facebook, Twitter, YouTube, and other online platforms today.

It also makes it possible for e-commerce sites such as Amazon and Yelp to host customer reviews without fear of reprisal from disgruntled manufacturers.

And, as Lehman says, it protects individual citizens as well. Without the provision, you could be sued for inadvertently forwarding an e-mail with specious claims or for moderating (or not moderating) the discussion in a Facebook group.

Essentially, Section 230 treats online platforms less like a newspaper, which can be sued for libel if it prints something that’s harmful and untrue, and more like a neighborhood newsstand or bookstore, which is free to sell a wide range of publications without vetting every last word.

It allows Facebook to safely share comments, likes, and photos from 1.82 billion people a day without having to eyeball each and every one of them.

...

How Can Section 230 Be Improved?
At the moment, no fewer than 23 bills that would amend Section 230 have been introduced in Congress, and yet more wait in the wings.

While some are bipartisan, they reflect little consensus beyond the general feeling that Big Tech platforms currently get too much protection from Section 230.

(To learn more about the various proposals—and get analysis on key concerns from CR’s advocates—read this post from policy analyst Laurel Lehman.)

The proposed amendments fall into three broad categories.

The first, which includes the PACT Act introduced last June by Sens. Brian Schatz, D-Hawaii, and John Thune, R-S.D., would reduce the scope of the protections offered to platforms by the law or require platforms to change their behavior to keep those protections. By exposing the companies to more litigation, the thinking goes, you encourage them to protect consumers from potentially harmful or discriminatory content. The challenge here is to do so while striking a balance that doesn’t encourage overmoderation of marginalized communities.

The second approach, which includes the Online Freedom and Viewpoint Diversity Act, proposed by a group of senators led by Roger Wicker, R-Miss., would restrict moderation and fact checking to promote a freer flow of ideas.

“It’s really hard to see where the compromise is going to come from when their operating assumptions about what’s wrong with the platforms are directly opposite each other,” says Bergmayer at Public Knowledge. “There aren’t compatible policy goals.”

A third group of proposals, which includes a bill proposed by Sen. Lindsey Graham, R-S.C., would essentially eviscerate Section 230. Those proposals seem to be crafted to get Big Tech’s attention more than to actually advocate a return to a digital Wild West. But they also highlight the way Section 230, despite its flaws, helps to bring some order to the online world.

“Section 230 made the internet what it is today—for better and for worse,” says CR’s Lehman. “The recent scrutiny highlights both the wonders and the failures of the internet information ecosystem that Section 230 made possible. The challenge facing policymakers in 2021 is striking the right balance to ensure that the law makes life online better, not worse, for the next 25 years.”

Maru
11-02-2026, 07:02 PM
(Dec 9th 2025)
w56JY7_rHI8

(Dec 15th 2025)
Graham leads bipartisan demand for tech reform vote to 'bring social media companies to heel'
https://www.foxnews.com/politics/graham-leads-bipartisan-demand-tech-reform-vote-bring-social-media-companies-heel

(Jan 26th)
Florida Congressman Takes on Big Tech Immunity
LuTPOlJQdYY

(Jan 22nd)
Congressman Patronis Speaks On The House Floor to Repeal Section 230
jJY3T1diXbQ

(Feb 4th)
Gordon-Levitt on Section 230 sunset bill: ‘I want to see this thing pass 100 to zero’
https://thehill.com/blogs/in-the-know/5723006-gordon-levitt-capitol-hill-section-230/

zYvNk_HEs_M

Maru
11-02-2026, 07:06 PM
(Jan 19th)
Rand Paul: I’ve changed my mind — Google and YouTube can’t be trusted to do the right thing and must be reined in
https://nypost.com/2026/01/19/opinion/rand-paul-ive-changed-my-mind-google-and-youtube-cant-be-trusted-to-do-the-right-thing-and-must-be-reined-in/

https://i.postimg.cc/JhjQJFsw/newspress-collage-snplh5lji-1768869623266.jpg

YouTube and its parent company, Google, deserve to be sued.

For the past three weeks YouTube has been hosting a video that is a calculated lie, falsely accusing me of taking money from Venezuela’s Nicolás Maduro. It refused to remove the video.

It is, of course, a ludicrous accusation, but paid trolls are daily spreading this lie across the internet. This untruth is essentially an accusation of treason, which then leads the internet mob to call for my death.

Advocating for liability for Google is no small step for me. I have long defended the private-property rights of internet companies and long defended them against overzealous, partisan abuses of antitrust law, even when I was angry with YouTube for its policies that silenced my attempts to educate the public on the potentially deadly consequences of relying on cloth masks to prevent transmission of COVID-19.

But I will not sit idly by and let them host a provably false defamatory video, which is now part of a widespread harassment campaign. I am now receiving death threats.

The arrogance of Google to continue hosting this defamatory video and the resultant threats on my life have caused me to rethink Congress’ blind allegiance to liability shields.

Much like Big Pharma

The MAHA movement points out that liability shields allowed Big Pharma to ignore vaccine injuries. Arguably, liability shields aid and abet bad behavior.

My default position as a libertarian/conservative has been to defend the internet liability protections known in law as Section 230 of the Communications Act. The courts have largely ruled that Section 230 shields social-media companies from being sued for content created by third parties. If someone calls you a creep on the internet, you can call them a creep right back, but you can’t sue the social-media site for hosting that insult.

I always believed this protection is necessary for the functioning of the internet.

I have always accepted, perhaps too uncritically, that unmitigated liability protection for social-media sites was necessary to defend the principle of free speech. Until now, I had not sufficiently considered the effects of internet providers hosting content accusing people of committing crimes.

I asked one of Google’s executives what happens to the small-town mayor whose enemies maliciously, and without evidence, post on YouTube that he is a pedophile. Would that be OK?

The executive responded that YouTube does not monitor its content for truth. But how would that small-town mayor ever get his or her reputation back?

Historically, such false accusations were rarely published in newspapers because they were conscious of significant liability for publishing untrue, defamatory accusations. Liability protection now encourages bad actors, many of whom are actually paid for their bad actions.

Hypocritical acts

Social-media companies claim they are proudly and unselfishly protecting speech.

I discovered, though, during the COVID pandemic, that the social-media companies’ idea of free exchange of ideas did not include my speeches explaining that cloth masks have no value in inhibiting the transmission of COVID.

YouTube exercised its private-property rights to take down my speech. YouTube also decided to take down a speech I gave on the Senate floor that named the individual who alleged that President Trump’s phone call with the Ukrainian president was inappropriate.

So, Google and YouTube not only choose to moderate speech they don’t like, but they also will remove speeches from the Senate floor despite such speeches being specifically protected by the Constitution.

Google’s defense of speech appears to be limited to defense of speech they agree with.

Not to be outdone, Facebook, for over a year, buried any news story or opinion piece that argued that the pandemic began as an accident in a Wuhan lab.

And still, despite obvious left-wing biased censorship, I defended Google and Facebook’s private property rights to moderate their platforms as they saw fit.

But the straw that broke the camel’s back came this week when I notified Google executives that they were hosting a video of a woman posing as a newscaster in a fake news studio, explaining that “Rand Paul is taking money from the Maduro regime.”

I’ve formally notified Google that this video is unsupported by facts, defames me, harasses me and now endangers my life.

Google responded that they don’t investigate the truth of accusations . . . and refused to take down the video.

Interestingly, Google says it doesn’t assess the truth of the content it hosts, but throughout the pandemic it removed content that it perceived as untrue, such as skepticism toward vaccines, allegations that the pandemic originated in a Wuhan lab, and my assertion that cloth masks don’t prevent transmission.

Promise to self-police

I can’t tell you how disappointed I am by Google’s decision to host this defamatory video. Part of the implicit grant of immunity is that the internet platforms would self-police their content, which all of the social-media companies do to a certain degree.

Google’s own content moderation policy states: “We don’t allow content that targets someone with prolonged insults or slurs based on their physical traits or protected group status. We also don’t allow other harmful behaviors, like threats or doxxing.”

So Google believes that calling someone ugly should be policed and taken down but doesn’t believe that accusing someone of treason (taking money from Maduro) incites “threats or doxxing.”

If the woman defaming me had also ridiculed my race or sexuality, Google would happily take down the post.

And yet . . . they do monitor truth, or at least their version of it.

According to YouTube, “cloth masks don’t work” is not true, so it took down my video. YouTube will also take down videos that are not true, such as simulations that depict a person engaging in a faked activity, but Google allows untrue words that are harmful, harassing, and incite death threats.

Part of the liability protection granted to internet platforms, Section 230(c)(2), specifically allows companies to take down “harassing” content. This gives the companies wide leeway to take down defamatory content. Thus far, the companies have chosen to spend considerable time and money taking down content they politically disagree with, yet leave up content that is quite obviously defamatory. So Google does not have a blanket policy of refraining from evaluating truth. Google chooses to evaluate what it believes to be true when it is convenient and consistent with its own particular biases.

Pursuing legislation

I think Google is, or should be, liable for hosting this defamatory video that accuses me of treason, at least from the point in time when Google was made aware of the defamation and danger.

Though Google refused to remove the defamatory content, the individual who posted the video finally took it down under threat of legal penalty. Yet the defamatory video still has a life of its own, circulating widely on the internet, and the damage done is difficult to reverse.

It is particularly galling that, even when informed of the death threats stemming from the unsubstantiated and defamatory allegations, Google refused to evaluate the truth of what it was hosting despite its widespread practice of evaluating and removing other content for perceived lack of truthfulness.

This complete lack of decency, this inconsistent moderation of truthfulness, this conscious refusal to remove illegal and defamatory content has led me to conclude that such behavior should no longer be encouraged by the internet exemption from liability, a governmentally granted privilege and a special exemption from our common-law traditions, and I will pursue legislation toward that goal.

Oliver_W
11-02-2026, 07:48 PM
There should be no legislation with regards to "hate speech", any more than there should be for "love speech."

(As long as there are no calls to action)

bots
11-02-2026, 07:52 PM
I think laws should be adjusted over time, but, these days, changes aren't necessarily made for the benefit of society. No-one trusts government anymore, it's not a cat that can be put back in the bag

Mystic Mock
11-02-2026, 07:55 PM
There should be no legislation with regards to "hate speech", any more than there should be for "love speech."

(As long as there are no calls to action)

This.

If people don't want a first world democracy (which obviously includes all forms of freedom of speech) then move to one of China, Russia, or Iran.

Just leave the US in this case alone.

Mystic Mock
11-02-2026, 07:57 PM
I think laws should be adjusted over time, but, these days, changes aren't necessarily made for the benefit of society. No-one trusts government anymore, it's not a cat that can be put back in the bag

If anyone does trust the Government they'd need their head examined.

Maru
11-02-2026, 08:10 PM
There should be no legislation with regards to "hate speech", any more than there should be for "love speech."

(As long as there are no calls to action)

This bypasses that entirely as repealing would make it very hard for sites to operate that involve any user content if they don't heavily moderate. They'll be sued anyway honestly because there's no way to completely mitigate the risks reliably, and so it will inevitably lead to heavy censorship, imo.

bots
11-02-2026, 08:31 PM
This bypasses that entirely as repealing would make it very hard for sites to operate that involve any user content if they don't heavily moderate. They'll be sued anyway honestly because there's no way to completely mitigate the risks reliably, and so it will inevitably lead to heavy censorship, imo.

the reality is the sites would shut down, they wont trust AI to regulate

Maru
11-02-2026, 08:36 PM
the reality is the sites would shut down, they wont trust AI to regulate

I'm torn on whether that is even a bad thing.

bots
11-02-2026, 09:11 PM
I'm torn on whether that is even a bad thing.

i think it would be a net benefit if they all shut

Maru
12-02-2026, 12:37 AM
i think it would be a net benefit if they all shut

I still remember when the internet was much more limited. We didn't have the ability to spread info the way it's spread now through "feeds" and I never felt free speech was suppressed. People can still find opinions, but they would need to go back to self-hosting.

It would also be easier to tell if someone did or didn't know what they were talking about if they were forced to write more than a few sentences to support some view to make their site worth it. Anyone can write a tweet that won't add anything to the conversation but will garner 200K updoots.