Maru
11-02-2026, 06:51 PM
What Congress' Section 230 Debate Means for the Future of Online Speech
Full Article: https://www.consumerreports.org/federal-laws-regulations/what-is-section-230-communications-decency-act-a3205342497/
Twenty-five years ago, Congress passed a little-noticed law that shielded online platforms from liability for the content posted by users.
In the decades since, Section 230 of the Communications Decency Act, signed into law by President Bill Clinton on Feb. 8, 1996, has paved the way for the internet as we know it.
For the better: by enabling everything from unfiltered opinion in the comments sections of news sites to the phenomenon of social media, as well as giving platforms the option to moderate that online content.
And for the worse: by facilitating the mass distribution of disinformation, hate speech, and other objectionable content.
“It affects every aspect of the internet from online safety to online shopping,” says Laurel Lehman, policy analyst for Consumer Reports.
And now, as it celebrates its silver anniversary, Section 230 finds itself under attack from across the political spectrum, including legislators and others ready to revise the law, and with it the digital lives of millions of U.S. consumers.
Here’s what you need to know about this important provision and its uncertain future.
What Is Section 230?
At the heart of Section 230, you’ll find 26 simple words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
At the time it was drafted, the law effectively shielded services such as AOL, Prodigy, and CompuServe from liability for comments posted by members on their message boards. That protection fostered the open exchange of information and opinion seen on those forums then, and on Facebook, Twitter, YouTube, and other online platforms today.
It also makes it possible for e-commerce sites such as Amazon and Yelp to host customer reviews without fear of reprisal from disgruntled manufacturers.
And, as Lehman says, it protects individual citizens as well. Without the provision, you could be sued for inadvertently forwarding an e-mail with specious claims or for moderating (or not moderating) the discussion in a Facebook group.
Essentially, Section 230 treats an online platform less like a newspaper, which can be sued for libel if it prints something that’s harmful and untrue, and more like a neighborhood newsstand or bookstore, which is free to sell a wide range of publications without vetting every last word.
It allows Facebook to safely share comments, likes, and photos from 1.82 billion people a day without having to eyeball each and every one of them.
...
How Can Section 230 Be Improved?
At the moment, no fewer than 23 bills that would amend Section 230 have been introduced in Congress, and yet more wait in the wings.
While some have bipartisan backing, there is little broad consensus among them beyond the general feeling that Big Tech platforms currently get too much protection from Section 230.
(To learn more about the various proposals—and get analysis on key concerns from CR’s advocates—read this post from policy analyst Laurel Lehman.)
The proposed amendments fall into three broad categories.
The first, which includes the PACT Act introduced last June by Sens. Brian Schatz, D-Hawaii, and John Thune, R-S.D., would reduce the scope of the protections offered to platforms by the law or require platforms to change their behavior to keep those protections. By exposing the companies to more litigation, the thinking goes, you encourage them to protect consumers from potentially harmful or discriminatory content. The challenge is to do so without encouraging overmoderation of speech from marginalized communities.
The second approach, which includes the Online Freedom and Viewpoint Diversity Act, proposed by a group of senators led by Roger Wicker, R-Miss., would restrict moderation and fact checking to promote a freer flow of ideas.
“It’s really hard to see where the compromise is going to come from when their operating assumptions about what’s wrong with the platforms are directly opposite each other,” says John Bergmayer of Public Knowledge. “There aren’t compatible policy goals.”
A third group of proposals, which includes a bill proposed by Sen. Lindsey Graham, R-S.C., would essentially eviscerate Section 230. Those proposals seem to be crafted to get Big Tech’s attention more than to actually advocate a return to a digital Wild West. But they also highlight the way Section 230, despite its flaws, helps to bring some order to the online world.
“Section 230 made the internet what it is today—for better and for worse,” says CR’s Lehman. “The recent scrutiny highlights both the wonders and the failures of the internet information ecosystem that Section 230 made possible. The challenge facing policymakers in 2021 is striking the right balance to ensure that the law makes life online better, not worse, for the next 25 years.”