The Cons Of CDA 230
By Jeremy H. Gottschalk, Forbes Books Author.
In my previous blog post about the Communications Decency Act Section 230, also known as CDA 230, I discussed many of its positive aspects. No matter which side you fall on, CDA 230 is one of the most debated laws in tech policy. Supporters view it as the backbone of the modern internet, while critics argue it gives platforms too much protection. This time around, we’re going to look at some criticism CDA 230 has received over the years.
CDA 230 Doesn’t Do Enough—and Can’t
At the heart of the criticism of CDA 230 is the view that it shields platforms too broadly from liability, even when design choices, algorithms, or business models arguably contribute to harm. For example, a core feature of social networks is their reliance on algorithms that determine what content appears in users' feeds. The Association for the Advancement of Artificial Intelligence noted in 2020 that “YouTube is still plagued by such disturbing videos and its currently deployed counter-measures [to protect children] are ineffective in terms of detecting them in a timely manner.” The reality is that it is nearly, if not completely, impossible to remove all harmful content immediately or efficiently. Despite YouTube offering a separate YouTube Kids website and app, questionable videos still slip through. Simply offering a kid-focused variant of an app may never be enough to protect children.
A common type of inappropriate content children might encounter is hate speech. To be fair, it is impossible for companies to monitor everything when the Internet has grown so much in so short a time. The Electronic Frontier Foundation estimates that “in 1996, there were fewer than 300,000 websites; by 2017, there were more than 1.7 billion.” While social media apps deploy a wide range of tools to protect their users, including language filters, technology does not always keep pace with society. Case in point: language filters struggle to keep up with evolving hate speech slang, making them an imperfect solution to the problem.
Interpretation of CDA 230 is Inconsistent at Best
Opponents of CDA 230 argue that one of its biggest flaws is how inconsistently it has been interpreted over the years, and how unevenly its benefits fall. Policymakers contend that Section 230 favors large incumbents over startups: large platforms can afford sophisticated moderation tools and legal teams, while smaller companies struggle with compliance and safety despite being afforded the same legal immunity. The result reinforces dominant players rather than leveling the playing field.
The Yale Journal on Regulation argues that “given the importance that Congress placed on encouraging moderation and preventing harmful content in enacting the Communications Decency Act, an interpretation of Section 230 that favors large over small platforms is not inconsistent with congressional intent.”
CDA 230 is Just Too Old
Beyond that, many critics argue that CDA 230 is wholly outdated, a relic of the 1990s internet: a world without social networks, algorithmic feeds, online marketplaces, or the influencer economy.
Essentially, they claim that the law hasn’t kept pace with the power and role of today’s platforms. And, because so many cases are dismissed early on CDA 230 grounds, critics argue that courts rarely get to develop modern negligence or duty-of-care standards for digital platforms. This, in turn, puts the U.S. out of step with global norms. Other countries have moved toward platform “duty of care” or intermediary liability regimes (e.g., the EU’s Digital Services Act). The Villanova Law Review recently made a compelling point: “The very internet that Congress sought to prevent has not only evolved, but exceeded Congress’s wildest fears.”