
US Senate Hearing Explores Section 230 Reforms

IAPP Privacy News
Published March 18th, 2026
Detected March 21st, 2026

Summary

A US Senate hearing explored potential reforms to Section 230 of the Communications Decency Act, which provides liability protections for online platforms. Lawmakers and experts discussed the law's impact on the digital ecosystem and its applicability to emerging technologies like AI, with a general sentiment favoring amendments over full repeal.

What changed

The US Senate Committee on Commerce, Science and Transportation held a hearing on March 18, 2026, to discuss potential reforms to Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content. The hearing, marking the law's 30th anniversary, featured legal experts who debated whether the current protections are too broad, especially in the context of artificial intelligence. Lawmakers indicated a bipartisan willingness to amend Section 230, with proposals including the introduction of a duty of care standard for platforms.

While a full repeal of Section 230 was generally opposed due to concerns about increased censorship by platforms, there was significant momentum to modify its scope. The discussions highlighted the need to address how Section 230 applies to AI-generated content and algorithmic amplification, suggesting that platforms may face new obligations regarding content moderation and platform design. Compliance officers should monitor legislative developments closely, as potential amendments could significantly alter platform liability and operational requirements.

What to do next

  1. Monitor legislative developments regarding Section 230 reform
  2. Assess potential impact of duty of care standards on platform operations
  3. Evaluate AI content generation policies in light of potential liability changes

Source document (simplified)


Published

20 March 2026


Contributors:

Alex LaCasse

Staff Writer

IAPP


With Section 230 of the U.S. Communications Decency Act of 1996 — which grants broad First Amendment liability protections to online platforms — turning 30 years old, a number of policymakers and civil society groups are questioning whether it offers too much blanket cover for technology companies in the age of artificial intelligence.

On 18 March, the U.S. Senate Committee on Commerce, Science and Transportation marked the 30th anniversary of Section 230 with a hearing featuring legal experts on the impacts the law has had on the digital ecosystem since its inception, as well as its legal implications for emerging technologies.

The general tone among lawmakers suggested there is space to craft a bipartisan measure that would claw back some of the liability protections for tech platforms. During questioning, several senators floated proposals that would amend portions of Section 230, including legislation that would strip liability protections if platforms fail to meet a duty of care standard.

Committee Chairman Ted Cruz, R-Texas, explained Section 230 was originally passed within the Communications Decency Act to protect emerging internet companies at the time from lawsuits that would have sought to hold them liable for speech made on their platforms by another online user. Cruz said Big Tech has since abused the legal protections offered by Section 230 and claimed companies have worked in concert with the government to censor political speech online.

Cruz, however, said he does not support a full repeal of Big Tech's Section 230 protections, and touted his soon-to-be-introduced Justice Against Weaponized Bureaucratic Outreach to Network Expression Act, which would prevent government agencies from contacting platforms and "bullying" them into removing speech made by users.

"The same reasons why Congress enacted Section 230, to prevent liability for a different person's speech, are still relevant," Cruz said. "I'm concerned that a full repeal or sunset would lead platforms to engage in worse behavior — to engage in more censorship to protect themselves from litigation."

State of Section 230

Americans for Responsible Innovation President Brad Carson, who testified at the hearing, said there are two "competing" legal theories on how Section 230 went awry as the technology industry matured in the 21st century. The first, he said, is that the law was flawed from its inception, and that "ordinary tort law" and First Amendment case law would have created a more "nuanced and flexible" body of law establishing a liability threshold for online platforms as time wore on.

The second theory, Carson said, was that Section 230 "froze answers in place" over the question of online platforms' First Amendment liability for content they host, and courts have since interpreted the law to grant liability protection beyond Congress' original intent in 1996, now covering platforms' "significant algorithmic amplification and product design choices."

"Regardless of which (theory) is more accurate, both assessments recognize the resulting legal regime has been unable to hold platforms accountable in proportion to the harms they have enabled," Carson said.

Section 230's applicability to AI?

Carson called on lawmakers to reform Section 230 to ensure that outputs from generative AI models that cause real-world harm to users are not granted First Amendment liability protection. Amid the global AI race, he said leaving Section 230 in place as constituted could risk enshrining a "meta law" that would "determine who governs an emerging industry," as opposed to a law establishing "how that governance would occur." He said the first "meta law" for any technology, oftentimes, is also the last significant law passed to govern it, not unlike Section 230 itself.

"Section 230 protects platforms from liability for content as the statute says, 'provided by another information content provider,' that framework assumes active users and passive hosts," Carson said. "Generative AI systems do not fit that model — a user provides a prompt, but the company designs the model, selects the training data, fine-tunes the system and deploys it with parameters of its choosing."

Section 230's First Amendment considerations

Several of the invited witnesses said that while tech platforms have been slow to respond to the harms their products are causing, keeping Section 230 in place might be the least-bad option for ensuring free speech remains a staple of the internet.

Stanford University Law School Director of the Platform Regulation Program Daphne Keller said Section 230 largely has "happened to strike a balance that has served its purpose" between ensuring free speech, by and large, is protected online and shielding tech companies from frivolous lawsuits. She said eliminating Section 230 would not remove First Amendment-protected, yet objectionable, speech from the internet, but would open platforms up to liability for speech made by users under laws like the Digital Millennium Copyright Act or the EU Digital Services Act.

"This is a question about what would actually happen, what would platforms, users and, importantly, governments would predictably do in a world without Section 230?" Keller said. "That legal change would very likely make the internet worse for user speech rights without making it any safer, and impose legal uncertainty and expense that today's incumbent (tech) giants could survive but their smaller rivals could not."

Also in favor of keeping Section 230 largely intact was Knight First Amendment Institute Policy Director Nadine Farid Johnson. She said Section 230's protections are "vital to free speech online."

Farid Johnson recommended the law be amended to make Section 230's protections for tech platforms conditional on their compliance with interoperability, privacy and transparency requirements. The recommended reforms would include legal protections for public interest researchers studying online platforms, strengthened privacy disclosures for users, and legislation requiring interoperability that would "attack the platform's monopoly control over public discourse" by allowing users to port their social networks to competitors when they choose to leave a platform.

"A better approach would be to pass structural regulation that would protect users' privacy, allow them to engage with platforms on their own terms, or leave them more easily, and make the platforms more transparent and accountable to the public," Farid Johnson said. "Section 230 effectively gave platforms the ability to moderate user content without having to fear doing so would give rise to liability … We believe conditioning Section 230 protection on these mandates can be done in a way that not only respects the limitations of the First Amendment but actually promotes the values that underline it."

Liability cases attack platforms' design, not content

The final witness was Matthew Bergman, founding attorney at the Social Media Victims Law Center. The group represents parents whose children have died by suicide after prolonged engagement with social media and/or AI chatbots.

Bergman said claims by tech companies that their Section 230 protections are geared toward upholding the First Amendment online are a red herring. He said Section 230 does not offer a First Amendment liability shield for companies' deliberate design decisions that are intended to addict young users, such as infinite scrolling, push notifications and promoting unsafe content via their proprietary algorithms.

In his testimony, Bergman recounted stories of several parents who accompanied him to the hearing, whose children ultimately died by suicide and who are now seeking legal damages. He said in many instances, their cases run up against tech companies invoking Section 230 protections to seek dismissal. However, he said he has had some success litigating companies' design choices, rather than the substance of the content the deceased children engaged with.

"These cases have nothing to do with protecting speech, they're about deliberate design decisions of companies to prioritize profits over the lives and safety of children," Bergman said. "(Design choices that) target children, not with material that they want to see, but the material they can't look away from. They exploit the underdeveloped frontal cortices of young individuals, the fear of missing out, the social anxiety that adolescents have, and they use intermittent reinforcement techniques and highly sophisticated AI to addict them to their platforms."

Congress sees need for reforms

U.S. Sen. Brian Schatz, D-Hawaii, said suggestions that Section 230 is above reproach in terms of being reformed or modernized are "preposterous." Schatz said the law in its current form goes too far in shielding companies from "egregious harms, harassment and abuse, frauds and scams" perpetrated against children using their services. He called on his fellow committee members to work on developing reforms that reflect advancements in the internet over the last three decades.

"It's not that (technology companies) don't know what’s happening or even why it's happening, it's that to do something about (children's online harms) would hurt their bottom line, and so long as federal law provides a shield, why even bother?" Schatz said. "We don't simply have to accept terrible outcomes as a fact of modern life. We can work together and fix the law."



Tags:

AI and machine learning, Children's privacy and safety, Customer trust and expectations, Law and regulation, Litigation and case law, Personal impacts, U.S. federal regulation, Telecommunications, Privacy, AI governance


Named provisions

State of Section 230; Section 230's applicability to AI?

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
IAPP
Published
March 18th, 2026
Instrument
Notice
Legal weight
Non-binding
Stage
Consultation
Change scope
Substantive

Who this affects

Applies to
Technology companies
Industry sector
5112 Software & Technology
Activity scope
Platform Liability, Content Moderation
Geographic scope
United States (US)

Taxonomy

Primary area
Telecommunications
Operational domain
Legal
Topics
Artificial Intelligence, Internet Law, Platform Liability
