
Senator Blackburn Proposes AI Framework for Child Safety and Copyright

IAPP Privacy News
Published March 19th, 2026
Detected March 20th, 2026

Summary

U.S. Senator Marsha Blackburn has introduced a discussion draft for a federal AI policy framework focusing on children's online safety and copyright protection. The proposal aims to establish national standards, incorporating elements from the Kids Online Safety Act and the NO FAKES Act, and includes provisions for a private right of action for child harms.

What changed

Senator Marsha Blackburn has introduced a discussion draft for a comprehensive federal AI policy framework, aiming to preempt state-level legislation and establish national standards. The draft framework prioritizes child safety and copyright protection, drawing from existing proposals like the Kids Online Safety Act and the NO FAKES Act. Key provisions include a duty of care for AI developers, safeguards for children under 17, data protection standards, a consumer reporting mechanism for AI harms, and a private right of action for child-related damages, potentially impacting Section 230 protections. It also introduces transparency guidelines for AI-generated content, authentication, and detection, and tasks NIST with developing cybersecurity standards for content provenance and watermarking.

This discussion draft is intended to initiate dialogue among lawmakers and with the White House, which is expected to release its own legislative recommendations. Regulated entities, particularly technology companies and AI developers, should monitor the legislative process as this framework evolves. While currently a draft, it signals a move towards federal AI regulation with significant implications for product design, data handling, and content creation. The inclusion of a private right of action and potential changes to Section 230 could lead to increased litigation risk and necessitate updates to compliance programs. The draft also mandates third-party audits for bias and discrimination, requiring proactive measures to ensure fairness in AI systems.

What to do next

  1. Monitor legislative developments regarding the AI framework.
  2. Review existing AI systems for compliance with proposed child safety and data protection standards.
  3. Assess AI-generated content processes for potential transparency and watermarking requirements.

Penalties

Potential for litigation via private right of action for child harms.

Source document (simplified)




Contributors:

Joe Duball

News Editor

IAPP


U.S. Congress' slow move on federal artificial intelligence policy may be coming to an end. U.S. Sen. Marsha Blackburn, R-Tenn., introduced a fresh discussion draft 18 March aimed at kickstarting lawmaker dialogue toward delivering on the White House's goal to preempt state-level AI legislation, as outlined in its December 2025 executive order.

Blackburn's draft framework primarily focuses on protections and requirements around children's online safety and copyright issues, combining some of her previously introduced bills to create preemptive legislation. The children's provisions are based on the proposed Kids Online Safety Act while copyright portions are taken from the NO FAKES Act.

"Instead of pushing AI amnesty, President (Donald) Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation," Blackburn said in a statement. "Congress must answer his call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance."

For children under age 17, the framework would place a duty of care on developers while requiring AI chatbot safeguards, data protection standards and a consumer mechanism to report AI harms.

A private right of action is also included for child harms "caused by the AI system for defective design, failure to warn, express warranty, and unreasonably dangerous or defective product claims." Litigation would be viable with a proposed sunset of platform liability protections under Section 230 of the Communications Act.

Copyright provisions are highlighted by "new federal transparency guidelines for marking, authenticating and detecting AI-generated content." The framework would also task the U.S. National Institute of Standards and Technology with creating cybersecurity standards that "prevent tampering with provenance and watermarking on AI content."

The draft also requires third-party audits for bias and discrimination based on political affiliation, and measures to boost AI innovation.

Blackburn's discussion draft runs counter to the Trump administration's executive order, which indicated a forthcoming policy recommendation to Congress would avoid proposals that preempt state laws covering children's online safety and "other topics as shall be determined."

According to Axios, Blackburn has been in close contact with the White House, which is soon expected to introduce a separate legislative recommendation that will create a fluid policy discussion alongside Blackburn's draft. The goal is to blend the proposals, as deemed fit and appropriate, and arrive at the "uniform" policy mandated under the executive order.

"It basically states the policy of (the) administration is to create that federal framework," White House Special Advisor for AI and Crypto David Sacks said when the order was signed. "We're going to work with Congress ... to define that framework, but in the meantime, this (order) gives (Trump) tools to push back on the most onerous and excessive state regulations."

How KOSA fits

Blackburn's KOSA efforts have spanned multiple sessions of Congress, with its application in the AI context the latest attempt to get it over the finish line.

There is wide bipartisan support for KOSA in the Senate, where it advanced alongside the Children and Teens' Online Privacy Protection Act on a 91-3 vote in July 2024. That version has stalled since its passage due to First Amendment concerns in the House, which continues to advance its own version of the bill.

KOSA's inclusion will test Senate Democrats, particularly Sen. Richard Blumenthal, D-Conn., KOSA's co-sponsor. With expected opposition to the broader Republican approach to AI legislation, Democrats could be left to explain why they would forego an opportunity to pass legislation they've long sought to finalize.

Digital Smarts Law & Policy Principal Ariel Fox Johnson, CIPP/US, was not surprised to see KOSA pop up in the discussion draft given "online safety concerns for kids on AI are no less great than for kids on all the platforms and apps for which KOSA was initially drafted." One nuance she did not expect, however, was KOSA's preemption provision, which will allow states to go beyond the federal statute where they see fit to protect their children.

"Possibly lawmakers understand that with respect to kids, it may be very difficult to have a federal ceiling, especially when the states have been so active in passing a variety of kids privacy and safety laws, whereas Congress has been less so," Johnson told the IAPP.

AI chatbot safety

The framework's chatbot and AI companion safety provisions under the GUARD Act rely heavily on age verification that applies to accounts belonging to minors under 18.

For both existing and new chatbot users, covered entities will need to collect age-related data and information from a government-issued ID or other "reasonable" verification methods defined by the bill. Verification reviews of previously verified accounts will continue on a rolling basis.

The bill also includes verification data security measures, including specified retention periods and necessity and proportionality standards.

The application of age verification is particularly relevant after the Federal Trade Commission recently issued a policy statement encouraging the use of age verification technologies while forgoing enforcement of verification data practices.

When the statement was released, FTC Bureau of Consumer Protection Director Christopher Mufarrige said the agency's new stance "incentivizes operators to use these innovative tools, empowering parents to protect their children online."

Blackburn's chatbot safeguards also call for required disclosures regarding interactions with technologies. There are separate reminders about conversations with non-humans and non-professionals.

Stakeholders weigh in

The Trump administration's preemption goals have raised questions about the fate of state-level digital governance laws that cover areas of AI. Blackburn's proposal addresses preemption in different ways, but ZwillGen Director of AI Division Brenda Leong, AIGP, CIPP/US, told the IAPP she does not see the bill impacting states' prior or future work on AI bias and automated decision-making.

"The full bill's general preemption provision in Section 1701 broadly preserves all 'generally applicable' state and local AI laws," she said. "State or local bias audit requirements, automated decision-making obligations, transparency requirements, and algorithmic accountability frameworks would likely survive, so companies operating in states like Colorado, Illinois and New York should expect those regimes to remain in force even if this legislation passes, and the door seems to remain open for state action."

Leong also called attention to the potential for "an extraordinary federal 'ask'" with covered entities deploying "advanced artificial intelligence systems" being left open to potential enforcer requests for code, training data, model weights and more.

"No U.S. regulatory regime has ever conditioned the right to operate on surrendering your entire intellectual property to a government agency on demand — not in pharmaceuticals, not in defense, not in finance," she said, noting those potential requests raise "profound constitutional questions about regulatory takings, due process and controls on government use and profit from this information."

On the general safety premise of the bill, Electronic Privacy Information Center Senior Counsel Calli Schroder told the IAPP the framework "suffers from trying to appeal to both the president and those concerned with AI's demonstrable harms."

"By attempting to cover so many parts of a broad-reaching technology at once, it fails to meaningfully address AI's problems and instead enshrines industry interests," she added.

Computer & Communications Industry Association Vice President of Federal Affairs Brian McMillan did not indicate a particular stance on Blackburn's bill, but noted CCIA supports legislation that "sets the global stage for AI leadership."

"While youth safety and transparency are important shared goals, unworkable provisions that unnecessarily hinder innovation or raise serious constitutional questions are fundamentally at odds with an approach that is primarily designed to promote the development and deployment of cutting-edge AI technologies," McMillan told the IAPP.



Tags:

AI and machine learning, Children’s privacy and safety, Intellectual property, Law and regulation, U.S. federal regulation, AI governance

Related Stories

US President Trump signs state AI executive order, legal questions remain (12 Dec. 2025)

US Senate abandons proposed state AI law moratorium as compromise falls through (1 July 2025)

AI, digital policy shifts increase under latest Trump executive orders (24 Jan. 2025)

Kids Online Safety and Privacy Act clears US Senate (31 July 2024)

Named provisions

Kids Online Safety Act, NO FAKES Act, Section 230

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
IAPP
Published
March 19th, 2026
Instrument
Consultation
Legal weight
Non-binding
Stage
Draft
Change scope
Substantive

Who this affects

Applies to
Technology companies, Manufacturers
Industry sector
5112 Software & Technology; 3341 Computer & Electronics Manufacturing
Activity scope
AI Development, Content Generation, Data Protection
Threshold
Children under age 17
Geographic scope
United States (US)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Legal
Topics
Child Safety, Copyright, Data Privacy
