States Can and Should Regulate AI in Criminal Justice
Summary
Brookings Institution published a commentary arguing that state legislatures should regulate AI tools in criminal justice settings to protect civil rights and public safety while harnessing AI's potential benefits. The authors analyze the current landscape of AI adoption in law enforcement, noting risks of untested tools, and address concerns about a recent executive order that may limit state regulatory authority.
What changed
Brookings Institution published a policy commentary urging state legislatures to regulate AI tools in criminal justice settings, including surveillance, facial recognition, and risk assessment algorithms. The authors catalog documented harms from AI tools, including wrongful arrests of Black individuals due to erroneous facial recognition matches and automated license plate reader failures. The commentary specifically addresses a recent Trump executive order that threatens state-level regulation through federal preemption claims and funding threats, arguing these legal claims rest on shaky ground and should not deter states from acting.
For compliance officers and legal professionals, this document provides context on the evolving regulatory landscape for AI in criminal justice. While the article itself is non-binding commentary, it signals that state-level AI regulation in criminal justice is a growing area of policy focus. Organizations developing or deploying AI tools for law enforcement, prosecutorial, or correctional settings may face varying state-level requirements as legislatures respond to documented civil liberties harms. The document does not impose compliance obligations but forecasts potential regulatory divergence across states.
Archived snapshot
Apr 17, 2026: GovPing captured this document from the original source. If the source has since changed or been removed, this is the text as it existed at that time.
Commentary
States can—and should—regulate AI in criminal justice
Chiraag Bains, Nonresident Senior Fellow - Brookings Metro
Alex Chohlas-Wood, Assistant Professor of Computational Social Science, New York University; Director, ADAPPT Lab
Katie Kinsey, Chief of Staff and Tech Policy Counsel - Policing Project, NYU School of Law
April 16, 2026
- AI is shaping how we are surveilled, arrested, charged, and sentenced as companies promise tools will improve safety and fairness. Yet, we’ve already seen wrongful arrests, unconstitutional surveillance, and the deprivation of liberty based on unreliable or improperly deployed AI tools.
- State legislatures are the natural actors to set guardrails that harness AI’s genuine potential while protecting civil rights and public safety.
- A recent executive order from President Donald Trump threatens to chill essential avenues of state-level regulation.
American criminal justice agencies are rapidly adopting artificial intelligence (AI) tools, shaping how we are surveilled, arrested, charged, and sentenced. These tools promise to improve safety, accountability, and fairness. Yet most tools have never been independently validated, and the harms they can cause are not hypothetical. Studies have shown these algorithms carry the risk of discrimination, and we’ve already seen wrongful arrests, unconstitutional surveillance, and the deprivation of liberty based on unreliable or improperly deployed AI tools.
In this moment, state legislatures are the natural—and necessary—actors to set guardrails that harness AI’s genuine potential while protecting civil rights and public safety. But a recent executive order from President Donald Trump threatens to chill essential avenues of state-level regulation through legally dubious tactics, including Justice Department lawsuits, funding threats, and claims of federal preemption. Those claims rest on shaky legal ground and should not deter states from acting.
Promise and peril
Law enforcement agencies across America today face a barrage of marketing claims encouraging them to adopt the latest innovation in AI. Exhibition halls at criminal justice conferences are filled with vendors who tout ostensibly revolutionary products: automatically written police reports, nationally integrated surveillance networks, facial recognition algorithms applied to billions of images scraped from social media, and even data-scouring tools that claim to be able to place people at the scene of a crime.
In some cases, AI tools clearly benefit the justice system. AI-powered software is helping public defenders and prosecutors manage and locate evidence, and helping 911 call takers improve emergency response. Novel algorithmic approaches to DNA analysis have solved cold cases previously thought unsolvable. AI-powered algorithms may even help reveal and reduce biases in areas of human decisionmaking, including in risk scoring and prosecutorial charging decisions.
Yet beyond limited success stories, most new AI technologies pitched to the criminal justice system remain untested by credible, independent sources, and increasingly lofty marketing claims are unlikely to hold up under scrutiny. At the same time, the unchecked use of AI can pose serious risks to our civil rights and liberties. In Colorado, for example, police held a woman and her four children—including a 6-year-old child—at gunpoint because an automated license plate reader falsely matched her vehicle to one reported stolen. Because of an erroneous facial recognition identification, a Black man in Georgia was arrested and spent nearly a week in jail for an alleged retail theft that occurred in Louisiana—a state he’d never set foot in. Numerous other wrongful arrests, primarily of Black individuals, have been documented. And law enforcement agencies across the country have been caught using facial recognition in ways that raise serious privacy and First Amendment concerns, from live tracking in Louisiana to surveilling peaceful protesters in Florida and Minnesota.
These realized harms make clear that unfettered deployment of AI in the justice system risks serious damage. But the potential benefits offered by AI also show that stymieing its use could inhibit real public safety progress.
State regulation as the way through
Our model of democratic governance provides a way forward in situations like these: regulation. Elected legislators can set rules that are responsive to state and local concerns and enable society to gain the benefits of this powerful technology while mitigating its harms.
This approach is nothing unusual. Nearly 100 years ago, U.S. Supreme Court Justice Louis Brandeis eloquently described the wisdom of letting state governments serve as laboratories to try different policy strategies, learning which safeguards are effective through observation and careful research. Such regulation also draws on a longstanding American tradition of states protecting the safety and well-being of their citizens. This is particularly true for the regulation of justice policy, a power that sits squarely within states’ inherent authority.
As members of an expert task force that has offered principles to guide policymakers in the responsible integration of AI across law enforcement, courts, and corrections, we know firsthand that many thoughtful proposals already exist to guide state legislation on AI in the criminal justice system.
A threat without legal force
States’ ability to rein in the harms of AI in the criminal justice system, however, appears threatened by recent executive actions. In December 2025, after months of intense industry lobbying, Trump issued an executive order aimed at curbing state-law protections against AI-enabled harms. Crucially, the order specifically exempts from its scope any laws regulating “state government procurement and use of AI.” This exemption means that states should continue to feel free to pass laws regulating their own law enforcement and criminal justice agencies’ use of AI.
Yet to truly ensure the public safety and civil rights and liberties of their citizens, states need broad leeway to enact appropriate governance measures—including ones that place important guardrails on developers of public safety AI, not just agency users. The order attempts to preclude such measures. It tries to justify its call for preemption by citing the need for national standards—which, in theory, make good sense. But no such legislation has passed, and Congress shows no sign of acting any time soon to prevent AI harms. The White House’s recently released national policy framework for AI mostly serves as a vague and abstract roadmap of potential areas where Congress might enact regulation rather than a real plan to develop enforceable standards. In this context, the executive order’s preemption mandate is actually just a broader anti-governance measure in disguise. This kind of purely deregulatory preemption could leave communities defenseless against unvetted technologies that could be used by justice agencies to deprive people of their liberty or their lives. It could chill democratic responsiveness, prolong our regulatory vacuum, and allow untested technology to endanger public safety.
The executive order itself is designed to intimidate in a number of legally dubious ways: deploying the Department of Justice (DOJ) to sue states; threatening to strip federal funding from states with laws deemed “onerous”; directing the Federal Trade Commission (FTC) to declare state algorithmic discrimination laws preempted; and instructing the Federal Communications Commission (FCC) to create reporting standards that would nullify state transparency requirements. By threatening lawsuits and the loss of federal funding, the administration aims to discourage state policymakers from even trying to protect their constituents. This is a direct assault on state sovereignty and violates basic principles of federalism. If successful, the effort would inhibit state regulation without establishing federal protections in their place.
For all its aggressive rhetoric, state officials should understand this simple truth: The order itself has no preemptive power whatsoever. An executive order is not a law. The president cannot unilaterally override state statutes. This order merely directs agencies to take actions that might eventually create pathways for preemption—and each pathway faces serious legal obstacles.
First, the contemplated DOJ lawsuits are unlikely to succeed. The administration’s main theory is that state AI laws violate the dormant commerce clause. That part of the Constitution primarily bars economic protectionism. But most state AI laws apply equally to in-state and out-of-state developers. Another argument, advanced by Trump-allied venture capital firm Andreessen Horowitz, would strike down state laws that impose burdens on commerce that are “clearly excessive” relative to local benefits. The Supreme Court is deeply fractured over whether, when, and how to apply this test, and three conservative justices would discard it altogether when it requires courts to weigh economic harms to companies against noneconomic harms to people—a job we typically ask of elected policymakers. AI companies would struggle to show excessive burden anyway, given their ability to tailor software to different markets.
Second, the order threatens to defund states that regulate AI, including by withholding Broadband Equity Access and Deployment (BEAD) Program money Congress authorized in 2021 to expand access to high-speed broadband. But the president cannot by executive fiat attach new policy conditions to BEAD. In addition, to be valid under the Constitution’s spending clause, any conditions must be related to the grant program’s purposes. The BEAD statute does not even mention AI, and AI deregulation does not advance Congress’ goal of expanding access to high-speed internet.
Finally, the FTC cannot preempt state regulation by mere assertion, nor will it have an easy time proving there is anything deceptive or not “truthful” about state laws that require AI systems not to discriminate—the very hook the executive order relies on to try to invoke FTC jurisdiction. Similarly, nothing in the FCC’s statutory authority, which is limited specifically to the communications industry, authorizes it to regulate AI generally, let alone preempt state AI laws. As one preemption advocate conceded, any such effort to contort the FCC’s authority so it can regulate AI is “a Quixotic exercise in futility.”
The bottom line: States should not be deterred. The administration’s threats are built on legal quicksand. Mounting a defense takes time and resources, but states would be fighting from solid legal ground. The stakes are too high to yield, much less obey in advance; state efforts remain vital to ensuring the use of AI systems in criminal justice settings is safe, reliable, and fair.
No time to yield
Fortunately, both Republican and Democratic governors have already begun to take action despite the federal government’s inaction and the executive order’s discouragement of state-level regulation. California Gov. Gavin Newsom recently issued an order requiring the state’s AI vendors to ensure specific privacy and safety guardrails; he also cited the “failure of the federal government to enact comprehensive, sensible AI policy” when signing legislation that “fills this gap and presents a model for the nation to follow.” Utah Gov. Spencer Cox argued that his state should be allowed to regulate AI to avoid harms, noting that “the Supreme Court is going to back me up on that.” Florida Gov. Ron DeSantis declared “we have a right” to pass AI regulation just a few weeks after Trump signed his order.
State-level regulation of AI is needed to ensure the safety and fairness of this technology. Even better, if done right, state-level regulation also can accelerate the responsible and effective adoption of AI tools shown to improve public safety while protecting civil rights and liberties. Requirements that vendors credibly validate new technologies, for example, could help local agencies better assess which “revolutionary” claims hold up under independent scrutiny, and which are simply snake oil or worse—ultimately avoiding costly contracts for ineffective technology. At the same time, state-level guidance on proper evaluation of AI could position agencies to harness these technologies in the most effective way possible.
In other words, we have a lot to gain from state-level innovation in AI policy. Our country’s laboratories of democracy—the states—should move with urgency and resolve to determine how we can avoid harm while opening the door to responsible and effective use of AI.