AI Standards, Regulations, and Enforcement Efforts Discussed
Summary
Global jurisdictions are discussing policies for responsible AI development and use, but the pace of AI innovation is outpacing regulation. Stakeholders at the AI Standards Hub Global Summit 2026 highlighted the importance of technical standards and assurance systems in guiding compliance amidst evolving regulatory frameworks like the EU AI Act and a patchwork of US state laws.
What changed
This article discusses the ongoing global efforts to establish standards and regulations for artificial intelligence (AI) systems, noting that the rapid advancement of AI technology is outpacing regulatory development. Key stakeholders, including representatives from the OECD and the British Standards Institution, emphasized the critical role of industry standards and international cooperation in ensuring AI is developed and deployed responsibly. The discussion highlighted the varying approaches to AI governance, such as the EU's AI Act and the US's reliance on NIST frameworks and state-level legislation, and the challenges posed by a fragmented regulatory landscape.
For regulated entities, this indicates a complex and evolving compliance environment. Companies developing or deploying AI systems must navigate a patchwork of international, national, and state-level regulations and standards. The article suggests a focus on technical standards and assurance systems as crucial tools for compliance. The lack of harmonized global enforcement and the potential for significant compliance costs and barriers to cross-border deployment are key risks highlighted, underscoring the need for proactive engagement with evolving AI governance frameworks.
What to do next
- Monitor evolving AI regulations and standards globally, particularly in key markets like the EU and US.
- Assess current AI development and deployment practices against emerging standards and frameworks.
- Engage with industry bodies and regulatory consultations related to AI governance.
Source document (simplified)
Published
18 March 2026
Contributors:
Lexie White
Staff Writer
IAPP
Global jurisdictions are increasingly open to considering policies to ensure artificial intelligence systems are developed and used responsibly while balancing safety and innovation. But the race to the top of the global AI market is outpacing regulation, leaving companies exposed to risk as they try to streamline implementation plans.
Stakeholders at the AI Standards Hub Global Summit 2026 noted that while regulatory frameworks such as the EU AI Act continue to develop, organizations may be focusing their efforts on technical standards, assurance systems and other tools that can set compliance practices on the right course.
Sara Rendtorff-Smith, who heads the Organization for Economic Co-operation and Development's Division on AI and Emerging Digital Technologies, noted organizational and industry standards are "essential to governing AI well" and act as the "quiet infrastructure of innovation, as they enable us to scale AI safely and responsibly across our economies and societies."
The role standards play and how they are shaped
Rendtorff-Smith highlighted the importance of the OECD's industry standards, noting the "ability to govern effectively and around the world" must grow as rapidly as AI technologies are moving. She added international cooperation continues to be the linchpin for balancing sector-specific standards and regulations.
However, standards are perceived differently depending on the jurisdiction. Some EU organizations are currently arguing they cannot effectively comply with the AI Act without the industry standards they were promised before implementation deadlines. On the other hand, U.S. organizations are relying on the National Institute of Standards and Technology's suite of AI standards, including the AI Risk Management Framework and the AI Agent Standards Initiative, to pave the way to responsible AI with a patchwork of state laws and no cross-sectoral federal law.
Tailoring standards to common practices and policies is also an emerging priority. The OECD is keeping stakeholders from the public and private sectors apprised through its AI Policy Observatory, which tracks more than 2,000 AI policies across over 80 jurisdictions. The IAPP does similar tracking through its Global AI Law and Policy Tracker.
The variance across global proposals, in addition to the ever-evolving nature of AI, is putting new pressures on standards development.
"AI is testing the system like nothing else ever has. We shouldn't be blind to that. We shouldn't be afraid to admit it. AI is testing regulators. It's testing society. It's testing academia. It's testing industry," British Standards Institution Standards Policy Director David Bell said. "AI is changing the way we work in so many fundamental ways. … Standards build the world, but the way we work going forward has got to change."
Another pillar of standards building is collaborative enforcement, which serves as a reference for best practices. Rendtorff-Smith warned that insufficient global enforcement and "a very fragmented landscape" put at risk the foundation standards need for strength and adoptability.
"(The fragmented landscape) will be marked not just by significant compliance cost to businesses, but also by barriers to cross-border deployment as well as stifled innovation, ultimately," she said. "And so we need principles, we need a common definition, we need to align frameworks, and we need evidence-based standards."
London School of Economics and Political Science Data Science Institute Distinguished Policy Fellow Florian Ostmann also highlighted the relationship between technical standards and enforcement efforts in supporting responsible AI safeguards.
"Standards play an important role in facilitating the implementation and compliance with regulation. It's also clear that standards have an important role to play as a complementary tool to regulation (such as) performing functions that regulation can't perform," Ostmann said.
Next steps
Stakeholders argue enforcement efforts should focus on strengthening coordination across regulatory and technical tools while addressing potential gaps in AI implementation.
OECD AI Senior Economist Luis Aranda noted organizations should be looking to measure their compliance and data protection safeguards. He said that while standards and regulations define expectations for trustworthy AI, consistent methods for assessing system performance remain limited.
"I think that's what we need to be thinking of today," Aranda said. "Standards are great. They tell you what good looks like. But they don't tell you how to measure good."
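Aranda's point, that standards describe "what good looks like" but not how to measure it, can be illustrated with a minimal self-assessment sketch. The four core functions below (Govern, Map, Measure, Manage) come from NIST's AI Risk Management Framework 1.0, which the article cites; the control names and their tags are invented for illustration and do not reflect any real organization's inventory.

```python
# Hypothetical gap analysis: map internal controls to the four NIST
# AI RMF 1.0 core functions and report which functions lack coverage.
# Control names and tags below are illustrative assumptions.

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

# Example inventory: each control is tagged with the RMF functions
# it supports. Note no control here addresses "Manage".
controls = {
    "model-risk-register": {"Govern", "Map"},
    "bias-evaluation-suite": {"Measure"},
}

def coverage(controls):
    """Return, per RMF function, the list of controls addressing it."""
    report = {fn: [] for fn in RMF_FUNCTIONS}
    for name, fns in controls.items():
        for fn in fns:
            report[fn].append(name)
    return report

report = coverage(controls)
gaps = [fn for fn, cs in report.items() if not cs]
print("Coverage:", {fn: len(cs) for fn, cs in report.items()})
print("Gaps:", gaps)   # here: ["Manage"]
```

Even a toy mapping like this makes the measurement problem concrete: the framework names the functions, but deciding which controls count as evidence for each one, and how much is enough, is exactly the assessment methodology Aranda says is still missing.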
He also emphasized the need for more inclusive and globally representative governance approaches, noting those will require "shared foundations, shared concepts, and shared definitions." Standing in the way are concerns that specific AI regulations could hinder innovation.
"We all know there's a global AI race, and no country, of course, wants to be the first runner-up while everyone else is sprinting ahead," Aranda added. "We see all those factors contributing to this timing concern when it comes to regulation, and this is probably why we're also seeing a second wave of national AI initiatives."
This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.