What got lost in the global AI summit circuit
Summary
The Brookings Institution published an analytical commentary on India's AI Impact Summit 2026, a summit that attracted 600,000 participants and positioned "middle powers" and AI sovereignty as alternatives to Western-centric AI governance. The commentary critiques corporate capture and the exclusion of civil society, and calls for stakeholder-inclusive reforms as the summit circuit moves to Geneva.
What changed
The Brookings Institution published a commentary analyzing the global AI summit circuit, focusing on India's AI Impact Summit 2026. The summit recorded 600,000 participants and sought to position "middle powers" and AI sovereignty as a third path of influence challenging traditional Western-dominated global AI governance. Despite its scale and stated goals, critics raised concerns about corporate capture and the physical exclusion of civil society from proceedings.
For compliance and policy professionals, this commentary highlights emerging tensions in multilateral AI governance discussions. While the article does not impose regulatory requirements, it signals growing advocacy for more inclusive summit structures. As the official AI summit circuit moves to Geneva, stakeholders involved in international AI policy discussions should anticipate potential calls for structural reforms to stakeholder representation and corporate involvement in future governance forums.
Source document (simplified)
Commentary
What got lost in the global AI summit circuit?
Sacha Alanoca, Co-Founder - Stanford Critical AI Group, Stanford University
Chinasa T. Okolo, Founder and Scientific Director - Technecultura
April 2, 2026
- India’s AI Impact Summit signaled a pivot toward a less Western-centric agenda by championing “middle powers” and AI sovereignty as a third path of influence against the traditional global order.
- Despite recording 600,000 participants, the summit faced criticism for corporate capture and the physical exclusion of civil society.
- As the official AI summit moves to Geneva next year, advocates are calling for a course correction to ensure that the agenda prioritizes genuine stakeholder solidarity over the interests of private corporations.
Commuters walk past a hoarding of the AI Expo along a street on the eve of the 'India AI Impact Summit 2026' in New Delhi on February 15, 2026. (Photo by Arun SANKAR / AFP via Getty Images)
More On
- Business & Workforce: Corporations
- International Affairs: Geopolitics
- Technology & Information: Artificial Intelligence, Technology Policy & Regulation
- U.S. Economy: Regulatory Policy
Program: Governance Studies
Center: Center for Technology Innovation (CTI)
In February, India’s AI Impact Summit proclaimed a bold promise: Middle powers can reshape the global artificial intelligence (AI) order. With 600,000 recorded participants, the first official AI summit organized in a global majority nation staked its claim, compared to over 1,000 participants total in Paris and 100 in Bletchley. Riding on Canadian Prime Minister Mark Carney’s speech at Davos earlier this year, the summit and the Indian government resolutely championed the concept of “middle powers,” which form a third path of influence in response to a “rupture in the world order.” Such framing stood in sharp contrast with the Paris AI Action Summit in 2025. The Paris edition was defined by the turbulence of President Donald Trump’s first 100 days: deregulatory fever, anti-safety narratives, and the geopolitical noise of Greenland and NATO threats in the background. The AI world has since absorbed the shock and shifted its center of gravity toward a less Western-centric agenda. While the summit’s key visual remains the missed hand-holding between the CEOs of OpenAI and Anthropic, the rhetoric was anchored around middle powers and “AI sovereignty,” breaking path dependency from the old world order.
And yet, the question remains: How do these buzzwords and impressive metrics translate meaningfully, and who is sidelined in the process? As noted by Alondra Nelson—former director of the White House Office of Science and Technology Policy—the summit coincided with Chinese Lunar New Year and the start of Ramadan, thus physically excluding important stakeholders from the conversation. While applauding the fact that this year’s summit was, for the first time, not invite-only and welcomed a wider community, Nelson noted the paradox of civil society organizations’ exclusion from main summit discussions when major tech CEOs were in attendance on the fourth day. The panel where Nelson raised these points further encapsulated this tension—the sole women-only panel of the summit, also featuring “Empire of AI” author Karen Hao, was left for the last day, last session, in a far-off room.
Amber Sinha highlighted this dynamic in a piece for Tech Policy Press, demonstrating how the Internet Freedom Foundation’s mapping of the summit agenda confirmed what the room design already suggested: The organizers’ stated commitment to “democratization” left little room for the questions it actually demands—power and value redistribution and meaningful accountability. Despite industry actors setting much of the tone, the “democratic AI” framing nonetheless positioned the summit as an alternative model. The optics of inclusion masked a more selective reality.
Without civil society in the room, words lose their meaning. At the conference, Hao identified one of the summit’s biggest risks as corporate control over AI narratives. Industry capture of shared terminology, such as “sovereignty,” “regulation,” and “energy consumption,” constrains the range of actions and social imaginaries that would enable communities to resist AI domination. She also highlighted that while OpenAI CEO Sam Altman may nominally support “regulation,” it is in the most cosmetic, washed-out form—erring on the side of voluntary standards over regulatory compliance. At the summit, Altman also attempted a misleading equivalence to downplay AI’s energy costs, arguing “it also takes a lot of energy to train a human.”
When civil society is not actively engaged, moral equivalences like these go unchallenged, and concepts like AI sovereignty get watered down. Currently, a small cluster of private corporations controls the AI industry through closed, proprietary systems, where a single provider designs and executes the AI model, platform, pricing, and its safeguards. When governments build critical infrastructure on proprietary systems that they cannot audit, sovereignty becomes a marketing opportunity and not an autonomous strategy for greater inclusion.
This also raises a longer-term nomenclature question. From the inaugural U.K. AI Safety Summit in 2023 to India’s AI Impact Summit, sessions focusing on “solidarity” and “societal impact” were never truly centered in these agendas and now seem to have been fully sidelined. From this standpoint, the India AI Impact Summit, despite being organized in a global majority nation, appeared to be a continuation of the concepts discussed at the French AI Action Summit: a clear move away from governance and safety toward innovation and the projection of national AI champions.
An interesting phenomenon has emerged in parallel. As the official summits narrow their focus, events that occur outside of the main conference programming have quietly become more formalized spaces for AI ethics and safety communities, especially for the voices sidelined from the main agenda. Examples of these include the Participatory AI Research and Practice Symposium, the Multistakeholder Convening on AI Governance, the Global South AI Research Colloquium, and AI Safety Connect.
Next year, the AI summit moves to Geneva, Switzerland, a city accustomed to hosting the annual AI for Good Summit by the International Telecommunication Union (ITU). However, the ITU’s summit has also been marked by corporate capture, with almost half of last year’s speakers hailing from tech companies and notable speakers like Abeba Birhane being censored on critical topics like AI’s societal impact.
Next year offers an opportunity to correct course. We must ensure that this time, real talks include all relevant stakeholders and happen in the open; that summit dates allow participants from distinct geographies to join; that civil society organizations are included in the agenda design, not relegated to last-day slots in off-center rooms; and that, given the visa and budget barriers of organizing a summit in one of the world’s most expensive cities, meaningful support is extended to participants who need it most. As we move from “safety” to “action” to “impact,” solidarity should make a comeback.
The Brookings Institution is committed to quality, independence, and impact. We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).