Smart Regulation: Rightsizing Governance for the AI Revolution

Executive Summary

This panel discussion explores pragmatic approaches to AI governance in a fragmented geopolitical landscape, rejecting the possibility of global consensus while advocating for coalition-building around shared priorities. The speakers emphasize that emerging economies face an acute "AI divide" due to compute, data, infrastructure, and skills gaps, and propose open-source models, federated learning, sectoral governance, and public-private resource sharing as mechanisms to democratize AI access and ensure technological sovereignty.

Key Takeaways

  1. Coalition-building over consensus: In a fragmented world, pragmatic alignment around specific issues (verification, risk mitigation, data governance) scaled through multilateral mechanisms is more achievable than global governance consensus.

  2. Sovereignty through strategic dependency mapping: Emerging economies should map which dependencies are acceptable (e.g., hyperscaler cloud services) and where to invest in indigenous solutions (e.g., open-source models, local data infrastructure), avoiding the trap of either total autonomy or total reliance.

  3. Open-source + local fine-tuning is the governance model for the Global South: The Linux/LAMP stack analogy shows that shared technical infrastructure with local adaptation (multilingual models, sectoral applications, data trusts) enables participation without requiring permission or dominance by frontier labs.

  4. Fix the unglamorous infrastructure first: Governance conversations often miss power, connectivity, skills, and institutional capacity—the "tissue" that holds AI systems together. These are prerequisites for meaningful participation by developing nations.

  5. Regulate by sector, build with innovation-first mindset: Effective AI governance should be sectoral (healthcare, finance, education) rather than horizontal, and should prioritize innovation before regulation—especially in emerging economies where creating capability is more urgent than constraining it.

Key Topics Covered

  • Geopolitical fragmentation and AI governance: The impossibility of global consensus and the viability of coalition-building around specific issues
  • The "AI divide" and resource constraints: Compute, data, infrastructure, and skills gaps in developing nations
  • Open-source and decentralized models: Linux and LAMP stack analogies; how open architectures enable local adaptation and sovereignty
  • Technical standards and interoperability: NIST, ISO frameworks; red teaming; multilingual benchmarks; shared evaluation practices
  • Public-private partnerships and shared resources: Compute consortiums; cloud credits; data collectives; indigenous data governance models
  • Sectoral governance approaches: Healthcare, education, climate resilience; horizontal vs. sectoral regulation
  • Institutional capacity and talent: The critical shortage of governance expertise in developing nations
  • Federated learning and data trusts: Privacy-preserving collaborative training; data provenance and licensing
  • Sovereignty and strategic autonomy: Mapping dependencies; deciding where to invest in indigenous solutions vs. accept foreign reliance
  • Innovation-first governance mindset: Prioritizing innovation before regulation, especially in emerging economies

Key Points & Insights

  1. Global AI consensus is unrealistic; coalition-building around specific issues is pragmatic

    • Isabella Wilkinson (Chatham House) argues that full geopolitical alignment on AI governance is a "no-go," but partial alignment on priority areas is achievable through coalition-building mechanisms that can then be scaled via multilateral formats. Success requires framing around sovereignty and strategic autonomy rather than shared values.
  2. The AI divide will be larger and more consequential than the digital divide

    • Rafik Ricoran (Mozilla CTO) notes that while the digital divide was about access, the AI divide is about agency and capability. The gap threatens not only access to resources but the ability of developing nations to participate meaningfully in AI governance and development.
  3. Compute is the foundational constraint, but the infrastructure problem is multi-layered

    • Raja Nambia (NASSCOM, India) identifies compute costs as prohibitively expensive even after purchasing-power-parity adjustments; power infrastructure, connectivity, and data quality and organization are equally critical but often overlooked. The infrastructure problem includes the "tissue that holds it all together."
  4. Open-source and decentralized architectures offer a path to digital sovereignty

    • The Linux and LAMP stack models demonstrate that shared infrastructure with local sovereignty is viable. Participants can contribute to common codebases while fine-tuning implementations to local values, creating flexibility at personal, corporate, and national levels without requiring permission from centralized gatekeepers.
  5. Data provenance, licensing, and collective ownership models are underexploited

    • Rafik highlights emerging models such as Mozilla's data collaborative, Hawaiian genomic data trusts, and indigenous data collectives. These ensure attribution, compensation, and local control over how data is used in AI training, while preventing mass scraping by frontier labs.
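As an illustrative sketch only (not a real schema from Mozilla, the Hawaiian trusts, or any indigenous data model), a provenance record in such a collective might bind each contribution to attribution, license terms, and an explicit list of permitted AI training uses; all names and fields below are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProvenanceRecord:
    """Hypothetical record a data trust might keep per contribution."""
    contributor: str                  # who the data is attributed to
    community: str                    # collective that governs its use
    license_terms: str                # e.g. "attribution + revenue share"
    permitted_uses: list = field(default_factory=list)
    compensation_due: bool = True     # trust owes the contributor on use
    collected_on: date = date(2025, 1, 1)

    def allows(self, proposed_use: str) -> bool:
        """A use is permitted only if explicitly listed — no mass scraping."""
        return proposed_use in self.permitted_uses

record = ProvenanceRecord(
    contributor="Clinic A",
    community="Regional health data trust",
    license_terms="attribution + revenue share",
    permitted_uses=["diagnostic-model-training"],
)
print(record.allows("diagnostic-model-training"))  # True
print(record.allows("mass-scraping"))              # False
```

The design point is that permission is an allow-list attached to the data itself, so downstream trainers must check the record rather than assume consent.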
  6. Federated learning enables cross-border collaboration without data transfer

    • Training models across distributed systems (e.g., on-device training with only model weights shared centrally) allows countries and regions to contribute healthcare, linguistic, and cultural data without ceding sovereignty, enabling larger, more representative models.
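The mechanism described above can be sketched as a toy federated-averaging loop, assuming a simple linear model and synthetic per-client data (all specifics here are illustrative): each "country" trains locally on data that never leaves it, and only model weights are aggregated centrally.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: dataset-size-weighted mean of the returned weights."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Three "regions", each with private data that stays local.
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

for _ in range(20):  # communication rounds: only weights cross the border
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))  # converges toward true_w
```

Real deployments add secure aggregation and differential privacy on top of this loop, since raw weight updates can still leak information about local data.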
  7. Sectoral governance is more effective than horizontal frameworks

    • Nambia emphasizes that governance approaches must be tailored by sector (healthcare, finance, education) because the nature and impact of AI harm differs significantly. This is more effective than one-size-fits-all regulation.
  8. Institutional capacity and regulatory talent are the real bottleneck

    • A critical and underappreciated constraint is the lack of technical and governance expertise in regulatory agencies across developing nations. Capacity building must include training regulators and policymakers to understand AI harms, not just AI capabilities.
  9. Standards (NIST, ISO) and industry coalitions drive practical alignment

    • Halak Shiraasava (Cohere) notes that flexible, evolving technical standards and cross-industry coalitions (including startups) create more durable and inclusive frameworks than rigid national regulations, which risk pricing out smaller companies.
  10. Procurement policies and public-sector deployment are levers for inclusion

    • Opening procurement policies to international competition and ensuring public agencies have the skills to deploy and govern AI creates a demand signal for diversified, locally-appropriate solutions, which stimulates innovation in underserved regions.

Notable Quotes or Statements

"Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible and it's pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format." — Isabella Wilkinson, Chatham House

"The AI divide is going to be much much bigger than the digital divide which we saw because...the biggest difference is that...the digital divide is about access...whereas this is all about agency." — Raja Nambia, NASCOM (India)

"You don't want four people in San Francisco making government's decisions for the entire world. That doesn't make a lot of sense." — Rafik Ricoran, Mozilla CTO

"What you don't want is to have this 100 versions of the same thing with a few nuances here and there." — Raja Nambia, on the need for reusable policy frameworks

"Capacity building isn't just running workshops or talking to regulators. Shared evidence—documents, results, performance, benchmarks—is what lifts up other players." — Halak Shiraasava, Coher

"The economics are starting to make a lot of sense. MIT economist Frank Nagel has a report recently that approximately 24 billion US dollars are being wasted by not switching to open source models right now." — Raja Nambia


Speakers & Organizations Mentioned

| Speaker | Organization | Role |
| --- | --- | --- |
| Sabina Chofu (Moderator) | Tech UK | International Policy & Strategy Lead |
| Isabella Wilkinson | Chatham House | Research Fellow, Digital Society Program |
| Rafik Ricoran | Mozilla | Chief Technology Officer |
| Raja Nambia | NASSCOM (India) | President |
| Halak Shiraasava | Cohere (Canadian AI company) | Global AI & Public Policy, Regulatory Affairs |
| Nina Singh | Credo AI | (Absent; meeting with president) |

Related Institutions & Initiatives:

  • NIST (US National Institute of Standards & Technology)
  • ISO (International Organization for Standardization)
  • ITU (International Telecommunication Union)
  • Chatham House (UK think tank)
  • Tech UK (sister association of NASSCOM)
  • AI Safety Asia
  • Mozilla Data Collective
  • MIT Economics research (Frank Nagle)

Technical Concepts & Resources

| Concept | Description | Key Reference |
| --- | --- | --- |
| Linux/LAMP stack model | Open-source shared infrastructure enabling local sovereignty and derivative work | Rafik Ricoran's analogy for decentralized AI governance |
| Federated learning | Distributed training where data remains local; only model weights are centrally aggregated | Handwriting recognition on Android phones (Google); proposed for healthcare/language data across borders |
| Data trusts/collectives | Community-controlled data governance structures with provenance tracking and compensation mechanisms | Hawaiian genomic data trusts (UCSD); Mozilla Data Collective; indigenous data models |
| Multilingual models | Language models adapted to regional/linguistic contexts | Southeast Asian languages under one network (SEA LLM); local adaptation model |
| NIST & ISO frameworks | Flexible, evolving technical standards for AI risk, evaluation, and interoperability | Cited as key enablers for startups and international alignment |
| Red teaming | Adversarial testing of AI systems; shared evaluation documentation | Part of industry coalition practices for risk mitigation |
| Procurement policies | Government procurement frameworks that can drive demand for diverse, locally appropriate AI solutions | Highlighted as a lever for inclusion and innovation in developing nations |
| Sectoral governance | Tailored AI regulation by sector (healthcare, finance, education) rather than one-size-fits-all frameworks | Emphasized by Raja Nambia as more effective than horizontal approaches |
| Regional compute consortiums | Shared GPU clusters and infrastructure pooled across countries, academia, industry, and government | Example: India's AI mission cluster |
| Cloud credits | Subsidized access to hyperscaler GPU resources (AWS, Google Cloud, etc.) | Mechanism for emerging-economy participation without capital investment |

Additional Context & Implications

Governance Philosophy Shifts:

  • Away from: Centralized regulation by frontier labs and Western institutions
  • Toward: Distributed coalition-building, open-source infrastructure, sectoral governance, public-private partnerships

Practical Next Steps (12–18 months, per Halak):

  • Convergence on technical standards via NIST/ISO/ITU bodies
  • Increased institutional participation and technical literacy
  • Cross-sector industry coalitions forming around procurement and shared evidence
  • Economics of open-source adoption becoming undeniable (24 billion USD waste argument)

Unresolved Tensions:

  • How to ensure meaningful participation from smaller developing nations without creating regulatory burden
  • Speed of governance vs. speed of AI capability development
  • Balancing data protection/sovereignty with model training needs