Multilingual AI in Universities: Advancing Inclusive Education

Executive Summary

This panel discussion examines how to foster meaningful multistakeholder participation in AI development across the entire lifecycle—from conception through post-deployment. The speakers argue that AI governance has been overly centralized in the Global North among industry and government actors, excluding valuable expertise from civil society, affected communities, and policy experts. The conversation emphasizes the need to move beyond voluntary commitments and soft law toward binding regulatory frameworks while drawing lessons from both social media regulation failures and international development best practices.

Key Takeaways

  1. Move from soft to hard law: Organizations and policymakers must establish timelines for transitioning AI governance from voluntary commitments to binding regulations and mandatory processes—learning from social media's 15+ year stall in the consensus-building phase.

  2. Put affected communities first: Meaningful AI development requires starting with community needs and expertise, not deploying models and asking communities to adapt afterward. This reverses current practice and requires genuine power-sharing in decision-making.

  3. Expand the table systematically: Civil society, policy experts, journalists, and affected communities must have structured, funded roles in AI governance conversations—not just token representation. This requires intentional decentering of industry and Global North perspectives.

  4. Learn from proven frameworks: Decades of international development, technology policy, and social science research provide tested models for participatory governance. The AI field should apply these rather than treating participation as a novel problem requiring only technical solutions.

  5. Define participation's endpoint: Before designing participation mechanisms, stakeholders must clarify what participation is for—what decisions it influences, what outcomes it drives, and what accountability follows when developers ignore community input.

Key Topics Covered

  • Multistakeholder participation in AI governance — mechanisms and effectiveness across the AI development lifecycle
  • Policy and regulatory approaches — from voluntary commitments to hard law, standards, and binding agreements
  • Global equity and representation — why AI governance conversations remain concentrated in the Global North
  • Lessons from social media regulation — cautionary parallels and institutional accountability failures
  • Community expertise and participatory design — drawing on international development models for AI inclusion
  • Power distribution in technological decision-making — who gets to decide what problems AI should solve
  • Language models and linguistic diversity — importance of community input in multilingual model development
  • Box-checking vs. genuine participation — distinguishing performative inclusion from meaningful engagement

Key Points & Insights

  1. Regulatory trajectory problem: There is a dangerous pattern of remaining in the "consensus-building" phase too long (as with social media), delaying the shift from voluntary commitments to structured regulation. A cutoff point is needed to move toward mandatory processes and standards.

  2. Historical repetition risk: Without hard regulatory mechanisms in the next few years, AI governance risks repeating the social media regulatory failure—platforms made repeated commitments without accountability mechanisms, resulting in systemic harms.

  3. Process-based regulation as interim approach: Rather than only mandating outcomes, regulations should mandate processes—requiring companies to integrate specific design and deployment practices that surface potential harms and challenges.

  4. Power dynamics at the core: Meaningful participation is fundamentally about how power is distributed in technological decision-making—specifically, who decides what problems technology should solve and how.

  5. Reverse-engineering use cases: Current AI deployment follows a pattern where frontier companies launch models first, then expect users to discover appropriate use cases afterward. This contrasts with participatory development where communities define their own needs first.

  6. Community expertise is underutilized: Communities and local stakeholders possess deep expertise about their linguistic contexts, cultural needs, and specific requirements—knowledge that is often excluded from model development processes.

  7. Lessons from international development: The "Putting People First" framework from development work (1990s onward) demonstrates that effective participation requires starting with community needs and concerns, not imposing solutions after the fact.

  8. Ahistorical framing in AI discourse: The AI policy conversation often treats these governance challenges as entirely novel, ignoring decades of experience building complex technologies (such as privacy-enhancing technologies) with interdisciplinary teams that included non-technical experts.

  9. Global North concentration of AI governance: Most AI system procurement and deployment conversations occur in industry or G2G (government-to-government) spaces in the Global North, systematically excluding affected populations and regional expertise.

  10. Participation vs. box-checking: A critical open question remains: how to ensure participation mechanisms are substantive rather than performative exercises that appear to include stakeholders without meaningfully influencing decisions.


Notable Quotes or Statements

  • Jalak Kakar (implied): "If we don't make that shift over the period of the next couple of years from voluntary commitments into harder, more structured forms of regulation we will...see a repetition of what we have seen with the history of social media where platforms ran free."

  • Vanraj Ther (implied): "When we talk about participation and collaboration and building partnerships with communities...we're in effect talking about how powers are distributed and what kinds of decisions are made by who."

  • Vanraj Ther (implied): "Communities...are defining what technology use cases are best for them, which is the reverse of what we see now where frontier model companies deploy models with the idea of...let people figure out what the use case is best for them after the fact."

  • Opening framing (Aliyia Bhhata): "More often than not these conversations are being led and sort of centralized in the global north in industry or G2G government to government spaces and that loses out on a lot of very valuable expertise both from civil society..."


Speakers & Organizations Mentioned

  • Aliyia Bhhata — Senior Policy Analyst, Center for Democracy and Technology (Washington-based, nonpartisan nonprofit)
  • Jalak Kakar — Executive Director, Center for Communication Governance at National Law University, Delhi
  • Vanraj Ther — Inaugural Professor & Director, Emerging Technology Initiative, George Washington University Law School
  • Marlina Wiznjak — Facilitator (absent), European Center for Not-for-Profit Law

Technical Concepts & Resources

  • Large language models (LLMs) — the primary AI systems discussed as subjects of governance challenges
  • Multistakeholder governance models — participatory frameworks for technology development
  • Soft law vs. hard law distinction — regulatory spectrum from voluntary commitments to binding regulations
  • Process-based regulation — mandating procedural requirements rather than outcome specifications
  • Privacy-enhancing technologies — cited as example of complex technologies successfully developed with interdisciplinary teams
  • "Putting People First" framework — international development principle emphasizing community-centered design (attributed in the session to the scholar Chambers's work from the 1990s)
  • Language model linguistic diversity — specific concern about ensuring linguistic communities inform model development
  • Participatory design methodology — approach to technology development centering affected communities' needs

Gaps & Limitations in This Transcript

The transcript appears incomplete and contains significant transcription artifacts (repeated phrases like "multistakeholder meaningful multistakeholder meaningful multistakeholder participation"). This limits full reconstruction of:

  • Complete arguments from both panelists
  • Specific policy or regulatory examples referenced
  • Outcomes of the breakout group discussions
  • Questions and responses from the audience
  • Concrete initiatives or case studies showcasing successful participation

For comprehensive analysis, the full video or a cleaned transcript would be necessary.