Asia AI Diplomacy: Governing AI in a Fragmented World

Executive Summary

This panel discussion explores the critical gap between diplomatic timescales (months/years) and AI crisis timescales (milliseconds/seconds), examining how governments can coordinate across borders when AI-related incidents unfold faster than traditional governance mechanisms can respond. The session presents concrete crisis scenarios, regulatory frameworks, regional perspectives, and proposes institutional mechanisms for cross-border AI incident response, with particular emphasis on perspectives from Asia and the Global South.

Key Takeaways

  1. The Core Problem Is Institutional, Not Technical: We have built deeply interconnected AI systems without corresponding cross-border coordination infrastructure. When incidents unfold at machine speed, the absence of pre-established communication channels, verification protocols, and decision-making authority risks escalation through attribution failures and collapsing trust.

  2. Regulation ≠ Innovation Slowdown: Regulation and innovation coexist successfully in every other critical sector (food, medicine, aviation, building safety, nuclear power). The claim that "fast-moving technology can't be regulated" is a rhetorical tactic historically used to block regulation of cigarettes and carbon dioxide. Regulation should set acceptable risk levels and let technologists achieve them.

  3. Regional Sovereignty and Inclusive Governance Matter: A one-size-fits-all global AI regulation model (based on advanced-economy approaches) will fail. Developing countries need legitimate voice in governance frameworks and space to pursue innovation-first policies. Central Asia's UN resolution establishing a regional AI center and self-regulation principles demonstrates viable alternatives.

  4. Build Trust Channels Before Crisis: Establishing regular technical collaboration, evaluation capacity-building, and personal relationships between government AI officials creates the trust infrastructure needed to coordinate rapidly during incidents. These channels must exist in advance; they cannot be improvised during crisis.

  5. AI Crisis Diplomacy Requires Operational Infrastructure: Taiwan's proposal for a regional AI crisis liaison network—a technical hotline extending existing cybersecurity frameworks (FIRST, APERT) to cover AI-specific incidents—provides a concrete model. This bridges the gap between diplomatic time and algorithmic time through pre-positioned, trusted communication structures.

Conference Talk Summary


Key Topics Covered

  • AI Crisis Diplomacy: Cross-border coordination mechanisms for AI-related incidents
  • Crisis Scenarios: Financial cascades, synthetic media/deepfakes, autonomous infrastructure incidents
  • Regulatory Frameworks: Risk-based regulation, liability models, "behavioral red lines" approach
  • Extinction Risk: AGI safety and acceptable risk thresholds
  • Regional Perspectives: Central Asian, Southeast Asian, Japanese, and Taiwanese approaches
  • Verification & Attribution: Establishing shared reality under uncertainty
  • Trust-Building Mechanisms: Technical evaluation capacity, government-to-government channels
  • Self-Regulation vs. Government Regulation: Tension between innovation and safety in developing economies
  • Institutional Gaps: Absence of formal AI-specific crisis coordination infrastructure
  • Policy Innovation: Taiwan's white-list verification systems, polled consensus-building tools, regional liaison networks

Key Points & Insights

  1. Speed Asymmetry Problem: AI crises operate at machine speed (milliseconds to minutes) while diplomatic coordination operates at human timescales (days to months). Financial systems react in seconds; synthetic media spreads in minutes; autonomous systems can act before governments know something is wrong.

  2. Three Defining Characteristics of AI Crises: (1) Cross-border impact, (2) Speed exceeding any single authority's response capacity, (3) Attribution uncertainty—making it unclear who is responsible and whether incidents are accidents or deliberate.

  3. Liability as Fundamental Regulatory Tool: Professor Russell argues that liability (financial responsibility for harm) is a "permanent and future-proof" regulatory mechanism applicable to fast-moving technologies. However, tech companies systematically disclaim liability (e.g., Microsoft's $5 maximum compensation clause), circumventing this tool that has worked for millennia in other sectors.

  4. Extinction Risk Quantification Gap: AI company CEOs estimate 10-50% extinction risk from their AGI efforts, yet no regulatory framework exists requiring demonstration that loss-of-control risk falls below acceptable thresholds (e.g., 1 in 100 million per year). Current estimates are "seat of the pants," not based on actual calculations.

  5. Behavioral Red Lines Approach: Rather than attempting to regulate AI broadly, specific "obviously unacceptable behaviors" can serve as regulatory targets—e.g., AI systems should not explain how to build bioweapons, should not impersonate humans. Risk level determines required proof reliability.

  6. Trust-Building Through Technical Collaboration: Singapore/IMDA model emphasizes that governments should begin building trust through joint technical evaluation and testing efforts, regular communication channels between AI-focused government officials, and capacity-building in technical assessment—before crisis strikes.

  7. Information Asymmetry Shapes Regulation Needs: Regulation appropriateness varies based on participant knowledge and economic resources. Direct consumer sales (OpenAI to users) involve huge information asymmetry requiring protection; B2B relationships (enterprise software) create natural market-driven testing pressure, as customers are liable for losses.

  8. Central Asian Development Perspective: Countries like Tajikistan argue for outcome-based rather than technology-based regulation, emphasizing that AI self-regulation through market forces and professional standards may be more appropriate than government-imposed frameworks—particularly for developing economies competing globally.

  9. Regulatory Complexity Creates De Facto Barriers: The EU AI Act's complexity (1000+ pages) is causing companies to exit EU markets rather than attempt compliance, demonstrating that poorly designed regulation can stifle innovation, particularly in developing markets, without delivering commensurate safety benefits.

  10. Existing Crisis Infrastructure Can Be Extended: Countries have established coordination mechanisms for pandemic, cybersecurity, and other cross-border crises. Rather than building entirely new institutions, AI-specific crisis response can leverage and extend these existing channels (e.g., FIRST for cybersecurity provides templates).


Notable Quotes or Statements

Stuart Russell (UC Berkeley, AI Safety):

"We need regulations that will require a demonstration that the probability of loss of control leading to human extinction is less than one in a 100 million. And that applies even to systems now. As the systems get more capable, it's only going to get harder."

"You the human race are not allowed to protect yourselves from the technology that we are building that may well make you go extinct."

"Regulation does not have to be an enormous burden... a little sandwich shop can manage to comply with a heavy regulatory burden. And we open across the world more than a million new innovative restaurants and sandwich shops every year."

Wani Lee (Singapore, IMDA):

"When you have trust then you can pick up a phone and call someone and say you know my colleague in Japan is Akiko right and pick up a phone and call Akiko and say hey you know something has happened can we just figure out is this what you're observing today as well or not."

"We can't rely on the labs themselves to show us... so if they give us a number, how do we validate that? Collectively I think we even have less resources than a single lab."

Haruyama Yuko (Japan, Former GPAI Executive Director):

"AI is amplifier of existing crisis... the difference is the speed and we are less equipped humans to make decisions quickly... the phenomena is becoming more critical because we are seeing that AI is gaining agency more and more."

Ajit Azimi (Tajikistan, AI Council Chair):

"Why do we hold AI to a standard that is much tougher and stricter than human decision-making? I find that to be very puzzling."

"Let's not go back into colonial thinking where the whole world should follow that model... frontier economies like Tajikistan have a voice of their own."

Audrey Tang (Taiwan, Cyber Ambassador, Former Digital Minister):

"In diplomacy, we think in years. In the AI world, crisis unfold in milliseconds."

"AI is no longer just a tool. It is a participant."

"Let Asia be not just a rule taker but a supplier of safety infrastructure."


Speakers & Organizations Mentioned

Panel Members:

  • Stuart Russell — Director, Center for Human-Compatible AI, UC Berkeley
  • Wani Lee — Director, Infocomm Media Development Authority (IMDA), Singapore
  • Haruyama Yuko — Former Executive Director, Global Partnership on AI (GPAI); Japan
  • Ajit Azimi — Founding Chair, AI Council, Republic of Tajikistan; Founder, Zipple
  • Audrey Tang — Cyber Ambassador, Taiwan; Former Digital Minister; UN Sustainable Development Goals Advocate
  • Anne Marie Engtoft Melgård — Tech Ambassador, Denmark (mentioned as potentially joining)

Organizations:

  • Safety Asia (convening organization)
  • UN General Assembly (Central Asian AI resolution, July 2024)
  • Global Partnership on AI (GPAI)
  • IMDA (Singapore)
  • Agency for Innovation and Digital Technologies (Tajikistan)
  • OpenAI
  • European Union (AI Act)
  • FIRST (cybersecurity incident response framework)
  • APERT (Asia-Pacific trusted contact framework)
  • Government of Taiwan (digital infrastructure programs)
  • Government of India (digital public infrastructure—Aadhaar, UPI)

Technical Concepts & Resources

Crisis Scenario Frameworks:

  • Financial cascade model (AI misinterpretation → false signals → automated trading across jurisdictions)
  • Synthetic media/deepfake spread model (verification lag vs. political timeline)
  • Autonomous infrastructure incident model (multi-jurisdictional actor responsibility)

Regulatory Approaches:

  • Behavioral Red Lines: Specific, demonstrably unacceptable AI behaviors (rather than broad technology regulation)
  • Outcome-Based Regulation: Focusing on harmful results rather than technical implementation
  • Liability Framework: Financial responsibility for harm caused by systems
  • Risk Quantification: Acceptable probability thresholds for harm categories (e.g., 1 in 100 million for extinction-level risks)
  • AI Self-Regulation: Market-driven professional standards and validation requirements

Technical Safety Tools:

  • Joint Testing & Evaluation: Collaborative government assessment to build mutual trust
  • Out-of-Sample Testing: Validation that models perform on unseen data distributions
  • Out-of-Time Testing: Validation that models remain performant across time periods
  • Synthetic Data Models: Using generated data for risk assessment without exposing real-world sensitive data
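
The out-of-time testing idea above can be made concrete with a minimal sketch (not from the panel; data and function names are illustrative): records are split at a calendar cutoff so a model is evaluated only on periods it never saw during training, guarding against silent performance drift.

```python
# Illustrative sketch of out-of-time validation: split records
# chronologically at a cutoff date, train only on the past, and
# evaluate only on the (unseen) future period.
from datetime import date

def out_of_time_split(records, cutoff):
    """Split (day, features, label) records at a calendar cutoff."""
    train = [r for r in records if r[0] < cutoff]
    test = [r for r in records if r[0] >= cutoff]
    return train, test

records = [
    (date(2024, 1, 5), [0.2], 0),
    (date(2024, 3, 9), [0.7], 1),
    (date(2024, 6, 1), [0.4], 0),
    (date(2024, 9, 20), [0.9], 1),
]
train, test = out_of_time_split(records, date(2024, 6, 1))
print(len(train), len(test))  # 2 records before the cutoff, 2 on/after it
```

Out-of-sample testing follows the same pattern, except the split is over data distribution rather than time.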

Crisis Coordination Infrastructure:

  • FIRST Network: Established cybersecurity incident response framework (potential template)
  • APERT: Asia-Pacific trusted contact framework for incident coordination
  • Regional AI Center Model: Central Asian UN resolution establishing regional coordination hub
  • White-List Verification Systems: Taiwan's SMS short code system for official government message authentication (with sender name + last 3 digits of recipient's phone)
  • Consensus-Building Tools: Pol.is (agree/disagree polling that surfaces bridging statements), Talk to the City (AI-assisted thematic analysis with auditability)
  • AI Crisis Liaison Network: Proposed regional technical hotline extending cyber frameworks to AI-specific incidents
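
The white-list verification rule described above can be sketched as a simple two-factor check (a hypothetical illustration; the short-code list and function names here are invented): a message is trusted only if it originates from a white-listed government short code and quotes the last three digits of the recipient's own number, something a bulk spoofing campaign cannot know.

```python
# Hypothetical sketch of Taiwan-style white-list message verification:
# trust requires BOTH a white-listed sender short code AND a personal
# proof (last 3 digits of the recipient's phone number).
OFFICIAL_SHORT_CODES = {"111"}  # placeholder white-list, not the real registry

def looks_official(sender, quoted_digits, recipient_number):
    on_whitelist = sender in OFFICIAL_SHORT_CODES
    digits_match = quoted_digits == recipient_number[-3:]
    return on_whitelist and digits_match

print(looks_official("111", "789", "0912345789"))   # True: both checks pass
print(looks_official("111", "123", "0912345789"))   # False: digits mismatch
print(looks_official("6666", "789", "0912345789"))  # False: not white-listed
```

The design point is that neither factor alone suffices: the short code defeats sender spoofing, while the personal digits defeat mass-broadcast impersonation.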

Models & Technologies Referenced:

  • Large Language Models (LLMs)
  • Agentic AI systems (capable of planning and autonomous action)
  • Synthetic video/deepfake generation
  • Automated trading systems
  • Generative AI for impersonation/fraud
  • National LLMs (Tajikistan example of local model development)

Governance Concepts:

  • vTaiwan process: combining digital consensus tools with in-person dialogue
  • Co-responsible governance model: shared responsibility between government, private sector, users
  • Regional sovereignty in AI policy: legitimacy of non-Western regulatory approaches
  • Institutional trust-building: pre-crisis relationship formation between government AI officials

Document Type: Conference panel transcript from India AI Impact Summit
Primary Focus: AI Crisis Diplomacy and Cross-Border Governance
Regional Emphasis: Asia, Global South, and developing economies
Key Stakeholder Groups: Policymakers, diplomats, technologists, regulators, startup founders
Urgency Level: Critical (addresses imminent coordination gaps as AI capabilities accelerate)