AI Governance in the Age of Powerful AI: International Perspectives and the Code of Practice

Executive Summary

This panel discussion focuses on the European AI Act's approach to governing frontier AI development through a flexible "code of practice" rather than rigid legislative prescriptions. The speakers emphasize that AI governance must balance innovation with safety, require international cooperation (including with China), and create structural conditions enabling companies to prioritize safety over competitive pressure—while explicitly addressing high-stakes risks like military AI and loss-of-control scenarios.

Key Takeaways

  1. Competitive pressure, not malice, drives inadequate AI safety practices – The real governance challenge is creating conditions where competitive incentives align with safety, not assuming companies are unwilling to be responsible.

  2. Regulation must be precise yet flexible – Clear objectives with adaptive implementation mechanisms (like codes of practice) outperform either vague legislation or overly prescriptive rules that quickly become obsolete.

  3. International cooperation is non-negotiable and urgent – Governance of frontier AI requires treating Chinese, US, and European actors as equals at the table; avoiding this multiplies risks in military AI and loss-of-control scenarios.

  4. Context-specific trust frameworks matter more than one-size-fits-all rules – AI governance effectiveness increases when tailored to specific use cases and sectors rather than applied uniformly across all applications.

  5. Public institutions must lead on existential and military risks – Not all AI governance can or should be delegated to industry; governments need enhanced technical capacity and authority to manage highest-stakes scenarios.

Key Topics Covered

  • EU AI Act & Code of Practice Framework – The legislative approach to defining and mitigating AI risks through collaborative rule-making
  • Risk Mitigation Strategy – Distinguishing between existential and systemic risks (democratic processes, misinformation, cyberbullying, criminal use)
  • Competitive Pressure as a Governance Challenge – Recognition that geopolitical competition prevents companies from implementing safety measures they themselves acknowledge as necessary
  • International Cooperation Requirements – Need for multilateral engagement including European, US, and Chinese actors as equals
  • Context-Specific Governance – Importance of use-case-driven and domain-specific trust frameworks (medicine vs. customer service)
  • Military AI & Loss-of-Control Risks – Areas requiring government-level (not business-level) cooperation and governance
  • Building Citizen Trust – Innovation and fundamental rights protection framed as compatible, not contradictory goals
  • Implementation & Enforcement – Role of the European AI Office in making the code of practice effective and applicable

Key Points & Insights

  1. Flexible Regulation Over Prescriptive Detail – Rather than detailing every risk mitigation requirement in legislation, the EU AI Act delegates specifics to a collaborative code of practice involving civil society, developers, academia, and enterprises. This allows rules to evolve with the rapidly changing AI landscape.

  2. The Competitive Pressure Problem – CEOs of leading companies explicitly acknowledge they would implement additional safety measures if competitive and geopolitical pressure allowed it. This gap between what companies know is prudent and what they actually do represents a critical governance failure that regulation must address.

  3. Systemic Risks to Democracy – Beyond existential risks, the code of practice targets "systemic risks" that threaten democratic processes: misinformation, cyberbullying, and AI-enabled criminal activity. Building a "culture of restraint" in system design is framed as essential to protecting citizen rights.

  4. Trust Through Use-Case Specificity – One speaker argues the gap between perceived AI power and organizational deployment stems from lack of context-specific trust frameworks. Trust requirements differ fundamentally between medicine (high consequences) and customer service (lower consequences), requiring domain-by-domain governance approaches.

  5. Public Institutions Must Lead on High-Stakes Issues – Military AI use, loss-of-control scenarios, and international coordination cannot be delegated to private actors. Governments must take primary responsibility and "not lose any more time" establishing frameworks and agreements.

  6. Innovation and Safety Are Compatible – The closing argument reframes the innovation-vs.-safety debate: trust-enabled governance can coexist with rapid AI advancement across different national contexts, but requires sustained international collaboration.

  7. Implementation Capacity Gap – Effective governance requires ensuring public institutions (like the European AI Office) have sufficient resources and authority to enforce the code of practice and match the technical sophistication of "very powerful private actors."

  8. The Code as an International Reference Point – Speakers present the EU code of practice (particularly its safety chapters) as a potential template other countries can adopt, signaling an attempt to establish soft-law international standards rather than fragmented national approaches.

Notable Quotes or Statements

"They say they would like to be able to take additional steps, but under the competitive geopolitical pressure they're in, they do not feel that they are able to. We should be hearing that. That should be a red alarm bell for us." — Sean (governance expert/scholar)

"We want the code of practice to contribute in building trust among our citizens on the fact that we can innovate without sacrificing human rights and protection of our fundamental values." — Brando (Parliament speaker)

"The bottleneck is about how they know how to trust it in the right context because the answer is very different in medicine than it is for customer service." — Speaker emphasizing use-case specificity

"Don't lose any more time. You need to sit down and use these occasions to do progress." — Brando, on urgent need for government coordination

"Innovation and trust can go together and we can find different ways to make sure that trust is ensured or enabled in a particular country and in a particular continent." — Panel closing statement

Speakers & Organizations Mentioned

  • Brando – European Parliament representative, lead voice on code of practice implementation
  • Sean – Governance expert/scholar advocating for international cooperation and competitive pressure mitigation
  • Professor Bengio – Researcher on AI risks (including loss-of-control scenarios); cited on code of practice design
  • European AI Office – Institutional body responsible for implementing and enforcing the code of practice
  • EU Member States – Implied signatories to the AI Act framework
  • China – Referenced as a critical actor in international AI governance who must be brought "to the table as equals"
  • Leading AI Companies (unnamed CEOs) – Acknowledged as sources of competitive pressure concerns

Technical Concepts & Resources

  • EU AI Act – Primary legislative framework discussed; notably uses code of practice rather than prescriptive regulation

  • Code of Practice – Collaborative, evolving ruleset developed through multi-stakeholder process (civil society, developers, SMEs, large enterprises, academia)

  • Risk Categories:

    • Existential risks
    • Systemic risks (democratic, freedom-related)
    • Loss-of-control risks (cited as Bengio's research area)
  • Safety Chapters – Specific sections of the code of practice presented as potential template for international adoption

  • Trust Frameworks – Domain-specific control mechanisms (referenced but not technically detailed; the discussion implies these still need to be developed for medicine, customer service, and other sectors)

  • Military AI Use – Identified as requiring international governmental cooperation

  • Misinformation, Cyberbullying, Criminal Use – Concrete systemic risks the code targets


Note: The underlying transcript lacks speaker identification for most participants and does not include formal citations to specific research. The discussion is policy-focused rather than technically deep, reflecting a governance/leadership panel rather than a technical research presentation.