Democratising AI Access: Data, Governance, and Market Design

Executive Summary

This OECD-led panel discussion focuses on the Hiroshima AI Process—an international initiative launched under Japan's 2023 G7 presidency to create voluntary, comparable governance frameworks for advanced AI systems. The speakers highlight how the Hiroshima Reporting Framework serves as a bridge between rapid technological innovation and fragmented global regulation, enabling organizations to demonstrate responsible AI practices while informing policy development. Key emphasis is placed on adapting governance structures for emerging challenges like agentic AI and ensuring the framework remains inclusive for companies and countries across development levels.

Key Takeaways

  1. Voluntary international frameworks can move faster than regulation and may be necessary in the short term, but they work best when designed as persistent, adaptive structures rather than one-time commitments—and they require explicit incentives to maintain participation.

  2. Governance clarity across the entire AI value chain—from model development through deployment—is foundational for trustworthy adoption at scale, particularly for autonomous and agentic systems where traditional developer-only accountability breaks down.

  3. The framework's real value lies in creating shared language and common understanding across siloed teams within organizations and across jurisdictions, enabling conversations between technologists, policy teams, and regulators that were previously difficult.

  4. Emerging markets and smaller companies can benefit significantly from international frameworks if designed with accessibility, multilinguality, and integrated tool catalogs that reduce the burden of discovering and implementing best practices independently.

  5. As AI capabilities outpace regulatory processes, iterative voluntary frameworks informed by ongoing technological change become critical infrastructure for responsible innovation—treating governance as a dynamic conversation rather than a static compliance checklist.

Key Topics Covered

  • Hiroshima AI Process & International Code of Conduct — Development of developer-focused governance standards across the AI system lifecycle
  • Voluntary Reporting Framework (v1 and v2) — International comparable mechanism for organizations to demonstrate AI risk management
  • Governance vs. Rapid Innovation — Tension between slow regulatory processes and fast-moving AI technology evolution
  • Value Chain Alignment — Clarifying responsibilities of developers, deployers, and end-users across the AI ecosystem
  • Agentic AI Implications — Emerging challenges of autonomous agents communicating with each other and operating at scale
  • Global Participation & Incentive Mechanisms — Expanding beyond G7 to include emerging markets; designing benefits for voluntary participation
  • Capacity Building & Literacy — Need for government and public understanding to effectively interpret disclosed AI governance information
  • Accessibility & Multilingual Considerations — Ensuring AI governance frameworks account for global deployment and diverse populations
  • Tools & Metrics Integration — OECD.AI catalog connecting reporting with 700+ trustworthy AI tools and metrics
  • Internal Organizational Change — How governance frameworks drive alignment between technology developers and policy teams within companies

Key Points & Insights

  1. Framework as a Living Document: The Hiroshima Reporting Framework is explicitly designed as iterative and adaptable—v2.0 will increase comparability, integrate tool catalogs, and expand scope beyond developers to deployers (e.g., cloud providers, enterprises) operating across the full AI lifecycle.

  2. Voluntary Commitments Bridge Geopolitical Fragmentation: Given increasing regulatory divergence between the EU, US, and Asia, voluntary international frameworks provide practical alignment where formal regulation is stalled by geopolitical tensions. This is especially valuable for emerging companies operating globally.

  3. Agentic AI Requires New Standards: Agent-to-agent communication, autonomous decision-making, and distributed accountability create governance challenges absent from current frameworks. Responsibility boundaries between model developers, application developers, and deployers must be clarified urgently.

  4. Internal Organizational Alignment Is Critical: Companies report that participation in reporting frameworks surfaces misalignments between product development teams, government affairs functions, and policy teams—creating internal governance maturity that precedes and informs external compliance.

  5. Incentive Mechanisms Are Essential for Sustainability: Voluntary frameworks risk participant attrition without visible benefits (market trust, investor recognition, regulatory clarity). Japan emphasized that positive feedback loops—from government, markets, and investors—must accompany voluntary commitments.

  6. Capacity Building Is Underestimated: Policy makers, regulators, and citizens often lack the literacy needed to meaningfully interpret detailed AI governance disclosures. Simply publishing comprehensive reports without accompanying education limits their policy impact.

  7. Shared Language Reduces Compliance Burden: A common international framework allows companies to manage a single set of expectations rather than multiple jurisdictional requirements, reducing compliance costs and accelerating responsible innovation for smaller firms.

  8. Tool Catalog Integration Democratizes Best Practices: Linking the reporting framework directly to 700+ trustworthy AI tools and metrics helps resource-constrained organizations discover and adopt state-of-the-art governance approaches already validated by peers.

  9. Transparency as Competitive Advantage: Companies investing in governance frameworks and transparent reporting build customer trust and investor confidence, particularly critical for enterprise AI solutions and emerging competitors challenging incumbents.

  10. Developers and Deployers Have Distinct but Interdependent Responsibilities: Neither model developers nor application deployers alone can ensure responsible AI. Frameworks must clarify what each actor must do and provide visibility into the full chain so actors can make informed risk decisions.


Notable Quotes or Statements

"We need capacity building not only for the government but also for AI [organizations] including ordinary user citizens because all different players need to increase the literacy [in] governance." — Yoichi Iida, Japan's Ministry of Internal Affairs and Communications

"It's not just the right thing to do, it's also the smart thing to do for business to have these shared frameworks." — Paula Goldman, Chief Ethical and Humane Use Officer, Salesforce

"Regulation is very important and once it's codified it's hard to change, but frameworks can adapt as the technology evolves." — Paula Goldman, Salesforce

"Security is a shared responsibility between the entities that create technology and the entities that use technology. It's the same here [with AI governance]." — Paula Goldman, Salesforce

"Voluntary commitments can serve the purpose of essentially getting the whole community, in particular across enterprise, to adhere to similar commitments and similar practices, even though regulation may take longer to come into play." — Joelle Pineau, Chief AI Officer, Cohere

"This type of work has to bring all these stakeholders to the table together—the ability to have conversations about what are the trends in the technology, to articulate the capabilities and risks of this technology in a way that speaks beyond just the language of engineering and computer scientists." — Joelle Pineau, Cohere

"[Organizations] who submitted the report can benefit from their voluntary actions, otherwise people will leave the framework. There should be some feedback—trust from the market, trust from the government, trust from the investors." — Yoichi Iida, Japan

"There's just this really big opportunity now to work on seeing the kind of connections across the value chain...and making sure that all the actors in the value chain have information they need to be able to make their own risk decisions." — Amanda Craig, General Manager, Office of Responsible AI, Microsoft


Speakers & Organizations Mentioned

Panelists

  • Yoichi Iida — Minister's Special Policy Envoy, Japan's Ministry of Internal Affairs and Communications (MIC); Chair of G7 Working Group on Hiroshima AI Process
  • Paula Goldman — Chief Ethical and Humane Use Officer, Salesforce
  • Joelle Pineau — Chief AI Officer, Cohere (Quebec, Canada)
  • Amanda Craig — General Manager, Office of Responsible AI, Microsoft

Moderator

  • Karinne Kittredge (implied) — OECD representative

Organizations Referenced

  • OECD (Organisation for Economic Co-operation and Development) — Secretariat for Hiroshima AI Process and Global Partnership on AI
  • Salesforce — CRM and AI solutions provider for enterprises of all sizes
  • Microsoft — Cloud computing, AI models, and enterprise applications
  • Cohere — Enterprise AI solutions (model developer and deployer)
  • Infosys — Technology services firm; first Indian organization to submit Hiroshima report
  • G7 (Group of Seven) — Japan held presidency in 2023; launched Hiroshima AI Process
  • G20 — Previously agreed on AI principles (2019)
  • Brookings Institution — Published recent governance framework insights (referenced at end)
  • Center for Democracy & Technology (CDT) — Co-published framework improvement recommendations

Government & Multilateral Bodies

  • Japan — Lead champion of Hiroshima process
  • Global Partnership on AI (GPAI) — 46+ countries including India, Brazil, Argentina, and African nations
  • EU — Developing regional AI regulatory approach
  • US — Separate regulatory trajectory

Emerging Markets Mentioned

  • India — Participant in G20 AI principles; GPAI member
  • Brazil — GPAI member
  • Indonesia — Referenced as large emerging market for AI adoption
  • Argentina — GPAI member

Technical Concepts & Resources

Governance Frameworks & Processes

  • Hiroshima AI Process — International initiative (2023) establishing voluntary governance standards
  • International Code of Conduct for Advanced AI Systems Developers — Foundational document setting developer expectations across AI system lifecycle
  • Hiroshima Reporting Framework v1 — First voluntary, comparable mechanism for AI risk management accountability; 25 submissions from 9 countries as of discussion date
  • Hiroshima Reporting Framework v2 — Upcoming iteration (launch planned Q2 of discussion year, March pilot) with improvements:
    • More comparable and aggregable data
    • Direct integration with OECD.AI tool catalog
    • Expanded scope beyond developers to deployers (cloud providers, enterprises)
    • Tailored reporting across full AI lifecycle

Tools & Resources

  • OECD.AI Catalog — Platform hosting 700+ tools and metrics for trustworthy AI development and governance
    • URL: oecd.ai/hiroshima
    • Integrated directly into v2 reporting interface
    • Dynamically updated with new tools as organizations report

Technical Challenges Addressed

  • Risk Assessment & Mitigation — Gap between paper frameworks and iterative practice; organization-specific methods vary by technology and use case
  • AI System Lifecycle — Full value chain governance:
    • Model developers → Application developers/deployers → End users/customers
    • Responsibility clarity across each stage
  • Agentic AI — Autonomous agents; agent-to-agent communication; auditability and governance at scale
  • Open Source AI — Tensions between transparency (open sourcing) and comprehensive governance; both required

Concepts & Terminology

  • Voluntary Framework — Non-binding commitments enabling faster iteration than formal regulation
  • Shared Language & Common Understanding — Alignment on terminology, risk definitions, and governance norms across organizations and jurisdictions
  • Governance Maturity — Organizational capacity to implement internal processes (e.g., risk assessment, model governance) that predate and inform regulatory compliance
  • Capacity Building — Education for policy makers, regulators, and public to meaningfully interpret AI governance disclosures
  • Value Chain Accountability — Clarifying which actor (model developer, deployer, end-user) bears responsibility for each type of AI risk
  • Transparency — Disclosing capabilities, risks, data usage, and model properties
  • Trust — Market, investor, and regulatory confidence stemming from demonstrated governance

Regulatory Context

  • EU AI Act — Regional regulatory approach (referenced as one of diverging frameworks)
  • Geopolitical Fragmentation — Divergent regulatory approaches in US, EU, Asia limiting international alignment prospects through formal regulation
  • Regulation Timeline Problem — Formal regulation codification is slower than AI technology evolution; frameworks can adapt faster
  • Global Partnership on AI (GPAI) — Broader multilateral governance body with 46+ member countries
  • OECD AI Principles — Earlier guidance agreed 2019; Hiroshima Process operationalizes these principles
  • G20 AI Principles — Agreed 2019 with India as key partner

Note on Data Quality: The transcript contains repetitive and fragmented sections (likely transcription errors from automatic speech-to-text), particularly in speaker quotations. Core claims and framework details have been extracted with high confidence, but some speaker attribution and exact phrasing may contain minor inaccuracies due to source material quality.