Agentic AI in Focus: Opportunities, Risks, and Governance

Executive Summary

This AI Impact Summit panel discussion examines agentic AI (autonomous AI systems that act independently within defined parameters) through two lenses: business applications and policy implications. Industry leaders present concrete use cases—from chip design to fraud detection to data provisioning—while policy experts build consensus around voluntary, standards-based governance frameworks rather than prescriptive regulation. The central thesis is that agentic AI's benefits depend on robust enterprise guardrails, international standards harmonization, and human oversight mechanisms scaled to autonomy levels.

Key Takeaways

  1. Agentic AI adoption will scale on trust, not just capability. Enterprise guardrails—identity verification, bounded permissions, auditability, security by design—are not obstacles but prerequisites for responsible deployment at scale.

  2. International standards harmonization is critical for Global South inclusion and technology diffusion. Without coordination through the OECD, NIST, and regional bodies, fragmented regulations will create competitive disadvantage and reinforce tech inequality. India, Singapore, and developing nations must have a meaningful voice in standards-setting.

  3. Regulate harms and use cases, not underlying technology. Prescribing rules for "AI models" fails because technology evolves faster than regulation. Defining regulations around financial fraud, autonomous vehicles, or critical infrastructure protection is more durable and risk-aligned.

  4. Multi-agent systems are the next frontier of unknown risk. Single agents can be tested; networks of agents may exhibit unpredictable emergent behaviors. Industry and academia must develop benchmarks and testing frameworks before these systems are deployed widely.

  5. Voluntary consensus-driven governance works better than government mandate in fast-moving tech sectors. NIST's process (RFI, listening sessions, collaborative standard-setting) has proven more effective than regulation-first approaches and maintains faster iteration cycles.

Key Topics Covered

  • Business Use Cases for Agentic AI

    • Chip and product design automation with physics simulation
    • Real-time fraud detection and payment security in financial networks
    • Data preparation and security threat detection in multicloud environments
    • Speed and complexity demands driving agentic adoption (1-year product cycles vs. 3–7 year cycles historically)
  • Enterprise Guardrails & Risk Management

    • Agent verification and identity authentication ("know your agent")
    • Security by design and credential protection
    • Clear consumer/user intent and permission boundaries
    • Auditability, traceability, and accountability mechanisms
    • Data governance as foundational layer (lineage, quality, manipulation prevention)
  • Physical AI Safety vs. Digital AI

    • Distinction between content moderation risks and autonomous system risks (autonomous vehicles, aircraft, nuclear systems)
    • Emphasis on near-100% software verification and validation before deployment
    • Kinetic consequence potential (physical harm, not just information harm)
  • Policy & Governance Frameworks

    • Voluntary consensus-based standards vs. top-down regulation
    • Sector-specific (healthcare, finance, education) vs. monolithic AI regulation
    • Human-in-the-loop/human-on-the-loop continuum correlated with agent autonomy levels
    • Regulating harms/use cases rather than underlying technology
    • International standards harmonization and inclusion of Global South voices
  • Multilateral Coordination Mechanisms

    • OECD as policy foundation-setter
    • NIST standards development process
    • International Network of AI Safety Institutes
    • Singapore International Cyber Week
    • ITU and UN AI for Good
    • G7 Hiroshima AI Process

Key Points & Insights

  1. Agentic AI is not binary—it exists on a continuum depending on autonomy level, memory/context access, long-term planning capability, and real-world action potential. Policy should reflect this spectrum (e.g., human confirmation per step → human monitoring with intervention rights → human-set policy bounds) rather than treating agentic/non-agentic as a categorical distinction.
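
That continuum maps naturally onto an enterprise policy layer. A minimal sketch in Python; the tier names and risk classes are illustrative assumptions, not terms used on the panel:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Illustrative autonomy tiers, lowest to highest."""
    CONFIRM_EACH_STEP = 1   # human-in-the-loop: approve every action
    MONITORED = 2           # human-on-the-loop: act, human may intervene
    POLICY_BOUND = 3        # human-in-command: act within preset policy

def oversight_for(level: Autonomy, action_risk: str) -> str:
    """Map an agent's autonomy tier and an action's risk class to the
    oversight mechanism a deployment policy might require."""
    if level is Autonomy.CONFIRM_EACH_STEP or action_risk == "high":
        return "block until a human explicitly approves"
    if level is Autonomy.MONITORED:
        return "proceed; stream the action to a monitoring queue"
    return "proceed; log for after-the-fact audit"

print(oversight_for(Autonomy.MONITORED, "low"))
print(oversight_for(Autonomy.POLICY_BOUND, "high"))
```

Note the design choice: risk class can override autonomy tier, so even a highly autonomous agent falls back to explicit approval for high-risk actions.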

  2. Physical AI introduces fundamentally different risk profiles than language-based AI. Autonomous systems controlling vehicles, aircraft, or critical infrastructure can produce kinetic (physical) consequences. Software-defined systems create new attack surfaces; cyber-attacks could weaponize critical systems. This requires near-perfect pre-deployment verification and validation—not just accuracy benchmarks.

  3. Data governance is the foundational guardrail for agentic systems. Unlike humans, agents lack empathy and situational awareness; they make decisions based purely on data. Manipulated, unverified, or lineage-opaque data therefore produces systemic errors at scale across agent networks—this is the "blast radius" concern.
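
To make that concrete: before consuming a dataset, an agent runtime can refuse anything whose provenance record no longer matches. A minimal sketch, assuming a hypothetical lineage record keyed by content hash (not any specific product's schema):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class LineageRecord:
    source: str        # where the data originated
    sha256: str        # content hash registered at ingestion time
    verified: bool     # whether the source passed quality checks

def safe_to_consume(payload: bytes, record: LineageRecord) -> bool:
    """Refuse data whose hash no longer matches its lineage record;
    tampered or unverified input would otherwise propagate errors
    across every downstream agent (the 'blast radius' concern)."""
    digest = hashlib.sha256(payload).hexdigest()
    return record.verified and digest == record.sha256

record = LineageRecord("erp-export", hashlib.sha256(b"q3 figures").hexdigest(), True)
assert safe_to_consume(b"q3 figures", record)
assert not safe_to_consume(b"q3 figures (edited)", record)
```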

  4. The industry consensus strongly favors voluntary, industry-led standards over prescriptive regulation. NIST's bottom-up approach (convening industry to identify barriers, then developing standards) is viewed as more effective than top-down mandates. Standards are global, adaptive, and less likely to stifle innovation.

  5. Multi-agent ecosystems introduce novel risks not yet well understood. Single agents can be tested and validated; networks of agents interacting may produce emergent behaviors and new vulnerabilities. Benchmarks and testing frameworks for multi-agent systems don't yet exist at scale.
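
Even a toy composition shows why this is hard to test: behavior that is bounded for each agent in isolation can still compound across a network. A purely illustrative sketch (the agents, the 1% bound, and the market framing are invented for the example):

```python
import random

def cautious_agent(price: float) -> float:
    """Individually 'safe': never moves the price by more than 1%."""
    return price * (1 + random.uniform(-0.01, 0.01))

def simulate(n_agents: int, steps: int, start: float = 100.0) -> float:
    price = start
    for _ in range(steps):
        for _ in range(n_agents):       # each agent reacts to the last update
            price = cautious_agent(price)
    return price

random.seed(7)
# A single agent stays near the start; many interacting agents can drift
# far, even though every individual step respects the 1% bound.
print(f"1 agent:    {simulate(1, 50):.2f}")
print(f"100 agents: {simulate(100, 50):.2f}")
```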

  6. The "know your agent" principle mirrors payment system security. Before an agent acts (especially with financial or physical consequences), it must be verified as legitimate, its permissions must be explicit and bounded, and all actions must be auditable for dispute resolution and regulatory oversight.

  7. Regulatory fragmentation creates compliance burden for global companies. Singapore frameworks, NIST standards, EU AI Act definitions, and national-level regulations must align or converge. The OECD 2019 AI Principles have emerged as a de facto baseline that subsequent regulations reference.

  8. Human accountability cannot be automated away. Agents cannot take accountability or responsibility; only humans and business owners can. Guardrails must preserve clear chains of responsibility and mechanisms for humans to intervene, override, or audit agent decisions.


Notable Quotes or Statements

  • Austin Mayron (NIST): "We take a little bit of humility and say we don't actually know what the problem is until we talk to the people who are closest to the issue... the people who are actually in the field working on innovation, working on adoption. They have a better sense of what the barriers are."

  • Prith Banerjee (Synopsys): "Agentic engineers...are going to complement the job of a human engineer...the human will still be in the loop to make sure that you're not doing drastic sort of bad things right. This is the incredible opportunity."

  • Prith Banerjee (warning on physical AI): "You could imagine a software-defined airplane being used as a missile. Right? So this is how important is... we have to be extra careful about the responsible safe AI that we do for our intelligent product design."

  • Caroline Louveaux (Mastercard): "Autonomy can only scale if there is trust...these four guards [know your agent, security by design, clear consumer intent, traceability/auditability] are not there to slow adoption...they're going to be key to scale adoption in a way that is trusted by design."

  • Jennifer Mulvey (Adobe): "It's not what we can do with technology, it's what we should do. That really does think about what is this going to mean for humans and how can we advance that agenda."

  • Ellie Sakai (Google): "Policy makers...should be thinking about regulating the use or application or the harm that they actualize compared to regulating the underlying technology. Otherwise we end up regulating...the AI models that by the time the regulation goes into effect the AI model has evolved into something that is now agentic."

  • Sam Kaplan (Palo Alto Networks): "These are threats that all of a sudden can have kinetic consequences in real life...as these agents are executing decisions across the financial system...across autonomous systems."

  • Danielle Jie (Salesforce): "Governance is more than regulation. Governance can be regulation, but it's also standards. It's also global norms. It's also risk and quality assurance procedures in companies."


Speakers & Organizations Mentioned

Government & Standards Bodies:

  • Austin Mayron — Acting Director, Center for AI Standards and Innovation (CAISI), U.S. Department of Commerce; Senior Legal Adviser to the Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office
  • NIST (National Institute of Standards and Technology)
  • Prime Minister Modi (India)
  • President Macron (France)
  • Singapore (cited for agentic AI governance framework)
  • OECD (Organisation for Economic Co-operation and Development)
  • ITU (International Telecommunication Union)
  • G7 Hiroshima AI Process

Industry Panelists:

  • Prith Banerjee — CTO & SVP, Synopsys (chip design automation)
  • Caroline Louveaux — Chief Privacy, AI & Data Responsibility Officer, Mastercard
  • Sam Nair — Chief Product Officer, NetApp (multicloud data infrastructure)
  • Jennifer Mulvey — Adobe
  • Ellie Sakai — Public Policy Team, Google (PhD in machine learning)
  • Karly Ramsey — Head of Public Policy, Asia Pacific, Cloudflare
  • Sam Kaplan — Assistant General Counsel, Global Policy, Palo Alto Networks
  • Danielle Jie — Director of Global Public Policy, Salesforce
  • Kambies (last name not fully provided) — Policy/governance perspective, unnamed organization
  • Jason Oxman — Moderator, ITI (Information Technology Industry Council)

Companies/Technologies Referenced:

  • Synopsys, ANSYS, Nvidia, AMD, Broadcom, Qualcomm, NXP, STMicroelectronics
  • Tesla, Tesla Autopilot, FLUENT, LS-DYNA
  • Mastercard, Visa payment networks
  • NetApp, public cloud providers
  • Adobe, Google, Cloudflare, Palo Alto Networks, Salesforce
  • YouTube, Facebook

Technical Concepts & Resources

  • Agentic AI / AI Agents: Autonomous systems capable of perceiving environment, planning, and taking action with varying degrees of independence and long-term goal pursuit.
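
In code terms, that definition is usually a perceive-plan-act loop. A generic skeleton under that framing (toy environment and planner invented for illustration; in practice the planner would be a model call):

```python
class CounterEnv:
    """Toy environment: the 'world' is a counter the agent can increment."""
    def __init__(self) -> None:
        self.value = 0
    def observe(self) -> int:
        return self.value
    def apply(self, action: int) -> None:
        self.value += action

def plan(observation: int, goal: int) -> int | None:
    """Trivial planner standing in for an LLM call: step toward the goal."""
    return None if observation >= goal else 1

def run_agent(env: CounterEnv, goal: int, max_steps: int = 100) -> int:
    """Generic perceive-plan-act loop with a hard step budget."""
    for _ in range(max_steps):      # bounded autonomy: never run unbounded
        obs = env.observe()         # perceive
        action = plan(obs, goal)    # plan
        if action is None:          # goal reached -> stop acting
            break
        env.apply(action)           # act on the environment
    return env.observe()

print(run_agent(CounterEnv(), goal=5))  # -> 5
```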

  • Physical AI vs. Digital AI: Physical AI systems interact with the real world (autonomous vehicles, aircraft, manufacturing) and carry kinetic risk; digital AI operates on data/text only.

  • Software-Defined Systems: Products controlled by software (e.g., software-defined vehicles, aircraft, nuclear systems) that can be updated over-the-air but are vulnerable to cyber-attack.

  • Human-in-the-Loop / Human-on-the-Loop / Human-in-Command: Governance models representing escalating agent autonomy:

    • Human-in-the-loop: Agent requests confirmation for each action
    • Human-on-the-loop: Agent acts; human monitors and can intervene
    • Human-in-command: Agent operates autonomously; human sets policy
  • Multi-Agent Ecosystems: Networks of multiple agents operating and interacting, potentially producing emergent behaviors and novel risks not present in single-agent systems.

  • Verification & Validation (V&V): Pre-deployment testing; in chip design, achieving ~100% digital coverage before hardware prototyping.

  • Standards Organizations & Frameworks:

    • NIST AI Risk Management Framework (AI RMF)
    • ISO/IEC 42001 (AI management systems)
    • OECD AI Principles (2019)
    • EU AI Act
    • Singapore AI Governance Framework
    • International Network of AI Safety Institutes
  • Key RFIs (Requests for Information) & Open Comment Periods:

    • NIST AI agent security RFI (open for ~1 month from talk date)
    • NIST publication on AI identity and verification (open for comment)
  • Sector-Specific Listening Sessions: NIST planning April listening sessions on barriers to adoption in healthcare, education, and finance.

  • PII (Personally Identifiable Information): Critical compliance concern in regulated sectors; benchmarks for PII handling needed to enable adoption.
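
Pending shared benchmarks, a coarse output screen is a common starting point. A deliberately simple sketch (two illustrative regexes; real compliance tooling goes far beyond pattern matching):

```python
import re

# Coarse patterns for two common PII shapes; illustrative only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII spans before an agent's output
    leaves the trust boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com, SSN 123-45-6789."))
```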

  • Attack Surface & Threat Models: Agents introduce new attack vectors; cyber-attack surface expands as agents integrate with critical systems (financial networks, autonomous vehicles).


Note: This transcript captures the planning and opening statements of a multi-panel AI governance summit. It provides policy-level consensus-building on agentic AI governance but does not present new proprietary technical research or detailed attack case studies. The value lies in capturing the industry-government dialogue on standards harmonization and guardrail design.