
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance | India AI Impact Summit 2026

Executive Summary

This panel discussion explores the critical tension between promoting AI innovation and implementing responsible governance across borders. Industry leaders from Amazon, Zoom, Zscaler, and DeepL argue for risk-based, flexible regulatory frameworks that accommodate different use cases rather than prescriptive one-size-fits-all regulations. The consensus emphasizes that while governments must protect citizens, overregulation risks stifling innovation and fragmenting global AI deployment—a concern particularly acute for countries in the Global South.

Key Takeaways

  1. Flexible, risk-based regulation outperforms prescriptive rules — Governance frameworks should differentiate by actual use-case risk, allow for policy evolution as technology develops, and avoid abstract theoretical constructs ("high-risk" categories) that don't translate to practical application.

  2. The global AI economy depends on cross-border data flows and interoperability — Fragmented national regulations don't just slow companies; they deny citizens in conservative markets access to beneficial technology their peers enjoy globally. Harmonization isn't corporate convenience—it's about equitable access.

  3. User education and choice must scale with product complexity — From individual users protecting against prompt injection to enterprises managing agent security, governance is a partnership between platform providers (setting sensible defaults and offering toggles) and users (understanding basic security practices).

  4. Security and trust in AI outcomes require continuous adversarial oversight, not just compliance documentation — Compliance frameworks often become outdated; security requires red-teaming, bias correction, output guardrails, and monitoring of model behavior—particularly as agentic AI matures.

  5. Regulatory humility is essential — Governments and industry should acknowledge what we still don't know about AI deployment at scale, work backward from observable harms rather than theoretical risks, and be willing to revisit early decisions (like Colorado did) when implementation reveals disconnects with reality.

Key Topics Covered

  • Global AI governance alignment — Need for harmonized approaches without excessive fragmentation
  • Risk-based regulation — Tailoring governance intensity to actual use-case risk profiles rather than blanket rules
  • Innovation vs. risk management trade-offs — The sliding scale between enabling technology adoption and protecting users
  • Data sovereignty and cross-border data flows — Essential infrastructure for global AI services but tension with privacy protections
  • Security and trust layering — Integrating cybersecurity across AI systems, particularly for agentic AI
  • Enterprise vs. consumer governance — Different regulatory requirements for different user categories and business models
  • Regional regulatory examples — EU's approach, Colorado's early AI regulation, and India's emerging framework
  • Upstream governance decisions and downstream impacts — How platform providers' governance choices affect customer obligations
  • Data classification and risk differentiation — Why not all data requires identical protection levels
  • Agentic AI governance challenges — New risks emerging as AI agents gain autonomous execution capabilities
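The data-classification point above — that not all data needs identical protection — can be sketched as a tiered policy lookup. The tier names and controls below are hypothetical illustrations, not any panelist's actual framework:

```python
# Hypothetical risk tiers mapping data categories to protection controls;
# illustrates differentiated protection rather than blanket rules.
PROTECTION_TIERS = {
    "critical_ip":     {"encryption": "required", "cross_border": "deny",  "review": "manual"},
    "enterprise_data": {"encryption": "required", "cross_border": "allow", "review": "automated"},
    "consumer_data":   {"encryption": "required", "cross_border": "allow", "review": "none"},
}

def controls_for(category: str) -> dict:
    # Default to the strictest tier when a category is unknown (fail closed).
    return PROTECTION_TIERS.get(category, PROTECTION_TIERS["critical_ip"])

assert controls_for("consumer_data")["cross_border"] == "allow"
assert controls_for("unknown")["review"] == "manual"  # unknown data treated as critical
```

The design choice worth noting is the fail-closed default: unclassified data gets the strictest handling until someone explicitly downgrades it.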

Key Points & Insights

  1. Overregulation kills innovation without improving security — Jay Chaudhry argues that blanket compliance approaches often create outdated protections; by the time regulations are implemented, threat landscapes have evolved. Flexible policies that evolve alongside technology are more effective than static rules.

  2. Not all data is created equal — Risk-based approaches must differentiate between consumer data, enterprise data, and critical assets (e.g., jet engine IP vs. washer/dryer specs). Treating everything identically creates inefficiency without improving actual security outcomes.

  3. Regulatory uncertainty prevents global launches — David Zapolsky (Amazon) describes internal product decisions being delayed or regionalized because regulatory frameworks are unclear. Companies often choose to launch in permissive markets first, delaying benefits in conservative regions.

  4. Colorado and the EU are experiencing "buyer's remorse" with early AI regulation — Both implemented comprehensive AI rules before understanding how to apply them in practice, requiring implementation holds and standards development. This illustrates the danger of regulating before understanding the technology.

  5. Security requires identity verification and autonomous agent governance — As AI agents mature, they will become targets for hacking/hijacking with access to enterprise systems. Zero-trust security architectures must extend to agentic AI with identity, authorization, and monitoring controls.

  6. Platform providers must offer granular user choice with sensible defaults — Aparna Bawa (Zoom) describes implementing controls at multiple levels: enterprise users can disable AI transcription and training; individual users need simple protections (mandatory waiting rooms, passcodes) without overwhelming optionality. The framework must serve both Fortune 500 companies and individual users on free accounts.

  7. Data privacy is table stakes, but insufficient — While privacy and encryption are baseline requirements, trust in AI outcomes requires assurance that model behavior aligns with enterprise expectations. This governance layer becomes more critical with higher-stakes applications (drug R&D documentation vs. email translation).

  8. Different technologies pose different risks — An AI shopping recommendation assistant has vastly different risk implications than an AI tool assisting medical diagnosis or medication prescription. Regulations that don't differentiate by use case prevent beneficial applications.

  9. Global technology adoption requires free cross-border data flows — Zoom, DeepL, Amazon, and Zscaler all depend on unencumbered data movement for core functions. Data localization requirements and cross-border restrictions impede not just companies but the citizens they serve.

  10. India's cautious, principles-based approach is being viewed favorably — David Zapolsky explicitly praises India (alongside Peru and Japan) for proceeding cautiously with clear principles rather than prescriptive rules, positioning it as an exemplar for balanced governance.
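The zero-trust posture for agents described in point 5 — identity, explicit authorization, and monitoring for every agent action — can be sketched minimally in Python. The class and scope names below are illustrative assumptions, not Zscaler's or anyone's actual API:

```python
from dataclasses import dataclass, field

# Illustrative zero-trust gateway for an AI agent: every action is checked
# against the agent's identity and an explicit allow-list (default-deny),
# and every decision is logged for monitoring. All names are hypothetical.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # actions this agent is explicitly authorized to take

@dataclass
class ZeroTrustGateway:
    audit_log: list = field(default_factory=list)

    def authorize(self, agent: AgentIdentity, action: str) -> bool:
        # Default-deny: an action is allowed only if explicitly in scope.
        allowed = action in agent.scopes
        self.audit_log.append((agent.agent_id, action, allowed))
        return allowed

gateway = ZeroTrustGateway()
agent = AgentIdentity("expense-bot", frozenset({"read:invoices"}))
assert gateway.authorize(agent, "read:invoices") is True
assert gateway.authorize(agent, "delete:payroll") is False  # hijacked agent blocked
```

The point of the sketch is the default-deny check plus the audit trail: a hijacked agent can only attempt actions it was already scoped for, and every attempt is visible to monitoring.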


Notable Quotes or Statements

"When you try to secure everything, you secure nothing." — Jay Chaudhry, attributing the insight to a GE CISO; illustrates why risk differentiation is critical.

"Compliance doesn't mean security. By the time compliance rules are out there, cyber needs have moved on." — Jay Chaudhry; argues for adaptive security over static regulatory compliance.

"We live with multiple states' privacy frameworks. Is it great? No. Is it inefficient? Yes. There's something in between." — Aparna Bawa (Zoom); advocates for a common baseline framework that respects sovereignty without requiring absolute alignment.

"Imagine an agent getting hacked or hijacked in your company with access to all kinds of stuff." — Jay Chaudhry; highlights the emerging threat model as autonomous agents proliferate.

"If we have conviction something's good for customers, why just do it in one place?" — David Zapolsky (Amazon); describes internal product launch philosophy, noting regulatory uncertainty delays global deployment.

"The danger in regulating before you really understand the technology is that you create costs, uncertainty, and inhibit innovation." — David Zapolsky; reflects on early LLM-era regulation trends.

"Everything goes back to the user experience. Our customers are not monoliths." — Aparna Bawa (Zoom); advocates for user-centered rather than monolithic governance.

"Privacy and security are just table stakes. The battle is creating a layer of trust into the outcomes of the AI." — Jarek Kutylowski (DeepL); emphasizes assurance and alignment beyond baseline compliance.


Speakers & Organizations Mentioned

| Speaker | Organization | Role |
| --- | --- | --- |
| Jay Chaudhry | Zscaler | CEO |
| Aparna Bawa | Zoom | Chief Operating Officer |
| David Zapolsky | Amazon | Chief Global Affairs & Legal Officer |
| Jarek Kutylowski | DeepL | CEO |
| (Host/Moderator) | India AI Impact Summit 2026 | (Not fully identified in transcript) |

Other institutions/frameworks referenced:

  • EU (regulation and approach)
  • Colorado (early AI regulation attempt)
  • Peru, Japan, India (cautious, principles-based governance approaches)
  • General Electric (security/CISO practices)
  • Federal government of the US (Zscaler certification process)

Technical Concepts & Resources

  • Zero Trust Architecture — Zscaler's cloud-based security model; emphasizes identity verification and authorization over network perimeter defense. Extended to agentic AI contexts.
  • High-Risk vs. Low-Risk Use Cases — Regulatory framework attempting to differentiate impact levels; noted as still lacking practical definition across jurisdictions.
  • Data Classification — Risk-based approach to differentiating enterprise data protection levels by criticality (IP vs. consumer-facing information).
  • Large Language Models (LLMs) — Referenced as the current wave of AI adoption; regulation's timing relative to LLM maturity noted as premature in several jurisdictions.
  • Bedrock — Amazon's enterprise AI service platform providing model choice, guardrails, and customer data isolation.
  • Red-teaming — Adversarial testing of models for bias, toxicity, and safety; mentioned as core responsibility of model builders.
  • Guardrails — Technical controls enabling enterprises to filter model outputs (toxicity, bias, content types).
  • Bias Correction — Upstream governance practice in responsible model building.
  • Agentic AI — Autonomous agents capable of executing tasks on behalf of users, with emerging governance challenges around authorization, hijacking, and cross-border operation.
  • Prompt Injection — Attack in which adversarial instructions embedded in a prompt or in retrieved content manipulate model behavior. The panel's related user-level advice: avoid embedding personal or sensitive data in prompts, since information provided to AI engines can be retained for training (a distinct data-leakage risk).
  • CI/CD (Continuous Integration/Continuous Deployment) — Development workflow that must be maintained alongside security and privacy controls during rapid AI adoption.
  • Privacy by Design — Integration of privacy considerations from product conception rather than retrofit.
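As a rough illustration of the guardrails concept above — enterprise-configured filtering of model outputs — a minimal sketch might look like the following. This is not Bedrock's actual API; the blocklist contents and matching logic are invented for the example:

```python
# Minimal sketch of an output guardrail: model text is screened against
# enterprise-configured categories before reaching the user.
# The terms and substring matching here are illustrative assumptions;
# production guardrails typically use classifiers, not keyword lists.

BLOCKED_TERMS = {"toxic_phrase_example", "internal_codename"}
REFUSAL = "[response withheld by guardrail policy]"

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return REFUSAL
    return model_output

assert apply_guardrail("Here is your summary.") == "Here is your summary."
assert apply_guardrail("Leaking INTERNAL_CODENAME now") == REFUSAL
```

The structural point matches the panel's framing: the filter sits downstream of the model and is configured by the enterprise, so output policy can be tightened without retraining or swapping the model itself.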

Contextual Notes

  • The summit's location in India is highlighted as significant—first global AI governance discussion held in the Global South, suggesting effort to include developing nation perspectives.
  • The panel deliberately includes both upstream platform providers (Amazon, Zoom, DeepL) and security specialists (Zscaler), reflecting different governance pressures.
  • Zoom's pandemic pivot from enterprise to consumer platform illustrates real-world governance challenges: enterprise customers (with IT/compliance teams) vs. public schools (without IT infrastructure) require fundamentally different feature toggles and defaults.
  • The discussion implicitly acknowledges post-Colorado regulatory learning: early comprehensive AI rules (Colorado, EU) are being reconsidered as implementation reveals impracticality, suggesting future frameworks will be more adaptive.