From Policy to Practice: Governing AI for Global Impact
Executive Summary
This panel discussion examined the critical gap between AI governance intentions and their practical implementation across jurisdictions, featuring perspectives from academic researchers, policy organizations, technology companies, and service deployers. A consensus emerged that effective AI governance requires multi-stakeholder collaboration, clear baseline standards, and a shift from viewing compliance as a check-box exercise to treating it as a strategic capability for sustainable growth.
Key Takeaways
- Governance must precede release, not follow it: For open-weight models, staged releases with researcher-only access and rigorous pre-deployment testing are essential; post-release governance is ineffective.
- Clear baselines and standards prevent a race to the bottom: Industry consensus on governance expectations (whether through law, standards, or best practices) levels the playing field and prevents competitive pressure from forcing corner-cutting.
- Shared responsibility means shared accountability, not responsibility-shifting: Effective governance requires every stakeholder (regulators, companies, deployers, users) to take ownership rather than attempting to defer obligations to others.
- Organizational governance is evolving rapidly but remains immature: Companies are moving beyond checkbox compliance to strategic governance structures, but role clarity, cross-functional integration, and resource allocation remain significant challenges.
- India's implementation of its data protection law and development of governance tools could set global standards: With strong regulatory clarity and accessible compliance infrastructure, India can lead in demonstrating effective, scalable AI governance.
Key Topics Covered
- Open-weight model governance: Release strategies, safeguards, and risk assessment methodologies
- Regulatory fragmentation: Balancing global standards with local compliance requirements
- Organizational governance infrastructure: Governance structures within enterprises and their evolution
- AI safety frameworks: Risk management approaches, model evaluations, and safety principles
- Deployer responsibility: The critical role of service providers in governance implementation
- Emerging risks: AI agents, multimodal systems, spatial intelligence, and bystander privacy
- Standardization and benchmarking: Need for industry-wide metrics and governance tools
- Data protection and transparency: Implementation challenges in different jurisdictions
- Shared liability and responsibility: Distribution of governance accountability across stakeholders
- Child safety and legal compliance: Platform obligations regarding minors and harmful content prevention
Key Points & Insights
- The "genie in the bottle" problem: Once open-weight models are released, governance becomes nearly impossible; the focus must shift to pre-release evaluation and staged rollouts rather than post-release controls.
- Governance requires measuring model capabilities first: Understanding what a model can do translates directly into understanding its risks; the same capability that detects software vulnerabilities can also exploit them (dual-use potential).
- Organizational chaos is the current state: Even large organizations struggle with governance infrastructure; unclear role definitions between privacy officers, legal teams, AI governance leads, and business units create friction and redundant compliance efforts.
- The "evaluation gap" undermines current practices: Models that pass laboratory benchmarks often fail in real-world deployment (for example, a medical model that passed exams but gave misleading answers 19% of the time in practice); a minimal sketch of this gap follows this list.
- Competitive pressures drive a "race to the bottom": Without clear baseline standards and consensus rules, companies are incentivized to cut corners; competitors who move faster gain an advantage, forcing others to follow and creating systemic risk (historical parallels: adtech, mobile location data).
- Governance is not a choice between localization and globalization: Companies must apply universal foundational principles (safety, privacy, bias avoidance, accountability) while adapting implementation to local context (e.g., India's high multimodal usage requires different safety considerations for images, video, and audio).
- Deployers have critical governance skin in the game: Service providers and implementation partners cannot hide behind "client requests"; they must proactively push back on risky approaches and educate clients about long-term liability exposure.
- Transparency barriers are commercial, not technical: Many companies could implement transparency around training data and model details but choose not to because of trade secret concerns; without this information, regulators and the public cannot assess safety.
- Spatial intelligence and real-world scraping create new bystander privacy issues: The next frontier (robots, self-driving vehicles, physical-world AI) depends on scraping real-world data, including people's homes and faces, without explicit consent; existing privacy frameworks are inadequate at this scale.
- Governance infrastructure tools are grossly inadequate: Most organizations lack practical, affordable tools for compliance assessment; platforms like Credto AI are starting to fill a critical gap, and accessible "one-size-fits-all" governance platforms are necessary for startups and smaller entities.
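To make the "evaluation gap" concrete, here is a minimal Python sketch of the measurement involved. All names and numbers are invented for illustration (chosen to echo the 19% figure above); the panel presented no code.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    correct: bool    # the model's answer was acceptable
    exam_like: bool  # the query resembles curated benchmark items

def error_rate(interactions: list[Interaction]) -> float:
    """Fraction of interactions the model got wrong."""
    return 1 - sum(i.correct for i in interactions) / len(interactions)

# Hypothetical benchmark set: 100 curated exam-style questions, 5 wrong.
benchmark = [Interaction(correct=(i >= 5), exam_like=True) for i in range(100)]

# Hypothetical deployment traffic: 60 exam-like queries (still ~5% error)
# plus 40 messier, out-of-distribution queries (40% error). The blended
# error rate is what users actually experience.
deployment = (
    [Interaction(correct=(i >= 3), exam_like=True) for i in range(60)]
    + [Interaction(correct=(i >= 16), exam_like=False) for i in range(40)]
)

print(f"benchmark error:  {error_rate(benchmark):.0%}")   # 5%
print(f"deployment error: {error_rate(deployment):.0%}")  # 19%
```

The point of the sketch: a single benchmark number hides the query mix, so governance evaluations need deployment-representative test sets, not just laboratory ones.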
Notable Quotes or Statements
- Karina Prunski (Oxford): "Once [open-weight models] are out, it's really difficult to take them back or modify them... the best thing to do is to try to govern the model before it is released."
- Jules Polonetsky (Future of Privacy Forum): "It's urgent that we have clear, understandable baselines, otherwise the competitive pressures over time end up forcing a bit of a rush to the bottom."
- Ivana (Wipro): "Governance is not a tick-the-box exercise of compliance. It's actually part of our strategy and capacity to generate growth and to offer our clients sustainable long-term value."
- Ashish Sharma (NASSCOM, moderator): "Sometimes intent itself is under pressure. There is competition, there is engagement, there is time to deployment. Those are the areas where governance comes under pressure."
- Gail Kent (Google): "We're not going to achieve a product that has the level of success that search has without thinking about both the local and the global... governance [is] absolutely critical."
Speakers & Organizations Mentioned
Speakers:
- Ashish Sharma – Head of Public Policy, NASSCOM (moderator)
- Karina Prunski – Senior Research Fellow, Oxford University; lead author of the AI Safety Report
- Jules Polonetsky – CEO, Future of Privacy Forum (representing 200+ companies; focus on privacy and AI governance)
- Gail Kent – Director of Global Affairs and Public Policy, Google
- Ivana – Global Privacy and AI Governance Officer, Wipro
- Vifredo – Representative, xAI (Elon Musk's AI company)
Organizations:
- NASSCOM (Indian IT industry association)
- Future of Privacy Forum (FPF)
- Oxford University (AI safety research)
- Google (search, Gemini, cloud products)
- Wipro (IT services and deployment)
- xAI (frontier AI lab; Grok model)
- Italian Data Protection Regulator (first to formally challenge OpenAI)
Technical Concepts & Resources
Models & Technologies:
- Grok (xAI's LLM) – with published model cards detailing benchmarks, evaluations, and risk assessments
- Gemini (Google's multimodal model) – deployed globally with localized safety considerations
- Open-weight models – publicly released model weights (distinct from closed proprietary models)
- Agentic AI / AI agents – autonomous systems taking multi-step actions (identified as frontier governance challenge)
- Spatial intelligence – AI systems trained on physical-world data; emerging risk area
Governance & Safety Frameworks:
- AI Safety Report (173-page Oxford report, 20-page policy summary) – comprehensive assessment of open-weight model risks
- Model cards – standardized documentation of model training, benchmarks, capabilities, and limitations (a schematic sketch follows this list)
- Frontier AI Framework (xAI) – risk management framework covering dual-use, child safety, self-harm, political bias, etc.
- Risk-based approach – governance prioritized by severity of potential harms
- Red teaming – adversarial testing to identify vulnerabilities
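As a rough illustration of what a model card records, here is a minimal Python sketch; the schema and every field value are assumptions for illustration, not the format of any published card (including Grok's).

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, hypothetical model card schema."""
    name: str
    version: str
    training_data_summary: str  # coarse-grained data provenance
    benchmarks: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    risk_assessments: dict[str, str] = field(default_factory=dict)

card = ModelCard(
    name="example-model",  # hypothetical model, for illustration only
    version="1.0",
    training_data_summary="Public web text plus licensed corpora.",
    benchmarks={"general-knowledge-eval": 0.82, "safety-eval": 0.91},
    known_limitations=["hallucinated citations", "weak low-resource-language support"],
    risk_assessments={"dual-use": "reviewed", "child safety": "red-teamed"},
)
```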
Governance Tools & Practices:
- Impact assessments – evaluating model/system effects; currently duplicated across compliance frameworks (privacy, liability, regulatory)
- Ecosystem monitoring – tracking model provenance, fine-tuned variants, and downstream modifications
- Staged release strategy – limiting initial access to researchers before broader deployment (a minimal gating sketch follows this list)
- Adversarial fine-tuning – testing how easily safeguards can be removed
- Benchmarking for governance – quantitative metrics for privacy, safety, bias (identified as critical need)
- Credto AI – newly launched platform providing accessible governance/compliance guidance
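The staged-release strategy noted above can be sketched as a simple gate: access widens one tier at a time, and only when pre-deployment evaluations pass. The tier names and gating rule below are hypothetical illustrations of the pattern, not any lab's actual pipeline.

```python
from enum import Enum, auto

class AccessTier(Enum):
    """Progressively wider audiences for a new model release."""
    INTERNAL = auto()            # lab staff and internal red teams only
    VETTED_RESEARCHERS = auto()  # external researchers under agreement
    GATED_API = auto()           # monitored API access for the public
    OPEN_WEIGHTS = auto()        # weights published; effectively irreversible

def may_advance(current: AccessTier, evals_passed: bool, open_incidents: int) -> bool:
    """Advance one tier only if pre-deployment evals passed and no
    incidents remain open. Open-weight release cannot be rolled back
    (the "genie in the bottle" problem), so it is the final gate."""
    if not evals_passed or open_incidents > 0:
        return False
    return current is not AccessTier.OPEN_WEIGHTS  # nothing comes after full release

# Example: a model vetted at the researcher tier may move to a gated API.
print(may_advance(AccessTier.VETTED_RESEARCHERS, evals_passed=True, open_incidents=0))  # True
```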
Data & Regulation:
- EU AI Act – European regulatory framework; creates ambiguities that companies cannot resolve without regulatory clarification
- India's Digital Personal Data Protection (DPDP) Act – implementation pending; critical for governance clarity
- California laws (2024) – prompted xAI's "Frontier AI Framework" renaming
- Anonymization/deidentification standards – vary by jurisdiction; critical gap for spatial intelligence scraping
Key Technical Challenges:
- Evaluation gap – divergence between laboratory benchmark results and real-world model performance
- Dual-use potential – same capabilities enable both beneficial and harmful use cases
- Safeguard removal – ease of fine-tuning or removing built-in safety constraints
- Cascading hallucinations – errors in agentic systems that compound across multiple steps (a toy calculation follows below)
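A toy calculation shows why errors cascade in agentic systems: if each step is independently correct with probability p, an n-step run completes cleanly with probability p^n. The per-step accuracy below is an assumed figure for illustration.

```python
# Toy model: per-step reliability compounds multiplicatively across an agentic run.
def run_success_prob(p_step: float, n_steps: int) -> float:
    """Probability an n-step run has no errors, assuming independent steps."""
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at 95% per-step accuracy -> {run_success_prob(0.95, n):.0%} clean runs")
# 1 -> 95%, 5 -> 77%, 10 -> 60%, 20 -> 36%
```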
Document metadata: AI Summit panel discussion; 2025; moderator Ashish Sharma (NASSCOM); runtime approximately 45 minutes
