From Guidelines to Ground: Institutional AI Safety in the Global South

Executive Summary

This panel discussion examines the critical gap between AI safety guidelines and their practical implementation in Global South countries, particularly India. Speakers from policy, finance, academia, and startup ecosystems argue that institutional readiness is lagging behind AI adoption, and that effective governance requires rethinking regulatory design, capacity building, liability frameworks, and incentive structures rather than simply importing regulatory templates from developed nations.

Key Takeaways

  1. Implementation failures stem from design flaws, not just malice: India's fragmented governance structure, capacity gaps, and unclear liability frameworks are systemic problems that require architectural fixes—single interventions won't suffice.

  2. Regulation alone cannot keep pace with AI deployment: Procurement mandates, impact assessments, and built-in technical safeguards must precede full deployment while regulatory frameworks catch up.

  3. One-size-fits-all safety standards will fail in the Global South: Effective frameworks must account for linguistic diversity, economic heterogeneity (startup vs. enterprise), infrastructure differences, and local governance structures.

  4. Incentive alignment is as important as enforcement: Making safety a market advantage through liability reductions, competitive differentiation, and ecosystem benefits will succeed where pure compliance mandates may create resistance or workarounds.

  5. International cooperation on AI safety requires clarity on state behavior and corporate cooperation: UN-level conversations on responsible state conduct in AI (parallel to cybersecurity frameworks) are needed alongside business-government trust-building on areas of cooperation versus contestation.

Key Topics Covered

  • Institutional design gaps in AI governance frameworks, particularly India's fragmented, light-touch regulatory approach
  • Capacity constraints limiting enforcement of AI safety rules (technical, human, GPU resources)
  • Liability and responsibility clarification across the AI value chain (developers, deployers, users)
  • Procurement as a safety lever — using government purchasing power to enforce AI safety standards
  • Incentive structures for responsible AI adoption versus compliance-driven approaches
  • AI safety institutes — design, international cooperation, and information-sharing models
  • Accountability mechanisms — algorithmic impact assessments, auditability, and third-party verification
  • Global South coordination and avoiding imported governance templates that don't fit local contexts
  • Environmental and labor concerns — e-waste, data center sustainability, and informal economy impacts
  • Financial sector challenges — false positives, fraud detection, and trust in AI-driven payment systems

Key Points & Insights

  1. Fragmented institutional design is India's primary weakness: There is no overarching horizontal AI law; instead, governance is split across the IT Act, data protection, sector-specific regulators, and proposed AI safety institutes. Coordination between these bodies remains unclear and is a critical bottleneck.

  2. Capacity is the forgotten constraint: Even if rules are sound, regulators lack the technical capacity (GPU resources, expertise, verification infrastructure) to enforce them. For example, compliance with the 3-day limit in the SGI Rules cannot be verified without adequate testing capacity.

  3. Light-touch enforcement may be insufficient, but heavy-handed regulation risks stifling innovation: India has chosen not to create an "EU AI Office"-style model; instead, it relies on industry standards and self-regulation as a first line of defense. This creates a trust gap without clear accountability.

  4. Liability allocation across the value chain remains unresolved: Who is responsible when harm occurs — the AI developer, the deployer, the procuring government, or the user? This ambiguity prevents both legal recourse and clear incentive alignment.

  5. Procurement is an underutilized but powerful governance tool: Governments can mandate algorithmic impact assessments (as Brazil's São Paulo Metro did) and impose safety requirements before deployment, without waiting for regulation to catch up with deployment speed.

  6. "Least common factor" standards exclude smaller players: Safety standards designed around large organizations often become compliance burdens for startups operating in different socioeconomic contexts, reducing participation from the ecosystem that needs guidance most.

  7. Trust and safety must become competitive advantages, not just compliance costs: Privacy-first companies succeeded by turning privacy into a competitive differentiator. Similarly, AI safety should be incentivized through market mechanisms (liability reductions for good actors, transparency benefits) rather than purely through regulatory enforcement.

  8. AI safety institutes should be information hubs, not enforcement arms: Better design involves removing enforcement authority from safety institutes, creating incentives for proactive vulnerability disclosure (like in cybersecurity), and enabling bilateral knowledge-sharing across country networks.

  9. Accountability requires auditable systems: Before deployment, institutions must be able to answer three questions: (1) What is this AI solving? (2) Who is accountable if it fails? (3) What happens if the model degrades or data shifts? If answers are unclear, deployment is premature.

  10. Global South countries bring essential human-centric perspectives: Institutions in developing countries can surface linguistic exclusion, bias, discrimination, and appropriateness concerns that technical safety evaluation in developed countries may miss—this is a key value of international AI safety networks.


Notable Quotes or Statements

  • Dr. Arjun Goswami: "If you have heavy compliance burdens on market players and inadequate capacity on regulatory enforcement you'll have an issue." — Summarizes the core governance dilemma.

  • Kamesh Shaker: "Standards that we actually are setting for ourselves has to be least common factor... most of the times we keep the bigger organization in mind and then we synthesize it, which may or may not let the smaller players out."

  • Jamila (The Dialogue): "AI safety should be looked at as a capacity problem, not just a regulation problem. Safety accrues from the way governments procure and deploy AI, not just how they regulate it."

  • Arunati Banerjee (Nvidia Inception): "Evaluation always has to be continuous. It's not a one-time deal. And governance cannot be just dictated by terms and conditions because what worked in one geography may fail in another."

  • Dr. Goswami: "We have a blind spot on e-waste generation... it's not just toxic leakage, it's about who's doing it—gig workers, the informal economy—and linking it back to labor codes and better standards."

  • Sadhart (Vidhi Centre): "Safety cannot only be looked at through a technical lens. You need a human-centric point of view and surface issues like linguistic exclusion, bias, and discrimination." — On the value of Global South perspectives in AI safety.


Speakers & Organizations Mentioned

  • Arunati Banerjee — Nvidia Inception Program, VC Alliance (South Asia)
  • Jamila & Kamesh Shaker — The Dialogue (India); manage the Coalition on Responsible Evolution of AI (CORE AI)
  • Dr. Sadhart — Vidhi Centre for AI Law and Regulation; director of endowed research center
  • Dr. Arjun Goswami — Technology policy and legislation expert; discusses India's fragmented AI governance
  • Mr. Aurora — Mastercard; discusses trust, fraud detection, and responsible AI in financial systems (180 billion transactions/year)
  • Juan Carlos (Colombia) — Colombian delegation member; discusses Latin American AI governance and GPAI participation
  • Professor Johan — Not fully identified; contributes on procurement and graded liability
  • The Dialogue — Secretariat for Coalition on Responsible Evolution of AI (CORE AI); 55+ member coalition
  • Nvidia — Inception Program and VC Alliance supporting deep tech startups
  • Mastercard — Financial sector perspective on AI safety in high-volume systems
  • Vidhi Centre for AI Law and Regulation — Research on healthcare AI and parliamentary engagement
  • UNESCO — Referenced guidelines on AI in judicial systems (adopted by Colombia, Dec. 2024)
  • GPAI (Global Partnership on AI) — International coordination mechanism; includes India and Colombia

Technical Concepts & Resources

  • SGI Rules (Synthetically Generated Information Rules) — February 10 regulation on synthetically generated information; cited as an example of a rule with verification capacity gaps
  • Algorithmic Impact Assessments — Mandatory pre-deployment risk evaluation tool (exemplified by São Paulo Metro case)
  • Sandboxing — Restricted deployment environments to assess risk before full rollout; noted as necessary but insufficient alone
  • Graded Liability — Proportional responsibility scaling based on entity size and role (startup vs. enterprise)
  • Third-Party Verification & Audits — Independent audits of AI systems submitted to regulators rather than regulator-conducted audits
  • Guardrails — Infrastructure-aware safety constraints requiring continuous evaluation, not one-time implementation
  • Model Evaluation — Early-stage assessment of frontier models by developed-country AI safety institutes, with information-sharing through global networks
  • Thematic Working Groups — Network-based problem-solving clusters on specific AI safety issues (Bletchley process framework)
  • Algorithmic Registers — Registration systems tracking AI/algorithm deployment and use (referenced from Chinese approach)
  • Procurement mandates — Government purchasing requirements enforcing safety standards
  • Data Infrastructure Gaps — India produces 20% of global data but holds only 3% of the world's data centers and datasets
  • GPU Capacity — Critical bottleneck for verification and testing of compliance with AI safety rules
  • Liability Reduction Schemes — Incentive structures rewarding proactive vulnerability disclosure and good-faith compliance
  • Bias auditing in financial systems — Specific focus on credit scoring, fraud detection, and fair authorization in payment systems

Additional Context

The Bletchley Process — International framework referenced for AI safety coordination; began with the UK's AI Safety Summit at Bletchley Park and subsequent multilateral engagement; feeds into the UN Global Digital Compact and the Independent Scientific Panel on AI Safety.

Colombia's Leadership — First country in the Global South to adopt UNESCO's guidelines for AI in judicial systems (Dec. 2024); an early regional OECD member and GPAI participant; developing renewable energy infrastructure (60% hydro) for potential data center hosting.

Brazil's Precedent — Algorithmic impact assessment model used in São Paulo Metro cited as replicable governance practice; also moving toward overarching AI law.