The Rise of AI Agents | Ensuring Safety & Inclusion in the Global South

Executive Summary

This panel discussion explores AI agent safety, regulatory frameworks, and inclusive development in the Global South, with a particular focus on India. The panelists emphasize that AI governance must move beyond principles to practical implementation, balance sectoral regulation with cross-cutting safety standards, and ensure that technology benefits underrepresented communities rather than amplifying existing inequalities.

Key Takeaways

  1. One Framework Doesn't Fit All: The EU's prescriptive regulation suits coordination across its member states; India's principles-based approach fits its own institutional culture. Effective AI governance requires jurisdictional fit rather than global uniformity, paired with cross-country learning mechanisms.

  2. Education and Healthcare Need Immediate Attention: Education has negligible safeguards despite high deployment enthusiasm, while healthcare shows the highest variability and systemic risk. Both sectors demand outcome-focused, context-aware deployment strategies before AI agent use expands.

  3. Accountability Starts at the Top: Decision-makers and C-suite executives must own AI governance from the start, not engineers or technology providers alone. Governance frameworks, standards adherence, and compliance oracles must be embedded at the requirement-definition stage, not bolted on after deployment.

  4. The Global South Has an Unexpected Advantage: Women's participation in STEM is growing organically, and faster in India, Saudi Arabia, and LATAM than in many advanced economies. This represents an opportunity for more inclusive, representative AI development, not a lagging challenge.

  5. Phased Rollout with Built-in Learning Wins: India's DPI success (financial infrastructure → services → future layers) shows that strategic sequencing, institutional readiness, and human capital development beat rush-to-market approaches. Apply lessons from BFSI to healthcare, education, and public services progressively.

Key Topics Covered

  • AI Safety in Agentic Systems: Lifecycle monitoring, emergent behavior safeguards, ethical autonomy, recursive governance, regional adaptive governance, and human oversight
  • Sectoral Safety Maturity Gaps: BFSI (high compliance), Healthcare (highest variability/risk), Education (minimal safeguards), Retail, and Public Services (context blindness)
  • Regulatory Approaches: Comparison between EU's prescriptive regulation vs. India's principles-based framework; context-dependent governance strategies
  • Accountability Frameworks: Layered responsibility across developers, providers, deployers, and international standards bodies
  • Inclusive AI Development: Gender representation in AI, language/cultural preservation, representativeness in training data
  • Global South Collaboration: Regional flavor within centralized guidelines; sovereignty concerns; cross-country knowledge exchange
  • Sectoral Prioritization: Strategic phased rollout (e.g., DPI in financial services as foundation)
  • Women's Participation in STEM: Trends in India, Saudi Arabia, and LATAM; organic growth in the Middle East/North Africa
  • Governance Models: Dynamic compliance oracles, federated watcher agents, constitutional AI perspectives
  • Public Sector AI Deployment: Challenges in institutional knowledge, framework readiness, and systemic risk

Key Points & Insights

  1. Sectoral Maturity Varies Dramatically: BFSI has near-100% human oversight due to RBI frameworks; Education is a "sleeping giant" with almost no safeguards against deviant behavior; Healthcare shows high variability and lacks escalation protocols and automated safeguards; Public services deploy urban AI blindly into rural contexts.

  2. Accountability Must Be Layered, Not Singular: Responsibility cannot rest on developers alone—it requires involvement of decision-makers (C-suite), deployers, providers, and international standards bodies. Legal personality for AI systems should be rejected to prevent responsibility dilution.

  3. Principles Without Practice Are Insufficient: UNESCO, OECD, and national guidelines (transparency, accountability, proportionality, rule of law) are well-intentioned but must translate into concrete implementation mechanisms. Standards like ISO 42001 and AI impact assessments are critical bridges.

  4. Outcome-Based, Not Technology-First Approach: Regulators should focus on desired outcomes (student learning, healthcare safety) rather than mandating technology adoption. Previous tech rollouts (tablets in education) failed because pedagogy, teacher training, and cultural content were ignored.

  5. Data Representation Gaps Drive Inequality: Underrepresented communities in training data risk being further marginalized. Women comprise only 20% of major AI development workforces globally. India shows higher female engineering participation than Belgium—a potential advantage for the Global South.

  6. Sovereignty and Cultural Preservation Are Non-Negotiable: Agents must understand local context and respect sovereign decision-making. Language, religious values, and cultural norms cannot be treated as secondary concerns in product design.

  7. Hybrid Regulatory Model Most Feasible: Define minimum common cross-sectoral requirements (shared risk definitions, assessment frameworks) while allowing sector-specific mandates (clinical safety for healthcare, age-appropriateness for education). This avoids one-size-fits-all approaches.

  8. India's DPI Model Is a Strategic Template: India's successful rollout of digital financial infrastructure—handling 100M+ transactions monthly—demonstrates how to build human capital, institutional capacity, and governance frameworks before advancing. This phased, outcome-focused approach is replicable globally.

  9. Three-Pronged Agentic AI Framework Proposed: (1) Dynamic compliance oracles that sync evolving rules into execution engines; (2) Federated watcher agents monitoring for drift/violations; (3) Constitutional AI ensuring that values of secularism, equity, and justice are upheld even where testing and standards fall short.

  10. Overreliance on Agents Creates New Problems: As with prior technologies, solving one problem with agents may create others. Continuous measurement of actual benefits vs. harms is essential; banning or pivoting technologies based on outcomes is legitimate.
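The compliance-oracle and watcher-agent prongs of the proposed framework can be sketched in code. The panel did not describe an implementation, so everything below — the class names, the rule predicates, the 100-action window, and the 20% drift threshold — is an illustrative assumption, not the speakers' design:

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceOracle:
    """Dynamic compliance oracle: a rule store that can be re-synced as
    regulations evolve, queried by the execution engine before each action."""
    rules: dict = field(default_factory=dict)  # rule_id -> predicate(action) -> bool

    def sync(self, rule_id, predicate):
        """Load or update a rule without redeploying the agent."""
        self.rules[rule_id] = predicate

    def violations(self, action):
        """Return the ids of rules the proposed action would break."""
        return [rid for rid, ok in self.rules.items() if not ok(action)]

@dataclass
class WatcherAgent:
    """Federated watcher: observes executed actions and flags drift,
    i.e. a rising rate of attempted violations over a sliding window."""
    oracle: ComplianceOracle
    window: list = field(default_factory=list)

    def observe(self, action):
        flagged = bool(self.oracle.violations(action))
        self.window.append(flagged)
        self.window = self.window[-100:]  # keep only the last 100 observations
        return flagged

    def drift_alert(self, threshold=0.2):
        """Alert when more than 20% of recent actions violated some rule."""
        return bool(self.window) and sum(self.window) / len(self.window) > threshold

# Usage with one hypothetical BFSI rule: agents may not auto-approve large loans.
oracle = ComplianceOracle()
oracle.sync("loan_cap",
            lambda a: not (a["type"] == "approve_loan" and a["amount"] > 100_000))

watcher = WatcherAgent(oracle)
watcher.observe({"type": "approve_loan", "amount": 50_000})   # compliant
watcher.observe({"type": "approve_loan", "amount": 250_000})  # flagged
print(watcher.drift_alert())  # True (1 of 2 recent actions violated a rule)
```

The design choice this sketch highlights is the separation of concerns the panel described: rules live in the oracle and can be updated as regulation evolves, while the watcher stays generic and only measures violation rates.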


Notable Quotes or Statements

  • Gabriela Ramos (UNESCO): "We don't need AI for everything...It's about the rule of law...We need to find ways to build not only the most important principle but also bring it to very concrete ways of doing it."

  • On Legal Personality of AI: "We banned the notion that you can give legal personality to AI developments...then you will be delinking the responsibility and the outcomes and there should always be human determination."

  • Abdul Raman (Saudi/MENA perspective): "Sovereignty is very important and we cannot just deal with it especially with agentic now without thinking of the consequences of not having the agents in your country...if it's not localized and in some cases it needs to be also sovereign."

  • On Education Complexity: "Education is the most complicated setting to deploy these things not because of the technologies but because of the organizational changes that need to be made in these systems."

  • On Outcome Focus: "Ultimately we are interested in the outcomes and so that is what we judge the system by. But the thought process around compliance, legality, and ethics should start at the very beginning of the process when defining requirements."

  • On Overreliance: "Over reliance is a big issue...we want this to improve our efficiency, solve some problems, [but then] we have new problems and then we need to solve them some other way maybe not with that technology."


Speakers & Organizations Mentioned

  • Gabriela Ramos – UNESCO (ethics of AI recommendations, former OECD)
  • Dr. Krishna Shri – Amitita University (South India), presenter of AI Safety Report
  • Dr. Shivarama Krishna – Organizer, AI Safety Summit follow-up convener
  • Abdul Raman – Saudi Arabia/MENA region (ethical AI guidance initiatives)
  • Dr. [Name incomplete in transcript] – Healthcare/sectoral regulation expert
  • Dr. Aik – DPI architect (Digital Public Infrastructure), India
  • Ganesh Bharat Jan – Healthcare applications researcher (mentioned but absent)
  • UNESCO – Standards-setting body for AI ethics
  • OECD – AI principles developer (2019)
  • Ministry of IT / Department of Telecom – India government bodies supporting AI mission
  • RBI (Reserve Bank of India) – Financial services regulator with AI framework
  • Council of Europe – Developing AI convention and impact assessment frameworks
  • NASCAM – Research partner on AI sovereignty report
  • Mozilla Foundation – Cited on bottom-up AI assessment approaches
  • Amitita University – Hosted event, research on safety across BFSI, healthcare, education, defense, retail, public services

Technical Concepts & Resources

  • ISO 42001 – AI Management System standard; provides lifecycle perspective on governance
  • AI Impact Assessment Frameworks – Practical tools for validating compliance and outcomes (Council of Europe convention-related)
  • IEEE Standards on AI Ethics – Referenced as emerging governance infrastructure
  • DPI (Digital Public Infrastructure) – India's financial/digital backbone; model for phased, outcome-focused AI integration
  • RBI's AI Framework – 100% human oversight mandate in BFSI transactions
  • "Stop and Think" / "Stop and Ask" Mechanisms – Proposed regulatory approaches to constrain agent autonomy
  • Federated Watcher Agents – Monitoring systems for drift detection and violation alerting
  • Dynamic Compliance Oracles – Engines that synchronize evolving regulations with AI execution
  • Constitutional AI – Approach ensuring protection of secularism, equity, justice in agent behavior
  • Six-Pillar Safety Framework – Lifecycle monitoring, emergent behavior safeguards, ethical autonomy, recursive governance, regional adaptive governance, and human oversight
  • Bias Detection & Truthfulness Tracking Systems – India AI mission-backed system for testing India-specific application behavior
  • AI Sovereignty Report – Forthcoming research from Amitita University + NASCAM on Global South AI governance (release date: Friday, 3:00 PM at Bharat Mandapam)
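The "Stop and Ask" mechanism listed above was proposed only at the level of principle. A minimal sketch of how such a gate might work is shown below; the function signature, risk threshold, and reviewer callback are all illustrative assumptions rather than anything specified by the panelists:

```python
def stop_and_ask(action, risk_score, ask_human, threshold=0.7):
    """'Stop and ask' gate on agent autonomy: below the risk threshold the
    agent proceeds on its own; at or above it, execution pauses until a
    human reviewer approves (ask_human returns True) or rejects the action."""
    if risk_score < threshold:
        return "proceed"
    return "proceed" if ask_human(action) else "blocked"

# Usage with a stubbed reviewer that rejects the high-risk action.
decision = stop_and_ask({"type": "prescribe_medication"},
                        risk_score=0.9,
                        ask_human=lambda action: False)
print(decision)  # blocked
```

In a real deployment the risk score would come from a sector-specific assessment (clinical safety in healthcare, age-appropriateness in education), and the human step would be an escalation workflow rather than a callback.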

Document Status: Transcript quality is variable with repetition artifacts and incomplete speaker attributions, but key substantive arguments are preserved. Some speaker names remain unidentified due to transcript gaps.