Beyond Guardrails | Adaptive AI Governance in the Global South
Executive Summary
This panel discussion explores how the Global South must develop adaptive, contextually relevant AI governance frameworks rather than simply importing regulatory models from the Global North. The central argument is that static, linear regulatory approaches cannot keep pace with rapidly evolving AI systems, and that governance must remain human-centric, constitutionally anchored, and rooted in local values—particularly inclusion, democratization of technology, equity, and access to justice. India is positioned as a pioneer in leading this conversation at the G20 and through digital public infrastructure initiatives.
Key Takeaways
- Adaptive governance is not a future concept—it is an immediate necessity. Static regulatory models will fail to govern rapidly evolving AI systems. The Global South must develop frameworks flexible enough to evolve with technology while maintaining ethical anchoring.
- Transparency and human oversight are non-negotiable. Whether in courts or clinics, AI must remain a tool that enhances human decision-making, never replaces it. Systems must be transparent ("glass boxes") and accountable to the constitutive values and ethics of their communities.
- Data bias and geographic misalignment cause real harm. AI models trained on one population cannot be mechanically applied elsewhere without validation, localization, and evidence of effectiveness. This is a critical safety and equity issue, not merely a technical refinement.
- The Global South must cooperate to avoid dependency and shape the AI agenda itself. Rather than adopting external governance models or importing unvalidated technologies, the Global South must build capacity, conduct regional research, and establish partnerships grounded in shared values like inclusion and democratization.
- Institutional capacity and infrastructure are as important as policy. Regulatory frameworks on paper mean nothing without internet connectivity, trained judges and healthcare workers, post-market monitoring systems, and sustained public trust-building.
Key Topics Covered
- Adaptive vs. Static Governance Models: The mismatch between fast-evolving AI systems and traditional regulatory frameworks
- AI in Judicial Systems: Ensuring transparency and human decision-making in criminal sentencing and legal proceedings
- Digital Health & Medical Devices: Challenges of implementing AI-driven healthcare tools across diverse global contexts
- Data Bias & Localization: Geographic bias in training datasets and the imperative to contextualize AI solutions for specific populations
- Global South Cooperation: Why pooling resources and expertise among developing nations is essential for shaping AI governance
- Infrastructure & Capacity Building: Internet connectivity, digital literacy, and regulatory capacity as foundational prerequisites
- Publishing & Research Equity: Creating platforms for regional, population-specific research that may have smaller sample sizes but critical local relevance
- Ethical Guardrails: Embedding values like inclusion, transparency, and human oversight into AI system design
- Technology Assessment Frameworks: Moving from device-centric to contextualized AI health technology assessment
- Real-World Evidence & Post-Market Monitoring: Long-term longitudinal follow-up and validation of AI solutions in deployed settings
Key Points & Insights
- The Governance Paradox: AI systems are dynamic and rapidly evolving, but regulatory processes remain static and linear. Existing guardrails are insufficient without adaptive frameworks that can evolve alongside technology.
- Human Decision-Making Must Remain Central: In high-stakes domains like criminal justice, final decisions must remain with human adjudicators (judges). AI serves as a tool for organizing and processing information, but cannot replace the empathy, contextual understanding, and consideration of human sentiments (rasa in Indian philosophy) essential to just outcomes.
- Transparency as Prerequisite: "Black box" AI systems must become "glass boxes"—transparent to all stakeholders, especially professionals responsible for accountability and constitution-building.
- The Data Problem: AI models trained on geographically biased datasets will fail when applied to different populations, risking misdiagnosis, mistreatment, and harm. This is not merely an accuracy problem—it is a human safety and equity crisis.
- Technology as Student, Humans as Teachers: Poor prompt engineering, inadequate training data, or flawed system design will produce poor outputs. "There is no bad student, only a bad teacher"—responsibility for quality lies with system designers and operators.
- The Global South Must Set Its Own Agenda: Rather than adopting frameworks designed for the Global North, the Global South must cooperate to shape governance models rooted in its own values: inclusion, democratization, peace, tolerance, and people-centered development.
- Infrastructure & Capacity are Foundational: High-quality AI deployment requires reliable internet, digital literacy, regulatory capacity within institutions, and ongoing capability-building—not just policy documents.
- Localization Requires Multiple Pillars: Education, infrastructure, regulatory capacity, and needs assessment must be built simultaneously. One-size-fits-all solutions and imported devices without local contextualization guarantee failure.
- Evidence Gaps in the Global South Context: Current research often tests non-locally-designed devices on Global South populations with short time horizons and no follow-up. Evidence must address felt clinical needs, longitudinal outcomes, and sustained monitoring post-deployment.
- Institutional Oversight & Public Trust: The future roadmap requires ethical guardrails, transparent implementation, institutional oversight, capacity building within judiciaries and health systems, and—critically—continuous public trust and engagement.
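The data-bias point above is, in practice, a validation requirement. As a minimal illustrative sketch (not a method the panel prescribed), a deployment gate can compare a model's accuracy on its source population against a local validation cohort and flag deployment when the gap is too large. All function names and the threshold here are hypothetical:

```python
# Illustrative sketch: a pre-deployment localization check.
# Names and the 5% threshold are hypothetical, not from the panel.

def accuracy(predictions, labels):
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def localization_check(source_preds, source_labels,
                       local_preds, local_labels,
                       max_gap=0.05):
    """Return (passes, gap): passes is False when local accuracy
    trails source-population accuracy by more than max_gap."""
    gap = accuracy(source_preds, source_labels) - accuracy(local_preds, local_labels)
    return gap <= max_gap, gap

# A model that looks strong on its source population but degrades
# on the local cohort fails the check.
passes, gap = localization_check(
    source_preds=[1, 0, 1, 1, 0, 1, 0, 1, 1, 1],
    source_labels=[1, 0, 1, 1, 0, 1, 0, 1, 1, 1],   # 100% on source
    local_preds=[1, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    local_labels=[1, 0, 1, 0, 0, 1, 1, 1, 0, 1],    # 50% locally
)
print(passes, round(gap, 2))   # prints: False 0.5
```

A real gate would compare clinically meaningful metrics (sensitivity, specificity, calibration) per subgroup, but the shape of the check—validate locally before deploying—is the point the panel stresses.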
Notable Quotes or Statements
- "The final decision-making definitely remains with the adjudicators. What is required of us as law professionals, since we are all dedicated to constitutional morality, is to ensure that this black-box system, whatever we use, remains a glass box for all of us—transparent." (Judicial panelist on criminal sentencing and AI)
- "AI is a two-horse race, and what are we going to do? Are we going to sit as witnesses to whatever is unfolding before us? Therefore, conferences like this bring together governance people, policy makers, researchers, and academicians to realize that we cannot be left behind. We have to set the agenda." (Dr. Vidushi Shaki, on Global South cooperation)
- "There's no such thing as a bad student, only a bad teacher. If you give a bad prompt or instruction to that AI, chances are you'll have bad results. The student will have a bad command." (Dr. Ricardo, on AI system design responsibility)
- "The main shift is: before, it was humans who were defining the logic; now it is the data which is defining the logic and the treatment." (On how AI inverts the traditional knowledge hierarchy in healthcare)
- "We cannot afford to leave anybody behind. Democratization is very important to us... We think of people, people, people, and 'people and planet' is one of the goals of this conference." (On Global South values in AI governance)
- "AI is forcing us... there is no other way. We have to cooperate and partner with each other. We have to create an R&D ecosystem because one country or one part of the world cannot do everything." (On Global South interdependence)
- "The road map ahead requires clear ethical guardrails, transparent implementation, institutional oversight, capacity building within the judiciary, and continuous public trust." (Closing synthesis of AI governance requirements)
Speakers & Organizations Mentioned
- Dr. Vidushi Shaki — Indian civil servant (20+ years); expertise in emerging technology, policy, and AI impact; represented India's Global South AI leadership
- Dr. Ricardo — Pathologist, Senior Lecturer, Head of Department of Basic Science, Fiji School of Medicine; Global South clinical perspective; expertise in telemedicine and digital healthcare
- Dr. Aurva — Medical doctor with PhD from National Institute of Mental Health and Neuroscience; expertise in child/adolescent psychiatry, neuro-imaging; policy editor for The Lancet Southeast Asia
- Norwegian Development Agency Representative — Senior Global Health Advisor; focus on international health financing, digital innovation, and equitable health technology access
- Justice(s) Mah Shah — Indian Supreme Court justice; judicial perspectives on AI and sentencing
- National University of Delhi — Host institution for the panel
- India AI Impact Team — Dr. Son Gupta and team; organizational support
- The Lancet Regional Health Southeast Asia — Publishing platform for regional health research
- LPI (International regulatory network for AI in health) — Switzerland-based independent nonprofit; partner on global regulatory frameworks, navigator tool, health technology assessment frameworks; India identified as a pioneer signatory country
- WHO Southeast Asia Region — Referenced for regional health concerns and cooperation
Technical Concepts & Resources
- Adaptive Governance Models: Two-track approach combining:
  - Regulatory guardrails across the AI product lifecycle (development → deployment → post-market monitoring)
  - A contextualized AI health technology assessment framework
- Navigator Tool (LPI): Implementation-planning instrument that maps governance gaps to actionable steps across responsibility dimensions; uses a circular/cyclic approach rather than a linear waterfall
- Baseline Assessment & Discovery Workshop: Initial phases of governance maturity evaluation and capacity planning
- AI Health Technology Assessment Blueprint: Framework integrating:
  - Clinical value and economics
  - Clinical workflows
  - Ethics and equity
  - Real-world learning and evidence mechanisms
- Digital Public Infrastructure (India G20): Public trust doctrine emphasizing citizen ownership of created assets and governance structures
- LLMs (Large Language Models): Referenced in the context of last-mile digitization in health; need for national standardization frameworks
- Rasa (Nine Sentiments): Indian philosophical concept (love, laughter, sorrow, anger, energy, fear, distress, wonder, peace) used as a framework for empathetic human decision-making in law
- Real-World Evidence & Post-Market Monitoring: Longitudinal follow-up, continuous validation, and sustainability assessment beyond initial deployment
- Data Provenance & Transparency Protocols: Standardized documentation of dataset origins, geographic bias assessment, and disclosure requirements for AI-assisted research
- Hackathons & Competitions: Innovation-engagement model for developing locally contextualized AI solutions (referenced for ASHA worker tools)
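To make the provenance concept concrete, here is one possible shape for a dataset provenance record supporting geographic-bias disclosure. This is purely illustrative: the field names are hypothetical and not drawn from any standard or tool named by the panel.

```python
# Illustrative sketch of a dataset provenance record; all field names
# are hypothetical, chosen to mirror the protocol goals above.
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    dataset_name: str
    source_institution: str
    collection_regions: list      # where the data was actually gathered
    collection_years: tuple       # (start, end)
    bias_assessment_done: bool = False
    notes: str = ""

    def covers_region(self, region: str) -> bool:
        """True if the target deployment region appears in the
        documented collection regions."""
        return region in self.collection_regions

record = ProvenanceRecord(
    dataset_name="chest-xray-demo",
    source_institution="Example Teaching Hospital",
    collection_regions=["Northern Europe"],
    collection_years=(2015, 2019),
)
# A deployment reviewer can see at a glance that South Asia lies
# outside the documented collection footprint:
print(record.covers_region("South Asia"))   # prints: False
```

The design choice is simply that provenance travels with the dataset as structured, machine-checkable metadata, so the geographic-mismatch question from the panel can be asked before deployment rather than after harm occurs.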
Document Quality Notes
⚠️ Transcript Quality: The provided transcript contains significant repetition, fragmentation, and apparent OCR/transcription errors (e.g., "sentiment" → "momentto," incomplete sentences, unclear passages). The summary has been constructed by synthesizing coherent themes and statements from this imperfect source. Some quotations have been slightly reconstructed for clarity while preserving original meaning.
