Safe AI in Education: Practitioner Insights from the Global South

Executive Summary

This panel discussion from IIT Madras's Center for Responsible AI explores practical approaches to implementing safe, responsible AI in educational settings across the Global South, with comparative reference to the Global North. The panelists argue that safety and access are not competing priorities but complementary goals that require human oversight, teacher agency, and integration into existing pedagogical frameworks. Rather than waiting for perfect AI systems, stakeholders must proactively manage risks while scaling solutions that demonstrably improve educational outcomes for underserved populations.

Key Takeaways

  1. Safety in AI education is fundamentally a deployment problem, not just a design problem. Integration into existing systems with teacher oversight and institutional accountability structures matters more than the technical sophistication of the AI itself.

  2. Human-in-the-loop doesn't mean real-time supervision; it means institutional humans with authority to audit, override, and course-correct learning trajectories based on pedagogical objectives.

  3. The Global South and Global North need identical safety standards but substantially different implementation architectures due to differences in digital literacy, infrastructure, institutional capacity, and cultural trust models.

  4. Pragmatism with guardrails beats perfectionism with paralysis. A 95% accurate AI system reaching 10x more underserved children may be more ethically defensible than a 99% accurate system reaching no one.

  5. AI literacy for students, teachers, and parents is foundational. Without understanding what AI is, where it operates, and how to interact with it critically, safety frameworks become theater rather than practice.

Summary of IIT Madras Impact Summit 2026 Panel Discussion


Key Topics Covered

  • Safe-by-Design Frameworks for AI educational tools (Khan Academy's Khanmigo model)
  • Higher Education Challenges with AI in admissions, grading, and research workflows
  • Cognitive Decline Risks from over-reliance on AI systems replacing peer learning and critical thinking
  • Human-in-the-Loop Architecture as the central safety mechanism across educational contexts
  • Accountability & Liability Models for AI-caused educational harms
  • Global North vs. Global South Considerations in AI deployment (literacy, trust, infrastructure differences)
  • AI Literacy Initiatives for students, teachers, and parents (AI Summer initiative)
  • Scaling Mechanisms in government systems and the critical role of policy integration
  • Pedagogical Integrity as the foundation for safe AI adoption
  • Risk-Benefit Analysis and pragmatic deployment despite imperfect systems

Key Points & Insights

  1. Safe-by-Design is Deployment-Focused, Not Just Development-Focused

    • Swati (Khan Academy) emphasized that design for safety extends beyond the tool itself into how it's deployed, supervised, and integrated with existing systems. Teacher-directed deployment with human control significantly shifts the risk-benefit calculus toward safety.
  2. Human-in-the-Loop Has Multiple Valid Interpretations

    • While teacher supervision is ideal for in-school use, immediate 24/7 teacher availability is unrealistic. The solution isn't eliminating teacher involvement but ensuring some institutional human can audit the trajectory of student learning over time, even if not in real-time.
  3. Student Dependency on AI as Largest Perceived Risk

    • Audience polling showed ~70% believed "reduced critical thinking/student dependency" is the biggest safety concern. Shini elaborated: peer learning and social interaction are being replaced by "ask Claude, ask Gemini," with measurable cognitive and mental health consequences.
  4. Context Matters More Than One-Size-Fits-All Rules

    • Anil framed safety not as universal absolutes but as context-dependent frameworks addressing fairness, bias (e.g., accent-based content filtering), and inclusion. Shaveta emphasized that age-appropriateness, curriculum-alignment, and instructional pedagogy must anchor all deployment decisions.
  5. Safety Standards Should Not Differ by Development Level, But Implementation Contexts Differ Significantly

    • While Swati and Shaveta agreed that children's safety needs are universal whether in the Global North or South, the implementation environment varies dramatically: shared devices, semi-literate parents, lower digital literacy, and government-dependent trust structures in the Global South require different deployment scaffolding (B2G2C models, parent awareness campaigns, teacher capacity building).
  6. Accountability is Distributed; Liability is Specific

    • Responsibility for safety spans data curators, algorithm designers, developers, deployers, and end-users. However, liability should rest with whoever approves deployment at scale—typically the state/institution making the go/no-go decision, not the tool developer alone.
  7. Perfection is the Enemy of Scale and Access

    • Sunil's Gujarat example: a 5% error rate in AI-based reading diagnostics serving 3+ million children in government schools generated measurable improvements in reading proficiency and reduced dropout rates. Waiting for perfect systems means decades of preventable educational failure. Risk management, not risk elimination, is the operative principle.
  8. AI Literacy (Not Just Fluency) is a Prerequisite for Safe Adoption

    • The AI Summer initiative targets awareness-building: 82% of children aged 14-16 already use smartphones for learning; they're passively interfacing with AI without understanding it. Moving them from passive consumption to active, informed agency requires deliberate literacy programs (1M students/teachers already engaged across 10 Indian states).
  9. Socratic/Questioning Methods as Safety Mechanism

    • Khan Academy's design choice to guide students toward answers rather than provide them directly prevents both surface-level learning gains and "hallucination acceptance" (students accepting LLM outputs as truth). This pedagogical approach provides genuine safety against cognitive harm.
  10. Data Logging and Access Control are Distinct Issues

    • Audience and panelists agreed logs are valuable for personalized learning and teacher oversight but the critical question is who has access and under what conditions. Anonymization and minimization matter; unrestricted logging of student AI conversations creates new privacy and psychological safety risks.
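
A minimal sketch of the minimization-plus-pseudonymization idea in point 10 — `LogEntry`, `minimize`, and the salt parameter are illustrative names, not part of any tool the panel described:

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogEntry:
    student_ref: str        # salted hash, never the raw student ID
    flagged: bool           # True only for interactions a safety filter marked risky
    content: Optional[str]  # conversation text retained only when flagged

def minimize(student_id: str, message: str, risky: bool,
             salt: str = "per-deployment-secret") -> LogEntry:
    """Pseudonymize the student and drop conversation content unless flagged."""
    ref = hashlib.sha256((salt + student_id).encode()).hexdigest()[:12]
    return LogEntry(student_ref=ref, flagged=risky,
                    content=message if risky else None)
```

A scheme like this keeps enough signal for teacher oversight (flagged interactions survive, and the stable pseudonym lets a teacher audit one student's trajectory over time) while a separate access-control layer, not shown here, would govern who may read even the minimized records.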

Notable Quotes or Statements

"It's not design for safety that's just about designing the development of the tool, but it's in fact more about the deployment of the tool and the implementation, making it human-directed and making sure it's supervised and controlled by human beings." — Swati (Khan Academy India)

"What we're moving towards is essentially using AI to evaluate someone else's use of AI, as opposed to the individual. This is a problem." — Shini Patasarati (Ohio State), on admissions filtering

"There is nothing in the world—there is no solution in the world, tech or AI, that is perfect. Everything involves a trade-off... Yet are these systems 100% perfect? No. Do the benefits greatly outweigh the risks? Yes." — Sunil Vadwani (AI Foundation), on pragmatic deployment in Gujarat

"The single biggest reason for high dropout rates in grades 1–5 in the Global South is the inability of children to read proficiently in their mother tongue... 50–60% of children in grade five couldn't read in Gujarati effectively at the second grade level." — Sunil Vadwani, on the Gujarat reading remediation case study

"Safety is co-agency. How do we give that agency, especially for digitally unsophisticated learners?" — Krishnan Narayan, moderator, on structuring human-in-the-loop systems

"82% of children aged 14–16 use smartphones for learning. Right now they are interfacing with AI without realizing it. How do we move them from passive users to having agency, making informed, safe, and responsible decisions?" — Shaveta Sharma (Central Square Foundation), on AI literacy necessity


Speakers & Organizations Mentioned

Key Panelists

  • Swati — Country Director, Khan Academy India
  • Shini Patasarati — Professor, Ohio State University
  • Anil Ananthaswamy — Professor of Practice, IIT Madras; author of Why Machines Learn
  • Shaveta Sharma Kukra — Managing Director, Central Square Foundation (CSF)
  • Sunil Wadhwani — Founder, AI Foundation; investor and philanthropist

Moderator & Host

  • Krishnan Narayanan — Co-founder and President, itihaasa Research and Digital
  • Shatsson (Name partially unclear in transcript) — Representative, Center for Responsible AI, IIT Madras

Institutions & Organizations

  • IIT Madras — Hosting institution; home to the Wadhwani School of Data Science and AI and the Center for Responsible AI
  • Center for Responsible AI (CeRAI) — IIT Madras multi-disciplinary nonprofit research center focused on ethical and responsible AI
  • Khan Academy — Global learning platform; Khanmigo is their AI tutoring assistant
  • Central Square Foundation (CSF) — Education implementation and scale-up organization
  • Ohio State University — Implementing AI fluency program for 60,000+ students across 100+ disciplines
  • AI Foundation (founded by Sunil Wadhwani) — Develops AI solutions for health, education, and agriculture in the social sector
  • Government of Gujarat & Government of Rajasthan — Deployment partners for reading remediation AI systems
  • Global Learning Council — Organizer of college-level hackathons in India
  • Prime Minister's Office / Government of India — Policy partners; announced Center of Excellence for AI in Education at IIT Madras (Bodhen AI Conclave)

Technical Concepts & Resources

AI Models & Systems Referenced

  • Khanmigo — Khan Academy's AI tutoring assistant (40+ lakh students, 14 lakh teachers globally; 2 lakh students, 2 lakh teachers in India)
  • LLMs (Large Language Models) — Claude, Gemini mentioned as systems students increasingly use
  • A-War — Fairness auditing tool developed by Shini Patasarati's team for online evaluation of AI systems in deployment contexts

Methodologies & Frameworks

  • Co-Intelligence Systems Architecture (6-layer model)
    • Layers 1–3: Infrastructural (data, models, agents)
    • Layers 4–6: Phenomenological (human-AI interaction, life experiences, ecosystem view)
  • Socratic Method — Question-guided learning approach to prevent hallucination acceptance and surface learning
  • Human-in-the-Loop Architecture — Institutional oversight, teacher auditing, and course correction of student trajectories
  • Safe-by-Design Approach — Integrating content from vetted sources, no capture of PII, logging risky interactions, teacher oversight
  • Risk-Benefit Analysis — Pragmatic framework for deciding whether imperfect systems should scale (e.g., 95% accuracy serving 10M vs. 99% accuracy serving 0)
  • B2G2C Model — Business-to-Government-to-Consumer deployment pathway for reaching underserved communities
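
The Risk-Benefit Analysis framework above reduces to simple arithmetic, sketched here under the panel's own hypothetical numbers (95% accuracy at 10M reach vs. 99% accuracy at zero reach); `correctly_served` is an illustrative name, not a method from the talk:

```python
def correctly_served(accuracy: float, reach: int) -> int:
    """Expected number of children receiving a correct diagnosis."""
    return round(accuracy * reach)

# The panel's framing: an imperfect system that ships vs. a perfect one that doesn't.
deployed   = correctly_served(0.95, 10_000_000)  # 9,500,000 correct, 500,000 errors
undeployed = correctly_served(0.99, 0)           # 0 — accuracy is moot at zero reach
```

The 500,000 errors are the cost that risk management (teacher oversight, re-assessment, escalation paths) must absorb; the comparison only favors deployment if those errors are recoverable, which is the panel's implicit assumption.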

Key Research & Data Points

  • Nature Paper (recent) — Showed LLM use correlates with NIH grant success but produces "safe bet" research ideas rather than high-risk/high-reward innovation
  • AI Summer Initiative — 1M students and teachers across 10 Indian states engaged in AI literacy (figures current as of roughly a month before the talk)
  • CSF Digital Society Surveys — Found lower digital literacy in India correlates with higher reliance on external systems for approval/trust
  • Audience Poll Results:
    • ~70% identified "student dependency/reduced critical thinking" as the biggest safety risk
    • ~50–60% favored decentralized (teacher/principal-level) over centralized mechanisms
    • Split opinions on whether to keep anonymized/minimized logs of student AI conversations

Practical Case Studies

  • Khan Academy Pilot in Latin America — Khanmigo flagged a student at risk for self-harm; teacher intervention prevented harm
  • Gujarat Reading Remediation Initiative
    • 20-second AI assessment of reading proficiency per child
    • Diagnostic + remediation plan generation
    • Cohort-based teacher/parent guidance
    • Results: impressive enough for government mandate (all 3M+ children in Gujarat government schools)
    • Expansion: Rajasthan (all schools mandated) and 10 states scaling
    • Projected: millions of children improving reading proficiency by end of next year
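
The Gujarat flow above (20-second assessment → diagnosis → remediation plan → cohort-based guidance) can be sketched as follows; `Diagnosis`, `remediation_plan`, and `cohorts` are hypothetical stand-ins, since the panel did not detail the actual system's logic:

```python
from dataclasses import dataclass

@dataclass
class Diagnosis:
    child_id: str
    assessed_level: int  # grade-equivalent reading level from the ~20-second assessment
    target_level: int    # level expected for the child's grade

def remediation_plan(d: Diagnosis) -> list[str]:
    """One practice stage per level of gap — a stand-in for the real remediation logic."""
    gap = max(0, d.target_level - d.assessed_level)
    return [f"stage-{d.assessed_level + i + 1} reading practice" for i in range(gap)]

def cohorts(diagnoses: list[Diagnosis]) -> dict[int, list[str]]:
    """Group children by assessed level so teachers and parents get cohort guidance."""
    grouped: dict[int, list[str]] = {}
    for d in diagnoses:
        grouped.setdefault(d.assessed_level, []).append(d.child_id)
    return grouped
```

The cohort step is what makes the design teacher-facing rather than fully automated: the system proposes groupings and plans, and the teacher remains the institutional human who course-corrects, consistent with the human-in-the-loop architecture described earlier.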

Policy & Governance References

  • DPIs (Digital Public Infrastructure) — New infrastructure layer being created by Government of India's Center of Excellence for AI in Education (led by IIT Madras)
  • State Curriculum Alignment — Critical for Khanmigo deployment; content quality-checked and aligned to state board standards
  • Teacher Capacity Building — Essential component of safe deployment, especially in Global South where institutional literacy is lower

Additional Context

Event Details

  • Title: "Safe AI in Education: Practitioner Insights from the Global South"
  • Venue: Impact Summit 2026, IIT Madras, Chennai (date and location inferred from context)
  • Format: Panel discussion with moderator and live audience polling
  • Duration: ~90 minutes (partial transcript; some rapid-fire questions reserved for later)

Underlying Tensions Highlighted

  1. Safety vs. Access — Not truly contradictory, but require different implementation strategies for Global South
  2. Perfection vs. Pragmatism — Waiting for zero-error AI systems delays benefits to millions
  3. Centralized vs. Decentralized Oversight — Teacher/principal-level control preferred by audience, but government-scale deployment requires state coordination
  4. Standardization vs. Context-Sensitivity — Universal safety principles but differentiated implementation
  5. Data Collection vs. Privacy — Logs enable personalization and oversight but create surveillance risks if access is uncontrolled