AI and Children: Turning Safety Principles into Practice

Executive Summary

This summit session convened government officials, tech industry leaders, child rights experts, and youth advocates to address the urgent need to translate AI safety and inclusion principles into actionable practice for children. The overarching message emphasized that AI must be designed as safe, inclusive, and empowering by default—not retrofitted afterward—while ensuring children participate as creators and governance partners, not merely users.

Key Takeaways

  1. Principles Must Become Practice: Generic ethical AI frameworks mean nothing unless embedded into product design, procurement standards, regulatory systems, data governance, and capacity building. The gap between intent and implementation is where children remain at risk.

  2. Build for Inclusion from Day One, Not Later: Systems designed for English speakers with high-speed internet in developed nations will inherently exclude rural, multilingual, and disabled children. Inclusion requires diverse data, diverse teams, and testing in marginal contexts—upfront, not as an afterthought.

  3. Children Are Stakeholders in Their Own Future: The most transformative insight: children must move from being subjects of AI governance to partners in designing and governing it. Youth councils, co-design processes, and meaningful voice in decision-making are not nice-to-haves—they are essential.

  4. Accountability & Transparency Are Non-Negotiable: When AI fails (and it will), children must not bear the cost. This requires explainability, redress mechanisms, human-in-the-loop design, and regulatory teeth—not voluntary compliance.

  5. This Is a Whole-of-Society Responsibility: No single actor (government, tech, civil society, educators, parents) can solve this alone. The panel's diversity underscores that effective child-safe AI requires simultaneous action across regulation, product design, education, parental support, and research.

Summit Session Summary


Key Topics Covered

  • Child-Centered AI Design & Governance: Frameworks for embedding child safety and inclusion into AI systems from inception
  • Digital Rights & Data Protection: GDPR compliance, data literacy, and protection from harmful synthetic content and child sexual abuse material (CSAM)
  • AI as Educational Opportunity: Personalized learning, adaptive pedagogies, and accessibility for children with disabilities
  • AI Literacy & Critical Thinking: Building capacities for children to understand, evaluate, and challenge AI systems
  • Workforce Readiness & Economic Inclusion: Skilling, upskilling, and equitable access to AI opportunities for marginalized youth
  • Parental Guidance & Support: Creating mechanisms to help parents navigate and protect children's AI interactions
  • Global & National Governance Initiatives: India's AI governance framework, Norway's digital upbringing strategy, international regulatory approaches
  • Accountability & Risk Mitigation: Addressing emotional dependency, over-reliance, bias, and systemic failures in AI systems
  • Inclusive AI by Design: Ensuring systems serve all children, not just the digitally privileged; addressing linguistic, disability, and geographic diversity

Key Points & Insights

  1. Scale & Urgency: India has more than 250 million school-going children, and one in three of its internet users is under 18; 85.5% of households own a smartphone and 86% have internet access. Digital exposure is immediate and pervasive, demanding urgent governance.

  2. The Double-Edged Sword: AI offers transformative potential (personalized learning, accessibility, knowledge democratization) but carries grave risks (emotional dependency, CSAM generation, algorithmic bias, over-reliance that weakens critical thinking). Governance must "sharpen the edge of opportunity while blunting the edge of risk."

  3. Safety by Design, Not Afterthought: Core principle articulated repeatedly: age-appropriate safeguards, data protection, built-in accountability, and transparent mechanisms must be foundational—retrofitting fails and leaves children vulnerable.

  4. Children as Creators, Not Mere Users: The most powerful insight: children must participate in designing and governing AI, not just consuming it. This is "nation building" and "planet building," shifting from passive users to active co-creators.

  5. Verified Positive Impact Example: In Tonk district, Rajasthan, the AI-enabled personalized learning initiative "Parai with AI" achieved a 96% class 10 mathematics pass rate within six weeks, demonstrating concrete educational gains when properly implemented.

  6. Concerning Data on Digital Harms:

    • 64,000+ CSAM cases in India (2024 alone)
    • Norway: 72% of 9-11 year-olds use social media despite 13-year legal minimum; 95% have mobile phones; nearly half of 13-14 year-olds have seen violent/scary content
    • 89% accuracy rate for AI-based psychological condition detection shows potential but also risks of over-reliance on automation
  7. Inclusion Crisis: 90% of AI systems globally are built by 10% of the world's population; most systems fail for non-English speakers, rural contexts, and Global South needs. Early facial-recognition systems worked reliably only for white faces: systemic bias is built in unless deliberately addressed.

  8. Governance Gaps: Long-term effects of AI companions, personalized learning apps, algorithm-driven feeds, and synthetic media on child development, mental health, and education remain largely unstudied and unknown.

  9. Democratization via AI Literacy: The mantra "learn to learn" paired with AI access can equalize knowledge and opportunity—but only if access and curriculum inclusion actually reach marginalized communities (first-generation learners, rural women, linguistic minorities).

  10. Youth Demand: 54,000 young people across 184 countries co-authored a children's and youth statement with eight concrete demands—evidence that young people are ready to be partners in solutions, not passive subjects.


Notable Quotes or Statements

Secretary S. Krishnan (Ministry of Electronics & IT, India):

"We need to look at this not with fear but with understanding that this is something which can truly be very meaningful for another generation and very meaningful for humanity itself... Look at the opportunities and see what it can do in terms of increasing human potential, human capability, and how productivity can go up and economic progress can reach everybody."

Prasidhi Singh (13-year-old UNICEF Youth Advocate, speaking on behalf of 54,000 young people):

"When young people shape AI as equal partners, it is nation building, it is planet building, it builds creators, not merely consumers of the future. So yes, as Gen AI, we want the future to be smart. Certainly, we want the future to be smart, but we also want it to be fair. The only question left is whether you are ready to build it with us."

Professor Ajay Kumar Sood (Principal Scientific Adviser, Government of India):

"The challenge is that we still do not fully know the long-term effects of growing up with AI companions, personalized learning apps, algorithm-driven feeds, and synthetic media. More evidence is needed... The need to embed child-specific safeguards and guardrails into governance frameworks is not a choice—it is a must."

Thomas Davin (UNICEF, Global Innovation Director):

"AI must be safe by design, not as an afterthought. It must be inclusive by default and empower children as creators, not just users, having voice in governance and the ability to challenge whether it works."

Ambassador May-Elin Stener (Norway):

"We cannot allow the algorithms and screens to take over childhood. Children must be protected from harmful content, abuse, commercial exploitation, and misuse of personal data."

Mr. Gokul Subramaniam (Intel India):

"AI cannot become the next generation digital babysitter. It cannot take the human out of the loop. The words that come from AI are not as powerful and accountable as words from a human, and the generation growing up with AI must know what that means."


Speakers & Organizations Mentioned

Government Officials:

  • Secretary S. Krishnan (Ministry of Electronics & IT, Government of India)
  • Professor Ajay Kumar Sood (Principal Scientific Adviser, Government of India)
  • Dr. Sanjiv Sharma (Member Secretary, NCPCR—National Commission for Protection of Child Rights)

International Representatives:

  • Ambassador May-Elin Stener (Norway, to India, Sri Lanka, Bhutan, Maldives)
  • Thomas Davin (Global Innovation Director, UNICEF)
  • Henita Ridley (Chief AI, UNICEF) [Moderator]
  • Cynthia McCaffrey (UNICEF India Representative)

Industry Leaders:

  • Gokul Subramaniam (President, Intel India; Co-Chair, FICCI AI Committee)
  • Hector D. Rivera (Director, Responsible AI Public Policy, Microsoft)
  • Ajay Vij (Senior Country Managing Director, Accenture)
  • Kumar Anraata (Vice President, Capgemini)

Youth Advocate:

  • Prasidhi Singh (13 years old, UNICEF India Youth Advocate from Chengalpattu, Tamil Nadu)

Institutions & Organizations:

  • UNICEF (Office of Innovation)
  • FICCI (Founding organization of the summit)
  • NCPCR (National Commission for Protection of Child Rights, India)
  • Ministry of Electronics & IT, Government of India
  • Ministry of Education, Government of India
  • Norwegian Government
  • Microsoft, Intel, Accenture, Capgemini

Technical Concepts & Resources

AI Tools & Initiatives Mentioned:

  • Parai with AI: AI-enabled personalized learning initiative in Tonk district, Rajasthan; achieved a 96% class 10 mathematics pass rate within six weeks
  • ChatGPT: Referenced as an example of generative AI producing polished CVs, and of the risks of over-reliance
  • AI Chatbots: Discussed for mental health support and as examples of emotional dependency risks
  • AI-Based CSAM Detection Tool: NCPCR is developing an AI-based tool to proactively identify child sexual abuse material circulating on social media platforms
  • Health Authority Recommendations (Norway): Norway is issuing recommendations on children's screen use and time on social media
  • Microsoft Youth Council: Microsoft's new initiative to onboard young people in strengthening AI product safety

Regulatory & Policy Frameworks:

  • GDPR (General Data Protection Regulation): Referenced; Norway is proposing to raise the digital age of consent for data processing to 15 years
  • India's AI Governance Framework (November 2025): Includes pathway to build capability and widen access while strengthening safeguards
  • India's AI Safety White Paper (released two weeks before the summit): Focuses on strengthening AI safety through technical and legal frameworks; applies learnings from DPI 1.0 to AI safety and data governance
  • National Education Policy 2020 (India): Emphasizes AI's role in school curriculum; directs embedding AI and computational thinking from grade 3
  • UN Convention on the Rights of the Child: Referenced as alignment standard for trustworthy, equitable, empowering digital/AI environments

Metrics & Data Referenced:

  • India: 250+ million school-going children; one in three internet users under 18
  • India: 85.5% household smartphone ownership; 86% internet access
  • UK: 67% of teens use AI
  • US: ~40% of elementary-age children use AI-powered educational tools
  • CSAM in India: 64,000+ cases in 2024 alone
  • Norway: 95% of 9-11 year-olds own mobile phones (up from 85% a decade ago)
  • Norway: 72% of 9-11 year-olds use social media despite legal minimum age 13
  • Norway: Nearly 50% of 13-14 year-olds have seen violent/scary content
  • AI psychological condition detection accuracy: 89%
  • Global AI development: 90% of systems built by 10% of world's population

Responsible AI Principles (Microsoft):

  • Privacy, fairness, reliability, safety, security, transparency, inclusion (established 2018)

Key Concepts & Frameworks:

  • Safe by Design (not afterthought)
  • Inclusive by Default (not retrofitted)
  • Human-in-the-Loop (transparency and human oversight)
  • Data Literacy (children understanding digital footprints)
  • Digital Competence (parents and children)
  • Rights-Based Governance
  • AI Literacy (critical thinking, understanding how systems work)
  • Child Sexual Abuse Material (CSAM) detection and prevention
  • Algorithmic Bias (facial recognition, language, cultural representation)
  • Emotional Dependency & Over-Reliance on AI companions
  • Anthropomorphic Behavior (humans attributing human-like qualities to AI)

Methodological Notes:

  • The summit drew on a "children and youth statement" co-authored by 54,000 young people across 184 countries, plus focus group discussions across India
  • Emphasis on evidence gaps: long-term longitudinal studies on AI's impact on child development are largely absent
  • Whole-of-society approach advocated: simultaneous action across regulation, design, education, parental support, and research