Safeguarding Children with Responsible AI
Executive Summary
This panel discussion from the UN AI Impact Summit in Delhi addresses how AI systems can be designed responsibly to protect children while enabling their learning, creativity, and agency. The session emphasizes that reactive, post-harm regulatory models (like those that emerged with social media) are insufficient for AI, and that safety must be designed into systems from the outset through age-appropriate experiences, transparent evaluation, child participation in governance, and culturally diverse development.
Key Takeaways
- Safety-by-design is non-negotiable. Companies must embed age assurance, parental controls, data privacy protections, and appropriate content moderation before launching child-facing AI systems. Reactive regulation has failed with social media; it will fail worse with AI.
- AI literacy is foundational, not optional. Children need to understand how AI works, its biases, and its limitations before using it. Schools must teach critical thinking and agency, supported by teachers with adequate training and resources.
- Continuous, independent evaluation in real-world contexts is essential. Platform-provided analytics are insufficient. Researchers and regulators need access to data and the ability to conduct behavioral studies that uncover hidden harms (profiling, manipulation, reduced curiosity).
- Children are not miniature adults and must have a voice in governance. They are the most effective users of AI and can articulate what works and what doesn't. Policies and designs must involve them as participants, not just subjects.
- Global cooperation on baseline standards, local flexibility on implementation. Harmonize core protections (age assurance, privacy, content standards) while allowing jurisdictions to customize experiences around cultural norms, economic contexts, and regional vulnerabilities.
Key Topics Covered
- AI's unique risks for children — The distinction between AI and social media platforms; simulated intimacy, emotional dependency, and manipulation
- Age assurance and privacy-preserving technology — Technical approaches to verify age while protecting privacy
- AI literacy and education — How to teach children critical thinking about AI rather than just how to use it
- Safety by design — Embedding protections into systems before deployment rather than reacting to harms
- Transparency and evaluation — Real-world, contextual studies of AI impact on children; behavioral research beyond platform-provided analytics
- Governance frameworks — Moving beyond post-harm regulation to proactive, multi-stakeholder approaches
- Cultural diversity and monoculture risk — Ensuring AI development is not dominated by global north models
- Inclusion and accessibility — Solutions for unconnected, offline, and disabled children
- Parental controls and redress mechanisms — Practical safeguards and accountability structures
- Child agency and participation — Involving children in the governance and design of AI systems
Key Points & Insights
- AI is fundamentally different from social media. It is not a platform but increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, and form identity. Children cannot reliably distinguish authentic human connection from artificial intimacy, especially when systems are persuasive and always available.
- Post-harm regulation will not work for AI. Governments and companies must move toward safety-by-design models, embedding guardrails before deployment. Early adopters like OpenAI are demonstrating receptiveness to this approach through age assurance technology and parental controls.
- AI literacy must precede AI use. Children should learn foundational skills and critical thinking before relying on AI tools—analogous to learning basic math before using calculators. The emphasis should be on teaching children how to think rather than what to think.
- Personalization and profiling pose serious risks. Behavioral research shows children are exposed to sophisticated profiling and influence through influencers and non-formal advertising on platforms like TikTok—often invisible to parents and children themselves. Current AI opacity is a feature, not a bug, making external evaluation essential.
- Age verification and privacy-preserving technologies are now viable. The Open Age Alliance and cryptographic/biometric solutions enable age-appropriate experiences without exposing children's identity. There is no technical excuse for companies not to implement robust, privacy-preserving age assurance.
- Real-world, contextual evaluation is critical. Lab testing is insufficient. Continuous behavioral studies simulating real child interactions (e.g., 16-year-olds vs. adults with identical profiles) are necessary to understand actual exposure to harms like harmful content recommendations.
- Data protection and privacy tools already exist but are underutilized. Regulations like GDPR provide mechanisms to control profiling and extract data from platforms for independent evaluation. The challenge is enforcement, not invention.
- Children must be at the center of governance, not an afterthought. Involving children in AI policy design, curriculum development, and safety evaluation ensures systems are actually fit for purpose and builds their agency and critical capacity.
- Cultural and regional diversity in AI development is essential. A monoculture of AI models from the global north risks erasing cultural uniqueness and human diversity. Different jurisdictions have different vulnerabilities and norms that require localized solutions.
- Agency is context-dependent. Individual capacity to use AI responsibly is inseparable from broader socioeconomic and institutional contexts, particularly in the global south. Solutions must account for access inequality and power dynamics.
Notable Quotes or Statements
- Baroness Joanna Shields (opening): "How we manage AI on behalf of children will be the clearest test yet of whether we are governing this technology responsibly and for the public good."
- Baroness Joanna Shields: "AI is not a platform. It is increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. When a model says to a child, 'I care. I understand,' that's not conscience. That's code."
- Rahul John Aju: "AI will not take your job, but someone using AI can. But at the same time, the most important thing in the world of AI is also to be as human as possible."
- Rahul John Aju (on AI literacy): "We should learn how to write essays. We should learn how to sing. Maybe then you should use AI. You should know the basics and the foundations before you start using AI."
- Tom Hull (LEGO Education): "AI literacy is ultimately handing children a screwdriver and saying, here is a fairly complex box, but let's take it apart and let's understand what's under the hood."
- Chris Lehane (OpenAI): "This technology is an incredibly leveling technology. It scales the ability of anyone to think, to learn, to create, to build, to produce. The question is, do you actually encourage people to be able to use it that way?"
- Maria Bielikova (Kempelen Institute): "Current AI is not transparent, but this is not a bug. This is a feature. We have to provide behavioral studies... We cannot just take data from social media providers."
- Baroness Joanna Shields: "If we have a world where we are accepting models from just the global north, I really believe we will lose so much of our cultural diversity, our uniqueness as people."
- Tom Hull (closing reflection): "You've got to think about what kind of ancestor you want to be. Surely now this is our chance to make some really sharp decisions and pay it forward for the next generation."
- Moderator Thomas Davin (closing summary): "If we have a model that actually gives the right answer to children all the time, they might actually lose their sense of curiosity... Can we design a model that actually gives the wrong answer on purpose so that the child actually struggles, because grit is going to be one of the huge skills of tomorrow?"
Speakers & Organizations Mentioned
Government & International Organizations:
- Baroness Joanna Shields — UK government roles in internet safety, child online safety coalitions
- Thomas Davin — Director of the Office of Innovation, UNICEF
- Under-Secretary-General Amandeep Gill — UN Special Envoy for Digital and Emerging Technologies
Industry & Research:
- Rahul John Aju — AI entrepreneur, founder of ARM Technologies/Think Academy, featured young AI innovator ("AI Kid of India")
- Chris Lehane — Chief Global Affairs Officer, OpenAI
- Tom Hull — Vice President and General Manager, International, LEGO Education
- Maria Bielikova — Director, Kempelen Institute of Intelligent Technologies
Other Organizations:
- OpenAI
- LEGO Education
- Kempelen Institute of Intelligent Technologies
- International Legal Foundation
- Open Age Alliance
- Digital Futures Lab
- UN AI Impact Summit, Delhi
Technical Concepts & Resources
Tools & Technologies Mentioned:
- Age assurance technology — Privacy-preserving cryptographic and biometric systems enabling age-appropriate experiences without exposing identity (an illustrative sketch follows after this list)
- Open Age Alliance — Organization harmonizing age assurance standards across platforms; generates "age keys" that travel with users
- Rescue AI — Tool created by Rahul that analyzes terms of service and contracts to identify high-risk and low-risk clauses
- NotebookLM — Google tool generating videos and podcasts from text content; used for personalized learning
- StudyFetch — Tool converting chapter content into gamified learning experiences
- Think Academy — Free online course platform (7+ lakh, i.e., 700,000+, participants) covering topics from AI fundamentals to LLM fine-tuning
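
The minimal sketch below illustrates the general idea behind a privacy-preserving "age key": a trusted age-assurance provider attests to an age bracket, never to an identity, and the platform verifies the attestation without learning who the child is. Everything in the sketch is an assumption for illustration (the provider secret, the token format, the bracket labels); real schemes of the kind discussed in the session would use public-key signatures or zero-knowledge proofs rather than the shared-secret HMAC used here to keep the example self-contained.

```python
# Illustrative sketch only: a privacy-preserving age-bracket attestation.
# The provider signs an age *bracket* with no identifying fields; the platform
# verifies the signature and expiry, learning only the bracket.

import base64
import hashlib
import hmac
import json
import time

PROVIDER_SECRET = b"demo-secret-held-by-age-assurance-provider"  # hypothetical

def issue_age_key(age_bracket: str, ttl_seconds: int = 3600) -> str:
    """Provider side: sign an age-bracket claim containing no identity data."""
    claim = {"age_bracket": age_bracket, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    tag = hmac.new(PROVIDER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{tag}"

def verify_age_key(token: str) -> dict | None:
    """Platform side: check signature and expiry; learn only the bracket."""
    payload, tag = token.rsplit(".", 1)
    expected = hmac.new(PROVIDER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or tampered token
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim if claim["exp"] > time.time() else None

token = issue_age_key("13-15")
claim = verify_age_key(token)
if claim:
    print(f"Serve the {claim['age_bracket']} experience")  # e.g. stricter defaults
```

The property that matters is the one the panel emphasized: the platform receives only a signed age bracket and an expiry, never a name, birthdate, or identity document.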
Methodologies & Concepts:
- Safety-by-design — Embedding protections before system deployment rather than reacting to harms
- Behavioral research / bot simulation studies — Sending bots with simulated user profiles (e.g., a 16-year-old vs. an adult) to track algorithmic exposure to content and recommendations (a minimal simulation sketch appears after this list)
- Data privacy and data sovereignty — Core principles for AI in education; enforcing existing GDPR-like regulations to prevent unaccountable profiling
- Age-appropriate design — Customizing experiences based on developmental maturity and capability
- AI literacy curriculum — Teaching children to understand data, sensing, predictability, bias, and the "under the hood" mechanics of AI systems
- Contextual evaluation — Real-world testing in deployment contexts, not just lab testing
- Child-centered governance — Involving children as participants in policy design and safety evaluation
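
To make the bot-simulation methodology concrete, here is a minimal sketch. It assumes a hypothetical fetch_recommendations() stand-in for the platform or model under study; the profiles, content categories, and exposure weights are illustrative assumptions, not findings reported in the session.

```python
# Minimal sketch of a bot-simulation audit: two synthetic profiles that differ
# only in declared age are run through the same number of sessions, and the
# share of flagged items (e.g. harmful or unlabeled-advertising content) each
# profile is shown is compared.

import random
from collections import Counter

def fetch_recommendations(profile: dict, n: int = 20) -> list[str]:
    """Hypothetical stand-in for the platform under study. A real audit would
    drive an instrumented app or API session with this profile instead."""
    pool = ["educational", "entertainment", "influencer_promo", "harmful"]
    weights = [3, 5, 4, 2] if profile["age"] < 18 else [4, 5, 2, 1]  # toy assumption
    return random.choices(pool, weights=weights, k=n)

def run_audit(profile: dict, sessions: int = 50) -> Counter:
    """Simulate repeated sessions and tally what the profile was shown."""
    exposure = Counter()
    for _ in range(sessions):
        exposure.update(fetch_recommendations(profile))
    return exposure

teen = {"age": 16, "interests": ["gaming", "music"]}
adult = {"age": 35, "interests": ["gaming", "music"]}  # identical except for age

teen_exposure, adult_exposure = run_audit(teen), run_audit(adult)
for category in ["harmful", "influencer_promo"]:
    t, a = teen_exposure[category], adult_exposure[category]
    print(f"{category}: teen saw {t}, adult saw {a} ({t / max(a, 1):.1f}x)")
```

A real study would drive instrumented sessions against the actual platform and apply statistical tests to the exposure counts; the structure (identical profiles differing only in age, repeated sessions, per-category tallies) is what the methodology contributes.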
Data & Research Findings:
- Global learning crisis: 7 out of 10 children cannot read and explain a simple text by age 10
- Teacher readiness gap: 80% of US teachers believe AI literacy is foundational; only 41% feel ready to teach it
- Profiling exposure: Children on TikTok see less formal advertising but are exposed to 5x more profiling and influencer-driven content not labeled as advertising
- Teachers union partnership: OpenAI is working with the largest US teachers union to train 400,000 teachers in AI-supported, individualized teaching
Compiled by: AI Research Summarization System
Source: UN AI Impact Summit, Delhi — Session on "Safeguarding Children with Responsible AI"
Date: [Derived from transcript; exact date not specified]
