AI in Schools: Protecting Learners and Empowering Educators

Executive Summary

This panel discussion addresses the urgent intersection of AI deployment in education with child safety, online protection, and regulatory frameworks. Multiple panelists emphasize that the challenge is not whether to integrate AI in schools but how to do so safely: through shared responsibility across platforms, governments, educators, and parents, and, critically, by centering youth voices in policy decisions. The conversation reveals a global shift from voluntary safety commitments to binding obligations, and frames the digital divide as a threat running in parallel with AI-specific harms.

Key Takeaways

  1. Center youth voices and developmental biology in policy design. Children's perspectives on technology harms are nuanced and actionable; adolescent neuroscience shows that impulse-control "brakes" are still forming, making design, not willpower, the ethical lever.

  2. Regulation works best as part of a multi-stakeholder system with enforcement teeth. Fines must exceed profit incentives; design-by-default requirements matter more than content takedown; shared accountability must be backed by resources and capacity-building.

  3. Address the business model, not just the symptoms. Algorithmic recommendation, infinite scroll, and attention-based monetization are root causes. Regulatory focus should shift from content moderation to incentive restructuring and transparency.

  4. Bridge the digital divide alongside protecting from digital harms. Excluding 30%+ of the world from AI access/literacy is as damaging as exposing unprepared children to algorithmic harms. Voice-enabled, accessible technologies are part of the solution.

  5. Build new institutional literacy around AI in education. Schools and governments often move slower than vendors; teachers need training on AI biases and limitations; children need critical thinking frameworks to navigate generated content and subtle deception.

Key Topics Covered

  • AI in education systems: curriculum integration, deployment models, and vendor partnerships with schools
  • Child safety and online harms: misinformation, deepfakes, image-based AI abuse, infinite scrolling, algorithmic recommendation systems
  • Global regulatory approaches: age-gating (Australia's ban under 16), EU AI Act, Spain's proposed coalition, New Zealand's cautious stance
  • Shared responsibility frameworks: accountability distribution among platforms, governments, educators, parents, and children
  • Youth agency and participation: necessity of centering children's voices in technology policy design
  • Digital divide and equitable access: 30%+ of global population without internet access; voice-enabled AI as potential solution
  • Surveillance and data protection in schools: profiling risks, biometric systems, data boundaries, impact assessments
  • Business model critique: algorithmic incentive structures, attention economy, cost-benefit analysis by platforms
  • Digital literacy and critical thinking: teaching children to identify AI hallucinations, biases, and subtle deception
  • Community-level interventions: cyber agencies, reporting mechanisms, local awareness programs

Key Points & Insights

  1. Youth perspectives are non-negotiable: Children draw clear red lines on image-based AI and creative content appropriation but find legitimate uses (e.g., academic support, learning assistance). They must be consulted—not just informed—during policy development. UNICEF consultations with 50,000+ children across 180 countries demonstrate this is operationally feasible.

  2. The "ambient AI media environment" requires new vocabulary: AI is not a discrete tool but an embedded, invisible force shaping what children see through algorithmic feeds. Current regulatory frameworks, designed for content moderation, are inadequate for feeds that predict and personalize in real time.

  3. Adolescent developmental neuroscience matters: Ages 11–25 represent a critical window when impulse control (brakes) is still forming while learning speed is rapid. This creates asymmetric risk—teens are incentivized by algorithmic design to stay engaged before their decision-making capacity fully develops. This is not a character flaw but a biological reality requiring design-level solutions.

  4. Regulation is necessary but insufficient without enforcement and business model reform: EU fines (€2.7B against Meta for GDPR/child protection violations) are absorbed as a cost of doing business by companies running cost-benefit analyses. Without addressing the underlying business model (selling attention and data), regulation treats symptoms rather than root causes.

  5. The digital divide is as urgent as algorithmic harms: 30%+ of the global population lacks internet access. Excluding them from AI literacy and access perpetuates inequality. Voice-enabled AI solutions are emerging as potential pathways to reach underserved populations (e.g., India's AI mission targeting 500M offline users).

  6. Surveillance-by-default in schools is a critical risk: AI profiling of students across their entire educational timeline (grade 1–college) poses long-term harms. Strict data boundaries, impact assessments before deployment, and explicit bans on student surveillance are essential guardrails.

  7. Big tech controls the means of distributing knowledge about itself: During Australia's regulatory push, platforms muddied public understanding through targeted information control. Citizens may support regulation (70% in Australia) while doubting its efficacy (25% believe it will work), partly due to asymmetric information.

  8. India's rapid, culturally grounded response offers a model: January 3rd response to non-consensual intimate imagery demonstrates that agile, common-sense regulation rooted in cultural values can work with broad public support. Young people globally viewed it as an international standard worth adopting.

  9. Shared responsibility requires resource distribution to communities: Placing burden on parents, educators, and self-regulation fails without systemic support. A "village" metaphor only works if the village has resources—training, tools, and institutional backing.

  10. Academic integrity in the ChatGPT era requires new frameworks: International Baccalaureate and similar systems have integrity rules but no mechanisms to limit or guide AI use. The solution isn't restriction but educating students and teachers on biases, hallucinations, and critical evaluation while leveraging education-specific platforms that promote discussion over direct answers.


Notable Quotes or Statements

  • Nikki (Youth and Media Researcher): "The question about AI is not 'how much AI is in children's time' but 'how is it structuring their childhood?' That's a big change."

  • Nikki on adolescent development: "Young people, children and teenagers, are like a race car where the brakes are still being built. So the onus of decision making is put on the kid, and that's not okay."

  • Ainid (Access Partnership): "It's not just the provider's fault—it's a shared responsibility. But it's not shared equally. Providers have to do the brunt because they decide what content goes to kids and how it's distributed."

  • Nikki on listening to youth: "Ask the kids. They have amazing stuff to say. Every student I've spoken to draws the same red line: 'We don't like image-based AI stuff where it gets into our creative work. Just stay away.' They know where the red lines are. We just have to listen."

  • Ainid on business model problems: "Unless and until we solve the business model problem, I think this is going to be a challenge. We have failed our kids in the age of social media. We will have the same challenges [in AI]."

  • Nikki on Turing's legacy: "Let us never forget that [AI] began as a game of subtle deception. It was never meant to be overt. If we can call out subtle forms of deception as deception, then we have a better starting point."

  • Libby (NetSafe): "There is no simple fix. We're seeing rapid conversations about regulation ranging from some regulation to outright bans."

  • Closing sentiment: "Technology should not extract value from children; it should empower them."


Speakers & Organizations Mentioned

Panelists (by role/affiliation):

  • Libby – NetSafe (New Zealand-based, 20-year history in online safety; created "Hector's World")
  • Ainid – Vice President, Access Partnership (global AI policy and regulation)
  • Nikki (Nikila Natraan) – Youth and media researcher; conducts surveys and consultations with children/teens across the US (e.g., a statewide New Jersey survey covering hundreds of students)
  • Alexandra (Alexandraka) – Giga (joint initiative of ITU and UNICEF; mission to connect every school globally to internet)
  • Moderator/Facilitator – Kelly (based in UAE; notes the UAE was the second country, after the US, to introduce AI into its school curriculum)

Organizations:

  • AI Asia Pacific Institute – Host of the conversation
  • NetSafe – Founded 20 years ago; online safety organization
  • Access Partnership – Global policy and regulation consultancy
  • Giga (Joint ITU-UNICEF initiative) – School connectivity mission
  • UNICEF – Conducted consultations with 50,000+ children across 180 countries on AI in education
  • UNESCO – Investing in child/educator training on AI use
  • International Baccalaureate (IB) – Referenced for academic integrity frameworks
  • Meta – Cited for €2.7B GDPR/child protection fines by EU
  • Australian Government – Regulatory precedent: age-gating (ban under 16) with 70% public support
  • EU/European Commission – AI Act (enacted February 2024); GDPR enforcement
  • Spain – Proposing an under-16 ban; leading a coalition of six European countries
  • New Zealand Parliament – Bill pending to ban social media for under 16s; NetSafe opposes
  • India – January 3 response to intimate imagery; India AI mission targeting offline populations
  • Singapore – Introduced cyber agency for harm reporting

Notable References:

  • Instagram/Meta platforms – 1 in 3 girls reports negative feelings about her body (per leaked internal research cited by a speaker)

Technical Concepts & Resources

AI Systems & Tools:

  • ChatGPT, Gemini, Claude – Referenced as tools used by students; study cited: 72% of US teenagers have intimate conversations with chatbots
  • Image-based AI – Identified by students as a red line; creative appropriation concerns
  • Deepfakes – Emerging harm; children struggle to assess authenticity
  • Algorithmic recommendation systems – Core mechanism shaping "ambient AI media environment"; personalized based on behavioral patterns
  • Infinite scroll – UI design pattern; identified as harmful feature being targeted in regulations (Spain, EU)

Policy/Regulatory Frameworks:

  • EU AI Act – Enforced February 2024; groundbreaking but results still emerging
  • GDPR (General Data Protection Regulation) – €2.7B Meta fine case study; child protection provisions
  • Age verification systems – Discussed as problematic (data-extraction concerns); digital ID verification seen as more intrusive than age-only checks
  • Safety by design – Regulatory pillar gaining convergence globally
  • Impact assessments – Required before AI deployment in schools
  • Content classification & minor modes – Mentioned as tools (details not fully elaborated)

Concepts & Frameworks:

  • Digital global citizenship – Framework for teaching responsibility across all ages
  • Developmental lens – Applying child development research to tech policy (vs. technology-first approach)
  • Shared responsibility model – Multi-stakeholder framework: platforms, governments, educators, parents, youth
  • Red lines – Student-drawn boundaries on acceptable vs. unacceptable AI use
  • Academic integrity in AI era – Evolving rules; education-specific platforms promoting discussion over direct answers
  • Peer orientation in adolescence – Ages 11–25 heightened orientation to peers over adults; design implication
  • Biometric surveillance concerns – Described as a "PTSD-inducing" lockdown of childhood; raised in the context of militarized zones

Data/Studies Cited:

  • UNICEF children's statement on AI in education – 8 clear asks from 50,000+ children across 180+ countries
  • Australian public polling – 70% support for age-gating regulation; 25% believe it will work
  • NetSafe/Access Partnership study – Three-step approach: preparation, code (change), intervention
  • Instagram body image study (Meta leak) – 1 in 3 girls negative self-perception
  • Statewide New Jersey youth survey – Several hundred student interviews on AI attitudes
  • US chatbot intimacy study – 72% of teenagers have intimate conversations with chatbots
  • Australia as regulatory test bed – Live case study on efficacy of age-gating; kids still accessing platforms (bypass behavior noted)

Methodologies:

  • Youth consultation/co-design – Direct engagement with children/teens in policy development
  • Survey and ethnographic research – Researcher spending time with youth to understand lived experience
  • UNICEF global consultation model – 50,000+ child consultations across 180+ countries
  • Cyber agency model – Singapore as early adopter; enables harm reporting

Document Metadata

  • Type: Conference panel discussion (AI summit)
  • Duration: ~60 minutes, including audience Q&A
  • Date/Venue: Not explicitly stated; references to the EU AI Act (February 2024) and India's January 3 incident suggest a 2024 timeframe; venue details (flowers, frequent India references) suggest South Asia
  • Primary Language: English
  • Transcript Quality: Good (minor repetition/filler words present; colloquial phrasing preserved)

Structural Notes for Further Use

This talk is valuable for:

  • Policymakers: Multi-country regulatory models and enforcement mechanisms
  • Educators: Practical frameworks for AI literacy and academic integrity in schools
  • Parents: Understanding adolescent neuroscience, red flags, and shared responsibility models
  • Researchers: Global landscape of child-centered AI governance; youth voice integration methods
  • Tech industry: Design-by-default expectations, accountability frameworks, shared responsibility implications

The discussion balances optimism (positive steps in regulation, youth engagement) with caution (enforcement gaps, business model inertia, digital divide persistence).