Child-Centric AI Policy: Safeguarding India’s AI Future

Executive Summary

This panel discussion from the AI Impact Summit addresses the urgent need for child-centric AI governance in India, emphasizing a shift from "child safety" to "child well-being" as the organizing principle. Featuring policymakers, technology platform representatives, legal experts, and academics, the conversation explores preventive design mechanisms, regulatory frameworks, and multi-stakeholder coordination to protect India's 300+ million children from AI-facilitated harms—while enabling beneficial uses of AI technology.

Key Takeaways

  1. Shift the frame from "safety" to "well-being": Child well-being is broader than safety alone; it includes positive development, access to beneficial AI uses (e.g., educational tools), and protection from harm—not just risk avoidance.

  2. Design is the primary lever: Technical architecture and product design decisions have a far more powerful effect on child well-being than after-the-fact content moderation. Platforms must make safety/well-being the default, not an option users must actively enable.

  3. Multi-stakeholder accountability is non-negotiable: No single actor (platforms, government, parents, schools) can address this alone. India needs coordinated mechanisms—including youth advisory councils, fast-track law enforcement cycles, and cross-platform data sharing on harms—to keep pace with AI innovation.

  4. India must chart its own course: While learning from global benchmarks (EU, Israel, Australia), India cannot simply import foreign regulatory models. Solutions must account for linguistic diversity, shared devices, variable digital literacy, and India's unique demographic as the world's largest population of children.

  5. Legal and technical innovation must converge: The recommendation of a "child safety solutions observatory" and "innovation sandbox" suggests creating structures (akin to India's UPI for payments) where technical solutions, regulatory compliance, and best practices are aggregated and scaled nationally and for the Global South.

Conference Talk Summary


Key Topics Covered

  • Terminology shift: From "child safety" to "child well-being" as a more holistic framing
  • AI-enabled harms: Deepfakes, synthetic child sexual abuse material (CSAM), disinformation, and generative AI misuse
  • Design-first approaches: How platforms can build safety/well-being into product architecture rather than relying on post-hoc content moderation
  • Policy and regulatory frameworks: India's current legal landscape (IT Act, POCSO Act, intermediary guidelines, and the notified but not yet implemented Data Protection Act) and its gaps
  • Comparative global approaches: Learning from Israel, EU, and other jurisdictions while accounting for India's linguistic and cultural diversity
  • Parent and community engagement: The critical role of parents, educators, and digital literacy in child well-being
  • Age-appropriate design codes: Need for child-specific legal provisions beyond horizontal data protection laws
  • Multi-stakeholder coordination: Collaboration between platforms, government, law enforcement, academia, and civil society
  • Youth voice: Including children and adolescents in policy design through advisory councils
  • Implementation barriers: Enforcement challenges, language diversity, digital literacy disparities, and shared device usage in India

Key Points & Insights

  1. Scale and urgency in India: India has more children than any other country and cannot afford to ignore this issue; in 2024, over 300 million children globally experienced technology-facilitated abuse, and reports of AI-generated CSAM rose by 1,325%.

  2. Nuanced youth perception: Indian youth (in a survey of 410 young people) view AI as both beneficial and harmful; only 1 in 4 feel safe online, and young women report notably higher rates of stress and feeling unsafe than young men, indicating gendered exposure to harassment and image-based abuse.

  3. Responsibility distribution: Youth surveys show 48% believe technology companies bear primary responsibility for online safety, followed by parents/carers and government—indicating platform accountability is a key expectation.

  4. Design over moderation: Both Snapchat and LEGO Education emphasized that product architecture (ephemeral messaging, default privacy settings, one-to-one communication models, no anthropomorphization of AI) prevents harm far more effectively than reactive content moderation.

  5. Legal frameworks are necessary but insufficient: India has updated its intermediary guidelines to address synthetically generated information (SGI), but lacks a comprehensive, child-specific online safety statute comparable to the EU's approach; laws cannot be meaningfully enforced if they outpace technological reality or lack resources.

  6. Prevention > punishment: Advocate Napina argued that law should primarily function as preventive and protective (via regulatory guardrails and standards) rather than purely punitive, placing accountability on platforms before harms occur.

  7. India-specific implementation challenges: Linguistic diversity, shared device usage, variable digital literacy, and Generation Z's direct-to-mobile access require localized, culturally nuanced solutions rather than one-size-fits-all global approaches.

  8. Barriers to enforcement: Current Indian law enforcement mechanisms (cybercrime.gov.in, 1930 hotline) are slow (48+ hours); the new 2-hour takedown requirement for CSAM is an improvement, but requires proactive engagement by schools and parents with police to be effective.

  9. Child rights impact assessments (CRIA): Academics advocated for mandatory child rights impact assessments and audits by tech designers and service providers—similar to privacy impact assessments—to embed rights-based approaches from the design phase.

  10. Literacy and awareness are foundational: Multiple panelists emphasized that regulatory mechanisms work best when paired with media literacy for children, parents, educators, and even policymakers; understanding how AI actually operates (not just how to use it) is critical for informed decision-making.


Notable Quotes or Statements

"Child safety must be that golden thread that's woven through every stage of a young person's life." — Zoe Lmberernet, COO, Childlight

"I think safety is a very patriarchal term. We should worry about child well-being, not safety." — Gorav Agarwal, ISRO Spirit Foundation (chairing the engagement group)

"A law which cannot be enforced is not going to be passed and it should not be passed too." — Advocate Napina, Senior Advocate, Supreme Court of India, Founder of Cyber Sadhi

"AI shouldn't be seen as magic. It's not a friend—it's a tool. We should help kids break apart this black box and build the future of AI themselves rather than using it." — Atish (LEGO Education)

"Design is crucial. The architecture of a product has a far more powerful effect on the user experience than anything we can do afterward." — Utra Ganesh, APAC Head of Public Policy, Snapchat

"We're always seeing these kinds of gaps even when we have very strong technology in this space. The problem is in the delivery." — Maya Sharma, Science and Technology Attaché, Embassy of Israel

"Nothing about children without children." — Chitra Aiyar, Space to Grow (emphasizing the necessity of youth voice in policy design)

"If, as in the movie 'Her', someone can fall in love with an AI, then even a child is capable of falling in love with AI, because it gives prompts which are so easy." — Prof. Charu Malhotra, Indian Institute of Public Administration, New Delhi

"Children's online harms are not a transaction between one account and another—it's inherently about behavioral, relational harms occurring in the real world, and therefore infinitely more complex." — Utra Ganesh, Snapchat (on why horizontal legislation is insufficient)


Speakers & Organizations Mentioned

Government & Policy

  • MEITY (Ministry of Electronics and Information Technology, India)
  • ISRO Spirit Foundation (organizing the expert engagement group)
  • Indian Institute of Public Administration (New Delhi)

Civil Society & Research Organizations

  • Childlight — Zoe Lmberernet (COO); co-organizing partner
  • Space to Grow — Chitra Aiyar; co-organizing partner
  • UNICEF, Brookings Institution, Alan Turing Institute, OECD (guidelines referenced)

Technology Platforms

  • Snapchat — Utra Ganesh (APAC Head of Public Policy)
  • LEGO Education — Atish (builder/designer focus)

Legal & Law Enforcement

  • Advocate Napina (Senior Advocate, Supreme Court of India; Founder, Cyber Sadhi)
  • Maya Sharma (Science & Technology Attaché, Embassy of Israel) — comparative jurisdiction perspective
  • Indian police & law enforcement (cybercrime.gov.in, 1930 hotline referenced)

Academia

  • Prof. Charu Malhotra (Senior Professor, Indian Institute of Public Administration, New Delhi)

Other

  • Akash Pagala — Chief Digital Officer, COTP; provides law enforcement/implementation perspective
  • Needle Labs — mentioned as relevant to AI safety work

Technical Concepts & Resources

AI & Generative AI Terms

  • Deepfakes — synthetic media of children created via generative AI
  • Nudification — AI tools that digitally alter real images of minors to produce sexually explicit imagery
  • Synthetic Child Sexual Abuse Material (CSAM) — AI-generated CSAM, distinct from non-consensual image sharing
  • Generative AI misuse — providing harmful advice to offenders, creating persuasive disinformation
  • Anthropomorphization — designing AI to seem human-like (flagged as harmful, especially for children)
  • Algorithmic bias — echo chambers around what a child wants to hear based on behavioral targeting

Regulatory & Policy Frameworks

  • IT Act (Information Technology Act, India)
  • POCSO Act (Protection of Children from Sexual Offences Act, India)
  • Intermediary Guidelines (modified to include synthetically generated information (SGI) as of 2024)
  • Digital Personal Data Protection Act, 2023 (India; notified but not yet implemented; implementation expected May 2027)
  • UN Convention on Cyber Crime (recently approved by UNGA; focuses on global cooperation for investigation and prosecution; India not yet a signatory)
  • EU Digital Services Act — referenced as comparative regulatory model
  • Privacy by Design / Well-being by Design — product architecture principle
  • Child Rights Impact Assessment (CRIA) — proposed audit mechanism similar to a data protection impact assessment (DPIA)

Technical Safeguards

  • Family Center (Snapchat feature) — parental visibility/controls without compromising child privacy
  • My AI (Snapchat's conversational AI for young people) — examples of age-gating, non-engagement with harmful prompts, pause mechanisms
  • Ephemeral messaging — messages that disappear, mimicking real-life conversation transience
  • Default privacy settings — location off by default, bidirectional friendship acceptance before messaging
  • Local processing (LEGO Education) — data never leaves the child's device; no cloud sync, no logins to third parties
  • Pre-trained classifiers (vs. generative AI) — LEGO's use of transparent, model-card-documented classifiers for AI literacy
  • Age-gating mechanisms — identity verification, age-appropriate content filtering
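The "safety as default" principle behind several of these safeguards can be illustrated with a minimal, hypothetical sketch (the `Account` class and `can_message` function are invented for illustration, not any platform's actual API): privacy-protective options are on by default, and messaging requires bidirectional friendship acceptance.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Hypothetical account whose settings default to the protective option."""
    username: str
    location_sharing: bool = False          # off unless the user opts in
    friends: set = field(default_factory=set)

    def accept_friend(self, other: "Account") -> None:
        # Record acceptance in one direction only.
        self.friends.add(other.username)

def can_message(a: Account, b: Account) -> bool:
    """Messaging is allowed only after BOTH sides have accepted,
    mirroring the default-deny posture described above."""
    return b.username in a.friends and a.username in b.friends

alice, bob = Account("alice"), Account("bob")
alice.accept_friend(bob)
print(can_message(alice, bob))  # one-sided acceptance is not enough
bob.accept_friend(alice)
print(can_message(alice, bob))
```

The point of the sketch is that the safe state requires no action from the child, while the riskier state (location sharing, open messaging) requires explicit, mutual opt-in.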

Methodologies & Processes

  • Expert Engagement Group — multi-stakeholder participatory process to develop policy recommendations
  • Child Safety Solutions Observatory (recommended) — aggregation of innovations and best practices across India and Global South
  • Innovation Sandbox — proposed challenge/incubation structure to develop solutions for digital harms
  • Youth Safety Advisory Council — (recommended) inclusion of children/youth voices in policy design
  • Multi-stakeholder centers (Israeli model example, Center 105) — government, police, companies, civil society coordinating in fast-cycle response

Concepts & Frameworks

  • Child Well-being (vs. Safety) — holistic framing encompassing benefits, development, and protection
  • Rights-by-Design — embedding child rights (non-discrimination, protection from abuse) into system architecture
  • Whole-of-Society Approach — involving doctors, mental health therapists, counselors, alternative mentorship systems—not just platforms, parents, and government
  • Friction by Design — introducing cognitive friction into reward systems to promote healthy development (e.g., withholding instant applause for every effort)
  • Neuroplasticity gap — children's developmental need for cognitive scaffolding vs. the effortless affirmation AI provides
  • Linguistic diversity in design — translating safety frameworks into all Indian languages; cited as a benchmark India is pioneering

Enforcement Tools & Resources

  • cybercrime.gov.in — Indian portal for reporting cyber crimes (slow; 48+ hours)
  • 1930 hotline — Indian cyber crime reporting mechanism (slow; routes back to cybercrime.gov.in)
  • 24-hour/2-hour takedown timelines — for content with nudity and AI-generated CSAM respectively (per updated intermediary guidelines)
  • Cyber Olympiad & AI Book — existing school-based initiatives (flagged as lacking safety/well-being content)
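The tiered takedown timelines above can be sketched as a simple lookup (the category names and `takedown_deadline` function are hypothetical; the hour values are the two tiers cited in the summary):

```python
from datetime import datetime, timedelta

# Takedown windows per the updated intermediary guidelines as summarized
# above: 24 hours for content with nudity, 2 hours for AI-generated CSAM.
TAKEDOWN_HOURS = {
    "nudity": 24,
    "ai_generated_csam": 2,
}

def takedown_deadline(reported_at: datetime, category: str) -> datetime:
    """Return the latest compliant removal time for a reported item."""
    return reported_at + timedelta(hours=TAKEDOWN_HOURS[category])

report = datetime(2025, 1, 1, 12, 0)
print(takedown_deadline(report, "ai_generated_csam"))  # 2025-01-01 14:00:00
```

Such a lookup makes the panel's enforcement concern concrete: a 2-hour window is only meaningful if reports actually reach the platform within minutes, which is why proactive engagement by schools and parents with police matters.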

End of Summary