Information Integrity as Infrastructure: Empowering Youth in the AI Age
Executive Summary
This panel discusses information integrity as a foundational infrastructure for trust in digital ecosystems, particularly emphasizing the urgent need to protect youth who are growing up in environments saturated with misinformation, disinformation, and synthetic content. Speakers from international organizations (OECD, UNESCO, IEEE, Mastercard, and others) stress that no single actor can address these challenges alone—rather, coordinated governance frameworks, technical standards, policy harmonization, and inclusive global partnerships are essential to build trustworthy AI systems that serve humanity's interests.
Key Takeaways
- Information integrity is infrastructure, not just content moderation — It requires multi-layered technical standards, policy harmonization, government accountability, and corporate liability frameworks working in concert.
- Time is the critical constraint — Children are growing up now in compromised information ecosystems; waiting five years for perfect research or policy consensus is not an option. Experimentation, iteration, and rapid scaling of proven solutions are urgent.
- Global cooperation without one-size-fits-all solutions — Organizations like IEEE, OECD, UNESCO, and AI Commons are building collaborative frameworks that allow diverse regions and cultures to develop contextually appropriate solutions while maintaining interoperability and shared standards.
- Policy and technology must be synchronized — Standards, regulations, and technical deployment must be explicitly coordinated; treating them as parallel processes ensures continued misalignment and vulnerability.
- Trust at scale requires embedding accountability at every layer — From digital signatures on financial transactions to watermarking on synthetic media to age-appropriate design in platforms, trust must be a deliberate architectural choice, not an afterthought.
Key Topics Covered
- Information integrity as critical infrastructure — foundational to democracy, decision-making, and societal trust
- Youth vulnerability and protection — generative AI's unique threats to children and young people lacking critical evaluation skills
- Misinformation, disinformation, and synthetic content — deep fakes, manipulated data, and AI-generated falsehoods at scale
- Policy-technology synchronization gap — misalignment between rapid AI deployment and slower regulatory frameworks
- Global trust challenge initiative — a coalition-based crowdsourced approach to developing scalable solutions
- Technical standards and infrastructure — content authentication, digital signatures, age-appropriate design, and provenance tracking
- Inclusive governance and partnerships — cross-disciplinary, multi-stakeholder collaboration across global north and south
- Digital divide and regional disparities — different challenges and capabilities in developing vs. developed nations
- Education and media literacy — building cognitive and emotional skills to navigate AI-driven information ecosystems
- India's digital infrastructure model — PKI, digital identity (Aadhaar), GST signatures, and UPI as trust-layer examples
Key Points & Insights
- The misalignment problem is systemic: Regulators work on principle-based frameworks and standards (EU AI Act, NIST frameworks) while private companies deploy AI at accelerating speeds, creating a "two-speed" dynamic that leaves policy perpetually behind.
- Youth are "AI natives" at risk: Children lack the cognitive and emotional tools to distinguish real from synthetic content, assess credibility, or maintain agency when systems are designed for engagement and profit—not their wellbeing.
- Solutions already exist but are not scaled: Content authentication tools, digital watermarking, age verification, and provenance tracking are technically mature but available only to large companies; they need to become interoperable, scalable, and globally accessible.
- Trust is a societal-level cybersecurity problem: Just as cybersecurity protects systems from breaches, information integrity must protect democracies and citizens from manipulation. The infrastructure, governance mechanisms, and liability frameworks required are analogous.
- No single approach works globally: Solutions cannot be translated directly from the global north to the global south due to different infrastructure, digital divides, cultural contexts, and trust relationships. Each region requires customized, locally-rooted interventions.
- Voluntary corporate governance is insufficient: Relying on market self-correction or voluntary principles to manage misinformation, filter bubbles, and manipulation is inadequate; core governance requires government infrastructure, legal liability, and accountability mechanisms.
- Youth must be part of the solution, not just the problem: Young people have experiential expertise in navigating AI systems and digital risks; including them in policy design and solution development is not tokenism—it is necessity.
- The convergence of technologies amplifies risk: Generative AI combined with neurological data, emotional personalization, and anthropomorphized AI companions creates unprecedented psychological vulnerabilities, especially for children.
- India's trust-layer approach is a replicable model: Building foundational infrastructure (PKI, digital signatures, verified identity) before deployment, then scaling to billions of transactions (GST, UPI) demonstrates how trust can be embedded at the infrastructure level.
- The Global Trust Challenge represents a shift from principles to prototypes: Moving from abstract frameworks to concrete pilot solutions that can be tested, proven, and replicated—emphasizing that "proof is in the pudding" when a solution moves from proposal to prototype to pilot stage.
Notable Quotes or Statements
- Al Peshaw (IEEE): "No single actor can address these [challenges] alone... Sovereignty is not about closing yourself off to everyone. Sovereignty is the ability to make your own choices."
- Gabriella Ramos (Task Force on Inequalities): "This is not a content problem. It's a system problem... we need to address it with the government's infrastructure."
- Gabriella Ramos: "There is nothing more dangerous than taking something for granted. This applies to information, to institutions, to values, to human rights and democratic values."
- Amir Karim Bonafatami (AI Commons / Cognizant): "The purpose is to sync [policy and innovation]... we never had this opportunity to solve this problem together."
- Karen Peret (OECD): "No single actor can address these [governance challenges] alone... we need to all work together... through collaborative cross-disciplinary efforts that combine policy innovation with technical ingenuity."
- Maria Grazia (UNESCO): "Youth is not a problem or a category to be dealt with but as part of the conversation and part of the solution."
- Al Peshaw: "A standard is good when nobody's happy... you have a room full of people that want to kill each other, and yet at the end they all come to a conclusion they can live with."
- Jill Fayad (MBRSG): "85% of people who read something don't think the information is accurate. Yet 8% will go and check... We breathe trust."
- Tanya Pearl Mutter (Foundation Ambiona): "Trust is a fundamental human rights issue and we have to find solutions together."
Speakers & Organizations Mentioned
Panelists
- Al Peshaw — IEEE (Institute of Electrical and Electronics Engineers), a public charity with 500,000+ technical engineers across 190 countries
- Gabriella Ramos — Task Force on Inequalities and Social-Related Financial Disclosure (formerly UNESCO Assistant Director General)
- Amir Karim Bonafatami — AI Commons and Cognizant
- Karen Peret — OECD (Organisation for Economic Co-operation and Development), Division on AI and Emerging Digital Technology
- Maria Grazia — UNESCO (United Nations Educational, Scientific and Cultural Organization), formerly led AI ethics recommendations
- Muhammad Miss Bahurin — C-DAC (Centre for Development of Advanced Computing), Bangalore; Government of India cybersecurity & AI ethics perspective
- Yuko Harayama — GPI Tokyo Expert Center; formerly OECD
- Jill Fayad — Fellow, MBRSG (Mohammed Bin Rashid School of Government)
- Tanya Pearl Mutter — Foundation Ambiona; moderator for second panel
- Ui Stewart — Mastercard, Center for Inclusive Growth
- Moira Patterson — IEEE Standards Association; session moderator
Key Organizations & Initiatives
- IEEE (Institute of Electrical and Electronics Engineers) — Advancing standards for age-appropriate design, AI ethics, sociotechnical elements; 250+ AI initiatives
- OECD — Developing governance frameworks, G7 Hiroshima AI Process, reporting frameworks on information integrity
- UNESCO — UNESCO Recommendation on the Ethics of Artificial Intelligence (2021, adopted globally); work on misinformation, synthetic content, youth engagement as researchers
- AI Commons — International organization for AI governance cooperation
- C-DAC (Centre for Development of Advanced Computing) — Government of India's centre for AI ethics and trustworthiness; overseeing the India AI Mission
- Mastercard — Global payments ecosystem operating across 200 countries; fraud prevention and inclusive growth initiatives
- EU, UK, Australia, France, Spain — Referenced for AI governance, age restrictions (no smartphones under 16), and policy innovation
- Global Trust Challenge — Coalition initiative calling for crowdsourced solutions on information integrity (website: globaltrustchallenge.ai)
Technical Concepts & Resources
Standards & Frameworks
- EU AI Act — Regulatory framework for high-risk AI systems
- NIST AI Risk Framework — U.S. framework for managing AI risks
- UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) — Global normative instrument covering misinformation, synthetic content, youth protection
- G7 Hiroshima AI Process (2023) — International cooperation mechanism on AI governance; reporting framework includes content authentication and provenance
Technical Solutions & Infrastructure
- Content authentication and provenance tools — Early-stage, mostly used by large companies; need scalability and interoperability
- Digital watermarking — Marking synthetic/generated content to distinguish from authentic material
- Age verification systems — Technical mechanisms to ensure age-appropriate access
- Public Key Infrastructure (PKI) — Trust layer used in India's digital infrastructure for signatures, authenticity
- Digital signatures (electronic signatures) — PKI-based signatures that make documents tamper-evident; used in India's GST filings, with millions of signatures issued daily
- Unified Payment Interface (UPI) — India's payment system processing billions of authenticated transactions monthly; demonstrates trust at population scale
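The tamper-evidence that PKI-backed digital signatures provide can be sketched in a few lines. This is a minimal illustration only: it uses an HMAC from Python's standard library as a stand-in for an asymmetric signature (a real deployment such as India's GST filing uses certificates issued under a certifying authority), and the key and document contents are hypothetical. The verification principle is the same in both cases: any change to the signed bytes invalidates the signature.

```python
import hashlib
import hmac

# Hypothetical signing key for illustration; a real PKI system would use an
# asymmetric private key held by the signer, with the public key published
# via a certifying authority.
SECRET_KEY = b"demo-signing-key"

def sign(document: bytes) -> str:
    """Return a hex signature binding the key to the document's exact bytes."""
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, signature: str) -> bool:
    """True only if the document is byte-for-byte unchanged since signing."""
    return hmac.compare_digest(sign(document), signature)

original = b"GST filing: tax due = 1000"
sig = sign(original)

assert verify(original, sig)                         # untouched document passes
assert not verify(b"GST filing: tax due = 10", sig)  # any tampering is detected
```

The same check underlies "non-tamperable" filings at scale: the signature travels with the document, so integrity can be verified by any party without trusting the channel it arrived through.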
India-Specific Infrastructure Models
- Aadhaar — World's largest unique identity database, covering India's population of 1.46 billion; verified digital identity as a foundation for trustworthiness
- Information Technology Act (2000, amended 2008) — Enacts integrity and authenticity as core trust components
- Office of Controller of Certifying Authorities (CCA) — Highest root trust body for cybersecurity in India
- Goods and Services Tax (GST) digital signing — Monthly filings digitally signed for integrity; documents are tamper-evident, so any alteration is detectable
- DNS Security Centre of Excellence (C-DAC) — Processes 30-35 million queries daily; AI-enabled with PKI, blocking 3-4 million malicious domains
- India AI Mission / Safe & Trusted AI Pillar — Government initiative including "Youth AI" education (grades 8-12) with age-appropriate, trusted design
AI Models & Tools Referenced
- ChatGPT — Used in scientific conferences for paper writing; example of citation fabrication risks
- Claude — Alternative generative AI mentioned
- Generative AI broadly — Risks include synthetic content (text, images, video, data), filter bubbles, emotional manipulation, hallucination
Research & Evidence Gaps
- 80% of research on global north vs. global south — Highlighting regional disparity in understanding information integrity challenges
- Lack of long-term studies on AI's impact on child development — Emotional, cognitive, agency development unknown; requires urgent collaborative research
- Missing behavioral/psychological evidence — On synthetic content effects, human-AI interactions, anthropomorphization risks
Key Terminology
- Synthetic content — AI-generated or manipulated text, images, video, data
- Deep fakes — Manipulated media using generative AI
- Filter bubbles — Algorithmic personalization trapping users in narrow information ecosystems
- Information integrity — Whether information is real, authentic, valid, and trustworthy
- Misinformation vs. disinformation — Unintentional vs. intentional false/misleading information
- Proof of concept to pilot scaling — Moving from proposal → prototype → pilot to validate and replicate solutions
Document Type: Conference Panel Discussion
Format: Multi-speaker policy/technical forum
Audience: International AI and governance stakeholders
Primary Call-to-Action: Join the Global Trust Challenge; partner with coalition; submit solutions; hold governments and companies accountable
