Ethical AI: Keeping Humanity in the Loop While Innovating

Executive Summary

This UNESCO-sponsored panel discussion challenges the false dichotomy between AI innovation and ethics, arguing they are mutually reinforcing rather than opposed. Panelists from industry, government, academia, and policy emphasize that ethical frameworks must be built into AI systems from design inception, supported by global multilateral cooperation, diverse stakeholder participation, and human-centered development that prioritizes collective intelligence over profit-driven AGI narratives.

Key Takeaways

  1. Stop Debating "Innovation vs. Ethics": This framing is false and counterproductive. Instead, ask: "What kinds of AI do we collectively want in society?" and "Which AI use cases are genuinely harmful and should be prohibited outright?"

  2. Build Oversight Early, Not Late: Shifting from post-hoc ethical review to ethics-by-design requires regulatory frameworks, industry standards, and education changes—but the ROI is higher trust, fewer scandals, and better products.

  3. Representation in Development Matters Urgently: Developers from marginalized communities, developing nations, and non-Western intellectual traditions must be included in AI design, not for feel-good diversity but because they identify problems and solutions the dominant group cannot see.

  4. AI Policy Is About Impact Management, Not Just Technology Design: Policy must address the full lifecycle: Why are we building this? Is it the best solution? Who benefits? Who is harmed? How do we evaluate impact? How do we adjust?

  5. Collective Intelligence, Not AGI, Is the Real Goal: Focusing on empowering diverse humans to collaborate—supported by well-designed AI tools—is more sustainable and effective than chasing superintelligent systems that few people understand or control.

Key Topics Covered

  • Innovation vs. Ethics (False Binary): Rejection of the narrative that regulation and ethical constraints hinder innovation; parallels drawn to pharmaceutical regulation
  • UNESCO's Global Framework: The 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence adopted by 193 member states; principles of human rights, dignity, and fundamental freedoms
  • Operationalization of Ethics: Translating principles into practice through "ethics by design" (embedding oversight from initial development stages)
  • Regulatory Approaches: Risk-based regulation (EU AI Act) vs. post-hoc remediation; debate over which use cases should be prohibited entirely
  • Addressing Developer Diversity: The exclusion of voices from developing nations and marginalized communities in AI design; importance of inclusive development teams
  • Human Oversight & Accountability: Keeping humans at the center of decision-making; rejection of delegating ethical accountability to technology
  • Multilateral Cooperation: Global coordination on AI governance, particularly for existential risks and military applications
  • Private Sector Responsibility: How companies like Salesforce integrate ethical controls and accessibility features while maintaining commercial viability
  • Education & Skills: Reframing technical education to include humanities, social sciences, and context-specific problem-solving
  • Alternative Conceptualizations: Moving beyond Western (Cartesian) frameworks; challenging the AGI narrative in favor of collective intelligence models

Key Points & Insights

  1. Ethics and Innovation Reinforce Each Other: When ethical reflection is integrated into system design from inception, AI systems become more trustworthy, respected, and broadly deployed. Regulation does not necessarily hinder innovation; poor design decisions cause more friction.

  2. No "One-Size-Fits-All" Solution: Ethical AI frameworks must be contextualized; overarching principles exist (human oversight, non-discrimination, cultural diversity, environmental sustainability), but implementation varies by region, sector, and use case.

  3. "Ethics by Design" vs. Afterthought: The current industry default of identifying ethical problems post-launch is inefficient and causes harm. Oversight and standards must be built into every development stage, with sandbox testing and iterative evaluation before commercialization.

  4. Humans Remain Accountable, Not Technology: AI systems are not yet sufficiently intelligent to shoulder ethical responsibility. Accountability lies with the humans who design, deploy, and oversee these systems—this cannot be delegated to algorithms.

  5. Risk-Based Regulation Works: The EU's approach identifies high-risk domains (workforce, healthcare, justice, law enforcement) and prohibits certain use cases (predictive policing, emotional recognition in workplaces, manipulative subliminal techniques) without blocking all innovation.

  6. The "Development Luxury" Problem: Developed nations can afford to experiment broadly with AI tools. Developing nations need AI to solve urgent survival problems (malnutrition, farmer suicides, flood preparedness) and cannot afford to waste resources on uncertain ROI experiments.

  7. Inclusive Design = Better Products: The most commercially successful and technically accurate AI systems are those designed inclusively from the outset (accessibility features, multiple languages/accents, diverse use cases). Inclusion is not a cost; it's competitive advantage.

  8. AGI Narrative Obscures Collective Intelligence: The dominance of large-company AGI discourse creates a false impression that superintelligent systems will solve human problems. In reality, collective human intelligence (diverse teams, cross-disciplinary collaboration) is the genuine transformative force.

  9. AI Is Not Neutral: Technology embeds choices, values, data, and cultural assumptions. AI systems reflect the assumptions of their developers; the Cartesian/Western individualistic tradition dominates current AI, limiting alternatives and global applicability.

  10. Information Access = Empowerment: UNESCO's community radio example demonstrates that connectivity and information access are prerequisites for human-centered development. AI tools amplify this—they can transform lives only if paired with awareness, education, and decision-making capacity.


Notable Quotes or Statements

"I don't see personally an issue, a contradiction, between [innovation and ethics]. I see it more between innovation and regulation." — Dr. Tawfik Jelassi, UNESCO ADG

"The biggest challenge we have is: can we, with all the wisdom in this room, say that we will be successful in aligning every single human on this planet to the same ethical values? The answer is no... So the accountability comes back to us." — Debjani Ghosh, NITI Aayog

"Most people developing AI never experienced power cuts [or] broken roads." — paraphrased by Professor Virginia Dignum as a foundational critique

"We need a toolbox. We don't need only hammers." — Professor Virginia Dignum (metaphor for moving beyond narrow AI tool obsession)

"The most inclusively designed technology is going to be the one that's most successful." — Paula Goldman, Salesforce

"AI does not stand for artificial intelligence. AI stands for all inclusive." — Dr. Tawfik Jelassi

"If whatever we do in the field transforms lives, then we are spot on." — Dr. Tawfik Jelassi (on impact-driven AI)

"We need to go much broader in understanding what this AGI is... AGI is called collective intelligence: the moment that we work together, we can do more than each one of us." — Professor Virginia Dignum

"Regulation is usually an afterthought. Oversight has to be built into the entire development process from design to commercialization." — Debjani Ghosh


Speakers & Organizations Mentioned

  • Tim Curtis: Regional Director, UNESCO South Asia
  • Dr. Tawfik Jelassi: Assistant Director General for Communication and Information, UNESCO
  • Professor Virginia Dignum: Director, AI Policy Lab, Umeå University; UNESCO AI Ethics Expert
  • Paula Goldman: Chief Ethical and Humane Use Officer, Salesforce
  • Debjani Ghosh: Distinguished Fellow, NITI Aayog (National Institution for Transforming India)
  • Brando Benifei: Member, European Parliament
  • Dr. Maria Grazia: Chief, Executive Office, UNESCO Social and Human Sciences Sector (Moderator)

Organizations/Bodies:

  • UNESCO (United Nations Educational, Scientific and Cultural Organization)
  • Salesforce
  • NITI Aayog (Government of India think tank)
  • NASSCOM (National Association of Software and Service Companies, India)
  • European Parliament / EU AI Act
  • UNESCO Business Council
  • UNESCO AI Ethics Experts without Borders

Technical Concepts & Resources

Frameworks & Policies

  • UNESCO Recommendation on the Ethics of Artificial Intelligence (2021): Global framework adopted by 193 UNESCO member states; core principles: human oversight, non-discrimination, respect for cultural diversity, environmental sustainability
  • EU AI Act: Risk-based regulatory approach identifying high-risk domains and prohibited use cases
  • AI Impact Commons (impactcommons.global): Online platform documenting AI application stories from 30+ countries, focusing on social and economic development

Conceptual Frameworks

  • Ethics by Design: Integrating ethical oversight into each stage of development (design → development → testing → commercialization)
  • Collective Intelligence: Human intelligence amplified through collaboration and diversity, contrasted with AGI (Artificial General Intelligence) narratives
  • Inclusive Design: Designing AI systems for accessibility, multiple languages, diverse accents, and varied use cases from inception
  • Risk-Based Regulation: Classifying AI systems by risk level and applying differential oversight (e.g., critical for healthcare, justice; prohibited for predictive policing)

Philosophical/Intellectual Traditions Referenced

  • Cartesian Tradition: Western individualistic philosophy (cogito ergo sum: "I think, therefore I am"); critique that current AI reflects this bias
  • Ubuntu Tradition: African philosophy alternative ("we are, therefore I am"; collective rather than individualistic)
  • Play-Doh Metaphor: Critique of AGI-as-"all data in one system" approaches; loss of interpretability and structure

Specific Prohibited AI Use Cases (EU)

  • Predictive policing
  • Emotional recognition in workplaces
  • Emotional recognition in study/educational settings
  • Manipulative subliminal techniques

Measurement/Evaluation Approaches

  • Sandbox testing before commercialization
  • Real-time accessibility verification (e.g., checking code for accessibility compliance as it is written)
  • Impact assessment frameworks (identifying who gains/loses, what is gained/lost)
  • Developer diversity metrics (startup growth tracking in tier-2, tier-3 cities vs. tier-1 centers)

Additional Context

Key Themes Emerging:

  • The Global South perspective is repeatedly emphasized as distinct from Global North debates; developing nations cannot afford "experimentation luxury" and need practical, high-ROI AI solutions
  • Information access and connectivity are presented as prerequisites for ethical, human-centered AI deployment (UNESCO community radio example from Southern Africa)
  • Multistakeholder governance (government, private sector, academia, civil society, international bodies) is positioned as essential; no single actor can address AI ethics alone
  • Education reform is positioned as critical infrastructure—engineers need humanities training; social scientists need technical literacy

Event Context: UNESCO Global AI Summit, sponsored by Government of India; final panel discussion of a week-long summit involving seven working groups on AI impact.