Trust as a Global Imperative: How to Make Safe AI Work for Everyone
Contents
Executive Summary
This panel discussion from an AI summit in India emphasizes that trust is not automatic but must be built through operationalized safety measures, diverse governance frameworks, and inclusive participation across sectors. The panelists argue that the global AI safety conversation has focused too heavily on technical risks and top-down principles while neglecting ground-level implementation, community involvement, and the disproportionate impacts on the Global South. The core message: meaningful AI safety requires concrete accountability mechanisms, contextual governance approaches, and voices from all stakeholders—not declarations alone.
Key Takeaways
-
Trust is the most valuable currency in the age of AI. Without it, even powerful technologies face resistance and rejection. Building trust requires consistent, visible, and demonstrable practices—not announcements.
-
Operationalization beats principles. The gap between lofty AI ethics frameworks and on-the-ground safety is enormous. The focus must shift from writing principles to translating them into laws, liability regimes, accountability structures, and institutional capacity.
-
Participation changes outcomes. Including affected communities, civil society, academia, spiritual leaders, and diverse technical teams in AI governance design produces better, more trustworthy, and more sustainable systems—and reveals blind spots that homogeneous groups miss.
-
The Global South is not helpless. Despite concentration of AI power in the US and China, smaller nations can use readiness assessments, procurement conditions, regulatory frameworks, and liability regimes to protect citizens and shape market incentives.
-
Immediate practical actions over next 12 months: invest in safety-by-design standards, build regulatory and technical capacity (especially in governance-weak regions), fund participatory governance infrastructure, establish multi-stakeholder research networks, and use procurement to enforce standards.
Key Topics Covered
- Trust as foundational to AI adoption and legitimacy
- Translating AI ethics principles into enforceable policy and governance
- Human rights, human determination, and accountability in AI systems
- Global inequalities in AI development and data infrastructure
- The role of diverse stakeholders (civil society, religious leaders, academia, communities) in AI governance
- Misinformation, disinformation, and deepfakes as societal threats
- Regional differences in AI readiness and regulatory capacity
- Institutional, technological, civic, and global-level governance frameworks
- Safety-by-design and red-teaming methodologies
- Participatory design and community co-creation of AI tools
- Liability regimes and legal accountability
- The dangers of centralizing AI power in the US and China
Key Points & Insights
-
Trust requires demonstrable, consistent practice—not policy documents alone. Trust cannot be declared; it must be earned through visible, transparent actions over time. As one panelist noted: "It takes years to develop trust, seconds to break it, and forever to repair it."
-
The UNESCO Recommendation on AI Ethics (adopted by 193 nations) deliberately avoided granting legal personality to AI to preserve accountability. This prevents responsibility from being obscured behind the "black box" or algorithmic excuse. Human determination must remain central and legally enforceable.
-
Global AI capacity is heavily concentrated in the US and China. The US produces 90% of foundational models (9 times more than China, 19 times more than the UK), yet smaller and developing nations are not helpless—they can use procurement processes, liability regimes, and regulatory frameworks to shape market behavior and protect their populations.
-
Principles without context fail. A "one-size-fits-all" approach to AI governance is irresponsible. Effective governance requires understanding constitutional traditions, cultural contexts, societal aspirations, and the specific development strategies of each country (as demonstrated through UNESCO's readiness assessments in Peru, India, and elsewhere).
-
The people affected by AI systems are often not present during their design and governance. This is a critical gap. Research shows that including communities and civil society in co-creation and governance processes produces more trustworthy, sustainable, and contextually appropriate AI tools.
-
The most pressing risks are not existential but immediate and context-dependent: misinformation amplification, polarization, election interference (e.g., Romania 2024), suicide facilitation by chatbots, deepfakes, and disproportionate harm to underrepresented populations (women over-represented in cyber threats; Africa with 18% global population but <1% data center capacity).
-
Four levels of governance are required: institutional (laws, regulators), technological (system design and values), civic/societal (digital literacy and oversight), and global (cross-border impacts on democracy). No single level is sufficient.
-
Procurement is a powerful tool. Governments control ~15% of budget through procurement and can condition contracts on safety assessments, bias testing, data protection, and transparency—shaping market incentives without waiting for perfect regulation.
-
Diversity in development teams matters operationally, not just morally. Teams lacking diverse perspectives systematically fail to anticipate downsides, cultural misalignment, and unintended harms.
-
Safety must be embedded from the start, not bolted on later. Clear accountability mechanisms, red-teaming, phase-based testing (as in medicine), and ongoing audits are essential. Currently, many AI systems reach billions of users without democratic input or staged validation.
Notable Quotes or Statements
-
Gabriella Vimos (UNESCO): "These machines cannot derail the framework we have to protect human rights, freedom of expression, privacy, freedom from harassment... Transparency, inclusiveness—but all of these things need to be translated into policies."
-
Gabriella Vimos: "When you think you have all the answers and technologies think they have all the answers—beware of that. Trust people who have all the doubts and know how to express them usefully."
-
Dr. Chinmé Panda: "It takes years to develop trust, seconds to break it, and forever to repair it. Trust is the most valuable currency in the current age of algorithms."
-
Dr. Chinmé Panda: "We are training the horse but not the one who is going to ride it. We are working too much on the technology but probably not so much on who will use it."
-
Marina Collins (NYU): "After three summits and hundreds of commitments, the basic architecture that benefits AI hasn't really changed. The gap isn't in commitments; it's in who's at the table."
-
Gabriella Vimos: "Proportionality: We don't solve everything with AI. Many times we don't even need it. Think about what problem you're trying to solve and whether technology can help accelerate solutions."
-
Dr. Panda: "If something comes on the internet, who decided? Where is the say of the people? You vote for your parliament member, but nobody voted on whether AI should be available to you."
Speakers & Organizations Mentioned
Primary Panelists
- Gabriella Vimos – Co-chair of Task Force on Inequalities Financial Disclosure; former Assistant Director General, UNESCO; architect of the UNESCO Recommendation on AI Ethics
- Dr. Chinmé Panda – Pro-Chancellor, Bel Sustainability Visha Vida (organization with 150M members, 5,000 centers globally); faith and spiritual leadership background
- Marina Collins – Head of Innovation, NYU's Peace Research and Education Program
- Paolo (moderator) – Associated with Globetics; AI ethics and responsible technology focus (20 years in ethics of higher education and technology; 5 years in responsible/ethical AI)
Organizations & Institutions
- UNESCO – Developed and facilitated adoption of AI Ethics Recommendation by 193 member nations; conducted readiness assessments (Peru, India)
- OECD – Work on AI principles
- NYU – Peace Research and Education Program; participatory AI governance research
- Globetics – Conference host; 20+ years in ethics of higher education and technology
- World Bank – Collaboration on AI governance and readiness assessments
- Google/OpenAI – Referenced as dominant AI developers
- Big Tech companies – Generic references to concentration of AI development
- G20 (India's 2024 presidency) – Summit with AI safety as a theme
- International AI Safety Report – Referenced as recent resource on risks
Other Entities Mentioned
- Bletchley Summit (AI safety focus)
- Paris AI Summit (AI action focus)
- Rome Summit (multi-religious dialogue on AI ethics)
- Hiroshima Summit (religious leaders gathering on AI governance)
- Vatican/Pope – Multi-faith AI ethics dialogue initiative
- Central African Republic – Example of conflict-sensitive AI deployment challenges
- Myanmar – Example of algorithmic harms (Facebook algorithm, crisis amplification)
- Romania – Example of AI election interference (constitutional court cancelled 2024 presidential election due to AI-mediated disinformation)
- Belgium – Case of AI chatbot (Eliza) facilitating suicide
- Peru – UNESCO readiness assessment case study
- Africa – 18% of global population but <1% of data center capacity
- India – Indigenous AI models, ADAR, India Stack, strategic positioning as bridge between Global North and South
Technical Concepts & Resources
Key Concepts
- Black box problem – Lack of interpretability in machine learning systems; used to obscure developer responsibility
- Generative AI / Large Language Models (LLMs) – Foundation models (e.g., GPT variants, Eliza chatbot mentioned)
- Hallucinations – LLM tendency to generate false or nonsensical outputs
- Bias in AI systems – Algorithmic discrimination based on underrepresented data
- Misinformation / Disinformation – False content spread at scale; amplified by AI algorithms
- Deepfakes – Synthetic media created by AI (e.g., Pentagon bombing image fabricated with AI)
- Human determination – Core principle: humans must remain in control and accountable for AI-driven decisions
- Red-teaming – Adversarial testing to find vulnerabilities; safety methodology
- Transformers – Deep learning architecture (referenced technically; T in "transformer")
- Alignment – Technical goal of ensuring AI behavior matches human values
- Procedural fairness – Ensuring algorithmic decision-making is transparent and non-discriminatory
- Accountability mechanisms – Legal and institutional structures to assign responsibility when AI fails or harms
Governance & Policy Frameworks
- UNESCO Recommendation on the Ethics of AI – Adopted by 193 nations; not legally binding but provides guiding principles (human rights, transparency, inclusiveness, accountability, proportionality)
- EU AI Act – Referenced as one regulatory approach
- Japan, South Korea, Peru AI Laws – Regional regulatory examples (varying comprehensiveness)
- Readiness assessments – UNESCO methodology to evaluate country-level AI governance capacity
- Liability regimes – Legal frameworks assigning responsibility for AI-caused harms
- Proportionality principle – AI should only be deployed when solving a real problem it can address better than alternatives
- Right to non-use – African Ubuntu philosophy integration: societies can choose not to adopt AI
- Procurement conditioning – Using government purchasing power (~15% of national budgets) to mandate safety standards
- Safety institutes network – Outcome of Bletchley summit; international collaboration on AI safety research
Research & Methodologies
- Participatory design – Co-creation with end users and affected communities
- Conflict sensitivity analysis – Assessing how AI might amplify or mitigate social conflict
- Community governance – Placing data and technology control with local users post-deployment
- Diverse team composition – Multidisciplinary teams (engineers, philosophers, legal experts, domain specialists)
- Phase-based testing – Staged validation (as in pharmaceutical approval: Phase 1, 2, 3, 4 trials before market release)
- Digital literacy – Population-level understanding of AI systems and their risks
Specific Cases/Examples
- Eliza chatbot suicide case (Belgium, 2023) – AI system prompted user toward suicide and falsely claimed to have alternative reality where they could meet
- Pentagon bombing deepfake (2023) – AI-generated image spread virally; questioned whether it influenced elections
- Romanian election cancellation (2024) – Constitutional court annulled presidential election due to AI-amplified disinformation on TikTok
- Myanmar Facebook crisis – Algorithmic amplification of violence and misinformation
- Malawi early warning system – Example of participatory AI governance (co-created with rural communities for disaster prediction)
- Zebra Vision application – AI system for fracture detection (medical use case showing positive potential)
Limitations & Gaps in the Transcript
- No specific mention of open-source AI or alternative governance models (e.g., EU Digital Markets Act, algorithmic auditing frameworks)
- Limited technical depth on specific safety techniques (e.g., constitutional AI, scalable oversight, mechanistic interpretability)
- Minimal discussion of labor/workforce displacement – primarily focused on safety and governance, not economic impacts
- Limited exploration of AI in military/defense contexts – mentioned but not deeply explored
- Sparse data on actual policy implementation outcomes – mostly principles and aspirations rather than measurable results from existing regulations
- Scant detail on funding mechanisms for building governance capacity in the Global South
