# Empowering Communities in the Age of Advanced AI
## Executive Summary
Speakers at this AI safety summit challenge the false dichotomy between AI safety and development benefits, arguing the two are inseparable. Across research, governance, policy, and implementation, they emphasize that genuine progress in the Global South requires operationalizing safety through community participation, localized governance, and accountability mechanisms—treating safety not as a constraint on innovation but as a prerequisite for it.
## Key Takeaways
- Safety enables access to benefits; it does not constrain them. Framing speed, scale, and safety as competing objectives obscures the truth: unsafely deployed AI forfeits societal trust and regulatory approval, and ultimately causes economic damage. Ask "safety for whom?", not "safety or development?"
- Community participation must be real and consequential. Hollow consultation is worse than no consultation. Meaningful inclusion requires mechanisms for contestation, grievance redress, and actual community control over systems affecting their lives—especially for marginalized populations.
- Western-centric definitions of safety and AI governance will colonize the Global South unless actively resisted. Safety protocols, evaluation benchmarks, and inclusion measures must be co-designed across cultures and contexts. The Global South is not a monolith; India's caste concerns differ fundamentally from Kenya's needs.
- Institutional capacity-building must precede or accompany deployment. Technology adoption must not outpace governance capacity, institutional accountability, and community trust-building.
- Frontier AI models are dual-use technology requiring coordinated international governance. Diplomacy, trade leverage, and international standards bodies are essential tools alongside technical safety research. Isolated nations cannot protect themselves; coordinated pressure on companies and host governments is necessary.
## Key Topics Covered
- AI Safety Beyond Technical Measures: Safety as a social, political, and ethical challenge requiring participation and agency
- Dual-Use Technology Risks: Terrorism, disinformation, deepfakes, and misuse of frontier AI systems
- Global Governance & Coordination: International regulatory frameworks, trade relations, and diplomacy as enforcement mechanisms
- AI for Development: Practical deployments in agriculture, education, healthcare, and public administration across India and the Global South
- Community Agency & Power: Participation, contestability, oversight, and labor considerations in AI systems
- Language & Localization: National language AI, cultural context, and avoiding Western-centric definitions of safety and inclusion
- Implementation at Scale: Trade-offs between speed, scale, and safety; capacity building and trust-building in institutions
- Data Ethics & Labor: Ethical sourcing of training data and recognition of workers building AI systems
## Key Points & Insights
- **Safety is Not a Trade-Off with Benefits**
  - Stuart Russell argues safety enables benefits; lack of safety prevents benefits (e.g., nuclear power, Boeing 737 Max). The framing of "safety vs. development" is fundamentally false.
- **Frontier Models as Dual-Use Technology**
  - Demonstrated risks include jailbroken chatbots aiding terrorist recruitment, convincing deepfakes affecting elections (the Irish presidential election example), and AI-enabled automation of harmful tasks. Current models are "much more capable" than demonstrated, with agents potentially automating harmful workflows.
- **Economic Displacement & the "Flywheel Effect"**
  - Jaan Tallinn warns of a feedback loop: as humans become economically unnecessary, their purchasing power diminishes, reducing their value as consumers. Eventually, the economy decouples from serving human welfare entirely—particularly dangerous in the Global South without protective frameworks.
- **Global South Vulnerability & Agency**
  - Borders and isolation provide no protection; AI development transcends geopolitical boundaries. However, Global South nations retain diplomatic and trade leverage—policymakers must understand frontier AI and exert coordinated pressure on companies and host nations.
- **Agency Extends Beyond Choice**
  - Saray Natarajin's framework: agency = participation (meaningful), contestability (grievance redress), and oversight (actual community control). Agency must account for power dynamics and vulnerability, and extend to data workers and labor.
- **Safety is Contextual & Not Universalizable**
  - Kalika Bali critiques the "global south" framing as a fallacy and warns against translating Western-centric safety definitions. Safety means different things across cultures (e.g., caste discrimination in India vs. race in the West; fetal sex determination bans). One-size-fits-all protocols colonize non-Western contexts.
- **Practical Safety Architecture in Deployments**
  - Nakul Jen's three-principle approach: (a) safety by architecture (no PII leaving devices), (b) humility by default (model abstention when uncertain), (c) staged inclusion (small, representative datasets before jurisdiction-wide rollout).
- **Community Co-Development, Not Top-Down Deployment**
  - Digital Green's Farmer Chat was built with farmers, not for them: continuous feedback, human-in-the-loop reinforcement learning with agronomists, and iterative feature refinement based on actual farmer needs and usage patterns.
- **Institutional & Systemic Change Required**
  - Robert Trager advocates global governance institutions (on the ICAO and FATF models) that enable co-development, set standards, and enforce compliance through trade and diplomatic consequences—not speed-focused deregulation.
- **Safety Must Be Operationalized from Design, Not Patched Post-Hoc**
  - Aditya Gopalan argues current ML practice applies "bandaids" after harm occurs. Foundational safety requires (a) defining unsafe behavior upfront, (b) understanding causal influence in model decisions, and (c) attention to labor and local value systems in data collection.
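Nakul Jen's "humility by default" principle—abstaining rather than forcing a decision when the model is uncertain—can be sketched minimally as follows. The `Advisory` type, the confidence field, the 0.8 threshold, and the messages are illustrative assumptions, not details from the talk.

```python
# Hypothetical sketch of "humility by default": the system returns an answer
# only when model confidence clears a bar; otherwise it abstains and defers
# to a human expert. All names and the threshold are illustrative.

from dataclasses import dataclass


@dataclass
class Advisory:
    answer: str
    confidence: float  # model's self-reported confidence in [0, 1]


ABSTAIN_THRESHOLD = 0.8  # illustrative; would be tuned per deployment


def respond(advisory: Advisory) -> str:
    """Answer only when confident; otherwise abstain rather than force
    a possibly harmful decision on the user."""
    if advisory.confidence >= ABSTAIN_THRESHOLD:
        return advisory.answer
    return "I am not sure. Please consult a human expert."


print(respond(Advisory("Apply 50 kg/ha of urea after first rain.", 0.93)))
print(respond(Advisory("Spray pesticide X at double dose.", 0.41)))
```

The design choice worth noting is that abstention is the default path: the system must earn the right to answer, not earn the right to stay silent.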
## Notable Quotes or Statements
- "There is no such trade-off. We get the benefits only if we have safety." — Stuart Russell, University of California, Berkeley
- "AI doesn't like or hate you, but you're made out of atoms that could be used otherwise." — Jaan Tallinn, Center for the Study of Existential Risk
- "It's not about whether an institution can do AI. It's about what can AI do to seamlessly fit into the processes to enable them to deliver services and rights." — Pika Mishetti, Except Foundation
- "We bring farmers into AI and we don't take just AI to the farmer." — Nadi Masin, Digital Green India
- "Speed for whom? Scale for whom? Safety for whom? All of these questions need to go into the designing of systems." — Pika Mishetti, Except Foundation
- "I would really strongly discourage people to think about global south versus global north. Global south is actually the majority of the world." — Kalika Bali, Microsoft Research India
- "There will need to be iterations when this is done. There will be multiple failures before you start reaching any success." — Nakul Jen, Ardhani AI Global
## Speakers & Organizations Mentioned
| Speaker | Organization/Title | Key Role |
|---|---|---|
| Adam Glee | Fari (co-founder & CEO) | Session organizer; framed false dichotomy between safety and inclusion |
| Jaan Tallinn | Center for the Study of Existential Risk (founder) | Discussed economic displacement and flywheel effect; advocated coordinated slowdown |
| Stuart Russell | UC Berkeley (distinguished professor) | Nuclear/aviation safety analogies; critiqued safety-vs-benefit framing |
| Robert Upadhyay | UNDP (chief digital officer) | Landscape assessments, capacity building, "Trust & Safety Reimagination" program (400+ entries, 17 selected teams) |
| Robert Trager | Oxford Martin AI Governance Initiative (co-director) | Geopolitical analysis; international institutional models (ICAO, FATF) |
| Saray Natarajin | Arty Institute (founder) | Framework: agency as participation, contestability, oversight; power & vulnerability |
| Anirudh Bakshi | Digital India (CEO, Bhashini division); National Language Translation Mission lead | 22→36 language support; 1.4B+ people reached; agricultural/governance use cases |
| Pika Mishetti | Except Foundation (chief of policy & partnerships) | Safety/accountability as individual + systemic; redefined safety vs. speed/scale framing |
| Nadi Masin | Digital Green India (CEO) | Farmer Chat (1M+ users, 45% women); co-design methodology; RLHF with agronomists |
| Nakul Jen | Ardhani AI Global (CEO & MD) | Oral reading fluency; three-principle safety approach (architecture, humility, staged inclusion) |
| Aditya Gopalan | IISc Bangalore (associate professor, Electrical Communications Engineering) | ML practice critique; foundational safety operationalization; labor & data ethics |
| Kalika Bali | Microsoft Research India (principal researcher, speech/NLP) | Critiqued "global south" framing; warned against translating Western safety definitions; caste vs. race example |
Co-hosts: Fari.ai & Park Dashuk (implied international partnership)
## Technical Concepts & Resources
### AI Models & Systems Referenced
- GPT-4.1 — Demonstrated jailbreak for extremist recruitment
- ChatGPT — Las Vegas Cybertruck explosion planning; deepfake generation
- AlphaFold — Protein structure prediction; cited as non-LLM AI contribution to development
- Large Language Models (LLMs) — Critiqued as general-purpose human imitators; not required for most development use cases
- AI Agents — Risk of automated harmful workflows (e.g., mass email/social media posting for extremism)
### Platforms & Initiatives
- National Language Technology Hub — APIs for 22 (→36) Indian languages; speech recognition, text-to-text translation, text-to-speech, OCR
- Farmer Chat (Digital Green) — 1M+ users; voice-to-voice end-to-end advisory system; RLHF with agronomists
- Oral Reading Fluency Solution (Ardhani AI) — Student reading assessment; privacy-by-architecture design
- Trust & Safety Reimagination Program (UNDP) — 400+ entries; 17 selected teams; local implementations (Trustweave, Ushahidi, Silverg Guard/Kenya for misinformation detection)
- AI Kosh — India data asset platform; under National AI Mission
- Hamburg Sustainability Conference Declaration on Responsible AI — Development actor alignment platform
### Data & Evaluation Frameworks
- Reinforcement Learning from Human Feedback (RLHF) — Used for safety-critical systems (Farmer Chat with agronomist verification)
- Landscape Assessments — 20 countries completed, 10 more planned; ecosystem mapping, capacity evaluation
- Representative Datasets — Staged inclusion approach (smaller cohorts before jurisdiction-wide rollout)
- Causal Inference Metrics — Proposed for understanding AI system behavior and influence mechanisms
- Behavioral Safety Definitions — Aditya Gopalan argues these must precede implementation (not post-hoc patching)
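The staged-inclusion idea above—evaluate on small, representative cohorts and widen rollout only when a quality gate passes—can be sketched as a simple gating loop. The stage sizes, the accuracy metric, and the 0.90 threshold are all illustrative assumptions, not figures from the deployments described.

```python
# Hedged sketch of "staged inclusion": each stage must clear a quality gate
# before the rollout widens to the next cohort size. Stage sizes, the metric,
# and the threshold are illustrative assumptions.

STAGES = [100, 1_000, 10_000]  # cohort sizes per stage (illustrative)
MIN_ACCURACY = 0.90            # quality gate before expanding


def staged_rollout(eval_fn) -> int:
    """Return the largest cohort size reached; halt at the first failed gate
    so issues are fixed before a jurisdiction-wide rollout."""
    reached = 0
    for size in STAGES:
        accuracy = eval_fn(size)  # evaluation on a representative cohort
        if accuracy < MIN_ACCURACY:
            break  # stop widening; investigate before proceeding
        reached = size
    return reached


# Example: quality degrades at scale, so rollout halts after 1,000 users.
results = {100: 0.95, 1_000: 0.92, 10_000: 0.85}
print(staged_rollout(lambda n: results[n]))  # → 1000
```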
### Regulatory & Governance Models
- ICAO (International Civil Aviation Organization) — Cited model for global standards, co-development, and enforcement
- FATF (Financial Action Task Force) — Standards-setting + monitoring + consequence mechanisms
- Trade relations & diplomatic leverage — As enforcement tools for AI governance
- Coordinated slowdown — Advocated by Jaan Tallinn and AI company leaders as ideal (though difficult)
### Labor & Social Considerations
- Data labor economy — Recognition that ML training depends on labor; context matters (local norms, ethics)
- Demographic dividend — Global South potential to provide data artifacts; infrastructure gap remains
- PII & privacy protection — Safety-by-architecture: no personally identifiable information leaving devices
- Humility by default — Model abstention when uncertainty exceeds thresholds (vs. forced decision-making)
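The "safety by architecture" point above—no PII leaving the device—implies a local redaction step before any text is transmitted. A minimal sketch follows; the regex patterns are deliberately simplistic assumptions, not a production PII detector.

```python
# Illustrative sketch of "safety by architecture": redact personally
# identifiable information on-device so only redacted text ever reaches
# a server. The patterns below are simple assumptions for demonstration.

import re

# Minimal patterns for a 10-digit phone number and an email address.
PII_PATTERNS = [
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]


def redact_on_device(text: str) -> str:
    """Strip PII locally; the unredacted original never leaves the device."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


print(redact_on_device("Call me at 9876543210 or mail ravi@example.com"))
# → "Call me at [PHONE] or mail [EMAIL]"
```

The key architectural property is where this function runs: on the device itself, so the privacy guarantee does not depend on trusting the server.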
### Measured Outcomes (from Deployments)
- Farmer Chat (Kenya): 70% of users report using advice within 30 days
- Farmer Chat (India): 90% report confidence in advisory
- Agricultural Advisory (Maharashtra): 20M+ farmers using voice-based system in Marathi
- Reading Fluency Assessment: Bias detection across diverse student populations (accents, sentence structures, socioeconomic backgrounds)
Document prepared: Conference talk summary
Format: Structured markdown
Accuracy: Reflects transcript content; no invented claims
