AI Adoption in the Global South: Trust, Technology & Impact
Executive Summary
This panel discussion explores how trust across the AI value chain—encompassing developers, deployers, regulators, and civil society—is essential to enabling AI adoption globally, particularly in the Global South. The speakers argue that trust requires shared accountability, transparent governance frameworks, localized approaches, and cross-border coordination, rather than fragmented national regulations that slow progress and innovation.
Key Takeaways
- Trust is built through shared, transparent accountability across the entire AI supply chain—not through fortress-like company practices or top-down regulation alone. Every actor (developer, deployer, infrastructure provider, regulator, civil society) must have clear responsibilities.
- Localization is non-negotiable: Models and policies must be tested, benchmarked, and adapted to local languages, cultures, and regulatory contexts. Treating the Global South as a secondary market risks eroding trust and producing poor outcomes.
- Fragmented governance harms innovation and equity: Interoperable frameworks, regional collaboration (e.g., ASEAN), and international coordination on standards allow innovations to be trusted and scaled globally while respecting local sovereignty.
- Outcome-focused governance is more effective than technology-focused regulation: Setting clear bars for customer fairness, market integrity, and fraud prevention allows flexibility in technical implementation and encourages innovation.
- The window for building trust proactively is narrow: The Global South's high initial trust in AI can erode rapidly if safeguards fail or systems cause harm. Proactive investment in governance, capacity-building, and local partnerships now will prevent reactive crises later.
Key Topics Covered
- Trust as a foundation for AI adoption – Why trust, not just safety, is the key enabler for mass adoption in emerging markets
- Shared responsibility across the AI value chain – Defining accountability for developers, deployers, regulators, and civil society
- Governance frameworks and policy approaches – Lessons from Singapore's adaptive regulatory model; principles-based vs. outcome-focused governance
- Developer safeguards and constitutional AI – Technical and non-technical mechanisms (e.g., Claude's constitution, Model Context Protocol)
- Financial services deployment at scale – JP Morgan Chase's approach to trust, auditability, and fairness in real-world AI systems
- Multilingual and multicultural localization – Addressing language and cultural diversity in AI models across Asia and the Global South
- Regional collaboration (ASEAN) – Balancing local sovereignty with cross-border coordination
- Open-source ecosystems and testing – Role of open models, AI Verify Foundation, and community-driven assurance
- Sovereignty vs. fragmentation – Clarifying what "AI sovereignty" means without enabling harmful fragmentation
- Workforce upskilling and institutional capacity – Building human and institutional readiness for safe AI adoption
Key Points & Insights
- Trust as an adoption accelerator: The Global South often exhibits higher initial trust in AI technology than the Global North, but this trust can erode rapidly if systems fail or are misused. Proactive governance and transparency are essential to maintain and justify that trust.
- Shared accountability requires concrete definitions: Moving from abstract "principles" to concrete, legally enforceable accountability is critical. Regulators, developers, and deployers must each have clearly defined roles, and the supply chain—including foundational model builders, fine-tuners, deployers, infrastructure providers, and civil society—must coordinate explicitly.
- Adaptive governance over rigid regulation: Singapore's approach demonstrates that AI regulation must be agile and experimental because the technology evolves faster than traditional legal frameworks (which may remain stable for decades). Guidelines and standards can provide foundational guardrails while leaving room for innovation.
- Localization is a technical and policy requirement, not an optional add-on: India, Southeast Asia, and other regions have thousands of languages and distinct cultural contexts. Models must be tested and benchmarked locally; multilingual support is not an inclusion feature but a functional necessity.
- Open-source models drive trust verification and competition: Open models enable independent testing, community-driven improvement, and competitive innovation. Open-source ecosystems like the AI Verify Foundation create verifiable, transparent testing methodologies that benefit the entire ecosystem.
- Governance should target outcomes, not just technologies: Rather than prescribing how AI systems must be built, effective governance sets outcome-oriented bars (e.g., fair treatment of customers, market integrity, fraud prevention) and allows multiple technical pathways to meet those goals.
- Fragmentation vs. interoperability is a critical policy choice: National or sectoral siloing of AI governance undermines innovation and global deployments. Interoperable frameworks and international coordination are necessary so innovations developed in one region (e.g., India) can be trusted and scaled globally.
- Deployers in regulated sectors (e.g., financial services) anchor trust in concrete practices: Auditability, explainability, adverse action capabilities, and fairness assessments embedded in real-world systems create accountability beyond abstract commitments. These practices must be the model for other sectors.
- Voluntary commitments, technical standards, and regulatory guidance form a layered governance stack: Effective governance combines multi-stakeholder partnerships (e.g., Partnership on AI), technical standard-setting bodies (e.g., Linux Foundation for MCP), and government regulation—each playing a complementary role.
- Capacity-building and institutional readiness in the Global South are critical: Many governments lack the technical expertise to regulate AI effectively. International partnerships, knowledge-sharing, and local workforce upskilling are necessary to prevent a two-tiered world where advanced economies govern AI while the Global South only consumes it.
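The localization point above—that models must be benchmarked per language rather than assumed to transfer—can be illustrated with a minimal scoring sketch. The helper function and sample records below are invented for illustration, not drawn from any real benchmark:

```python
from collections import defaultdict

# Illustrative sketch: aggregate eval results per language so that gaps
# between, say, English and Indic-language performance become visible.
# The sample records are invented, not real benchmark results.

def accuracy_by_language(results):
    """results: iterable of (language, correct: bool) pairs."""
    totals = defaultdict(lambda: [0, 0])  # language -> [correct, total]
    for lang, correct in results:
        totals[lang][0] += int(correct)
        totals[lang][1] += 1
    return {lang: c / n for lang, (c, n) in totals.items()}

sample = [("en", True), ("en", True), ("hi", True), ("hi", False)]
print(accuracy_by_language(sample))  # {'en': 1.0, 'hi': 0.5}
```

A per-language breakdown like this is the simplest form of the localized benchmarking the panel calls for: an aggregate score would hide the gap the breakdown exposes.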
Notable Quotes or Statements
- John (JP Morgan Chase): "The best approach is to govern for outcomes, not just the technology itself. We should set the bars that matter: treating customers fairly, delivering better outcomes, and maintaining market integrity."
- Denise Wong (IMDA, Singapore): "We have to be really humble because we don't know what the answers are. We have to be agile. We have to try and keep up with understanding what the technology is and how it's changing."
- Denise Wong: "Concrete has set the same way for the last 50 years, but AI changes every day. The regulatory approach has to be adaptive to that reality."
- Tara Lions (JP Morgan Chase): "We are operating real systems, serving real customers, and managing real risk every day. Accountability must be shared and explicit... Builders and deployers must work together to optimize and use AI to enable solutions. That is the only way to build [the] confidence required for mass adoption."
- Rebecca Finlay (Partnership on AI): "Without trust you cannot have innovation."
- Dr. Mangul Sarin (AI Safety Asia): "[The] good thing about [the] Global South is that the trust in technology and AI is usually higher than [in] the Global North. But we should not take it for granted, because [if] something happen[s], trust can be eroded very quickly."
- Dr. Mangul Sarin: "Not every country needs to build the frontier model... Sovereignty is about control—your procurement checklist, monitoring mechanisms, evaluation, and enforcement of accountability and transparency across value chains."
- Ria Straer Galvvis (Anthropic): "As we saw this uptake [of Model Context Protocol], we ensured that this was completely open source, and actually just a few weeks ago we handed off ownership of MCP to the Linux Foundation, ensuring that it was useful for everyone and not just one company."
Speakers & Organizations Mentioned
Individual Speakers:
- John (Opening remarks, JP Morgan Chase)
- Rebecca Finlay – CEO, Partnership on AI; moderator
- Denise Wong – Assistant Chief Executive, Data Innovation and Protection, Infocomm Media Development Authority (IMDA); Ministry of Digital Development and Information, Singapore
- Tara Lions – Global Head of AI Policy, JP Morgan Chase
- Ria Straer Galvvis – International Policy, Special Projects Lead, Anthropic
- Dr. Mangul Sarin – Co-founder and Executive Director, AI Safety Asia
Organizations & Institutions:
- JP Morgan Chase – Global financial services deployer; 400+ AI use cases in production
- Partnership on AI – Global nonprofit; 140 partners in 18 countries; hosts multistakeholder collaboration on responsible AI and upcoming reports on assurance ecosystems
- Infocomm Media Development Authority (IMDA) – Singapore regulatory body
- Ministry of Digital Development and Information – Singapore
- Anthropic – AI model developer; recently opened office in Bangalore, India
- AI Safety Asia – Governance ecosystem builder in Southeast Asia
- ASEAN (Association of Southeast Asian Nations) – 11-member regional group leading AI governance working group
- AI Verify Foundation – Singapore-led open-source foundation for AI testing and governance
- The Linux Foundation – Now steward of Model Context Protocol (MCP)
Countries/Regions Referenced:
- Singapore (AI governance leader)
- India (major AI hub and talent center; second-largest market for Anthropic's Claude)
- ASEAN nations (Cambodia and 10 others)
- Brazil (via Fernando from Internet Lab think tank)
- Japan (Anthropic office)
- Europe (Anthropic offices)
- Global South (primary focus)
- United States (Bay Area, Silicon Valley)
Technical Concepts & Resources
- Claude's Constitution – Anthropic's living document of guidelines and explanations (4 categories) that inform Claude's responses on safety, ethics, helpfulness, and regulatory compliance (jailbreaking, cybersecurity, mental health)
- Constitutional AI – Approach used by Anthropic to embed values and guidelines directly into model training and decision-making
- Model Context Protocol (MCP) – Open technical standard for connecting chatbots to tools and data securely; released by Anthropic in 2024; now maintained by the Linux Foundation; adopted industry-wide (ChatGPT, Claude, etc.)
- AI Verify Foundation – Open-source testing and governance framework (Singapore-based, global membership) with:
  - Frameworks and tools
  - Benchmarks for real-world use cases
  - AI Assurance Sandbox (matches deployers with global testers; generates transparent testing reports)
  - Community of practice for testing methodologies
- Model Governance Framework – Singapore's adaptive regulatory framework (initially 2019; updated for GenAI; extended to agentic AI)
- Agentic AI – AI systems with autonomous agency to take actions; represents evolving complexity requiring updated governance frameworks
- AI Supply Chain / Value Chain – Encompasses foundational model developers, fine-tuners, deployers, cloud/infrastructure providers, tooling providers, and civil society; requires coordinated accountability
- Explainability & Auditability – Technical safeguards enabling audit trails, adverse action notification (e.g., credit denial reasons), and fairness assessments in regulated sectors
- Multilingual & Multicultural Benchmarking – Testing models on Indic languages, local agricultural use cases, and region-specific risks to ensure trust and performance across diverse populations
- Outcome-Focused Governance – Regulatory approach setting goals (fairness, market integrity, fraud prevention) rather than prescribing technical solutions
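To make the MCP entry above concrete: the protocol's messages are JSON-RPC 2.0 objects, and method names such as `tools/list` and `tools/call` come from the published MCP specification. The helper function, tool name, and arguments in this sketch are invented for illustration:

```python
import json

# Minimal sketch of the JSON-RPC 2.0 message shape MCP clients send.
# "tools/list" and "tools/call" are real MCP method names; the
# "get_weather" tool and its arguments are hypothetical examples.

def make_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request object."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Ask an MCP server which tools it exposes.
list_tools = make_request(1, "tools/list")

# Invoke one tool with arguments (tool name is illustrative).
call_tool = make_request(2, "tools/call", {
    "name": "get_weather",
    "arguments": {"city": "Singapore"},
})

print(json.dumps(call_tool, indent=2))
```

Because every message is plain JSON-RPC, any client or server that speaks the spec can interoperate—which is what makes stewardship by a neutral body like the Linux Foundation meaningful for the trust argument the panel makes.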
Additional Context & Framing
This session was held at an AI summit in India and reflects a significant moment in global AI governance: the shift from theoretical principles to practical, deployable frameworks. The emphasis on the Global South and Asia is deliberate—Asia represents ~60% of the world's population and will be the stress-test for all AI systems developed elsewhere. The panel articulates a unified message that trust is not a luxury but an economic accelerator for emerging markets, and that fragmentation—not innovation—is the real threat to equitable, global AI adoption.
