AI DPI Sandbox: Co-Creating the Future

Executive Summary

This panel discussion at the India AI Impact Summit addresses the intersection of Digital Public Infrastructure (DPI) and artificial intelligence, focusing on sandboxes as controlled environments for safe experimentation and governance. The session brings together government leaders, development institutions, and researchers to explore how countries can responsibly integrate AI into foundational systems serving billions of citizens while building institutional trust and managing systemic risks at scale.

Key Takeaways

  1. Sandboxes are governance tools, not just tech tools. They are mechanisms for surfacing tradeoffs, testing safeguards, and generating evidence on rights, inclusion, and accountability before population-scale rollout. They build institutional learning capacity.

  2. Start with problem definition, not technology. Before designing a sandbox, governments must define the actual problem, whose problem it is (citizen-centered vs. internal), and what new harms might be created—not assume AI is the answer.

  3. Trust is the speed limit for digital transformation. Multiple speakers emphasized that citizen trust, civil society confidence, and multi-stakeholder coordination determine how quickly and sustainably AI-DPI can scale. Technical capability is necessary but insufficient.

  4. Context matters more than universal solutions. Effective AI-DPI integration requires adaptation to national data quality, infrastructure maturity, regulatory capacity, and cultural contexts. The "how" of sandboxing must be locally designed despite universal principles.

  5. Investment in regulatory capacity is as important as technology investment. Governments need to build capability to understand, oversee, and learn from AI experimentation. Sandboxes are partly about upskilling regulators alongside innovators.

Key Topics Covered

  • Digital Public Infrastructure (DPI) fundamentals — identity systems, payments, data exchange layers as foundational infrastructure
  • AI integration into DPI — opportunities and risks when AI becomes embedded in population-scale systems
  • Regulatory and operational sandboxes — controlled environments for testing AI solutions before large-scale deployment
  • Trust-building mechanisms — institutional and societal safeguards, transparency, and inclusion
  • Experimentation governance — structured approaches to learning, feedback loops, and iterative policy-making
  • Global DPI sandbox initiatives — mapping of 16 pioneering cases across identity, payments, and data exchange systems
  • Governance challenges — institutional capacity, regulatory uncertainty, data quality, bias, and accountability
  • Multi-stakeholder participation — role of civil society, academia, and community groups in sandbox design
  • Context-specific implementation — adapting AI-DPI solutions to national circumstances and citizen needs
  • Capacity building and institutional learning — developing regulatory expertise and cross-departmental coordination

Key Points & Insights

  1. DPI as infrastructure layer, AI as capability layer: AI is becoming embedded infrastructure within foundational DPI systems (identity, payments, data exchange), meaning risks and benefits scale simultaneously across entire populations. This fundamentally changes governance requirements.

  2. "Why, for whom, how?" framework: Governments must clarify three critical questions before deploying AI-DPI solutions: Why is this needed (what problem is it solving)? For whom is it being built (citizens or internal efficiency)? How does it change institutional relationships and what unintended harms might emerge?

  3. Sandboxes as "laboratories of trust": Effective sandboxes function not merely as technical testing grounds but as mechanisms for building citizen confidence and preventing "silent failures" where technically functional systems quietly cause exclusion or rights violations.

  4. Contextual testing over isolation: AI cannot be tested in a vacuum; it must be tested within the actual governance, infrastructure, and regulatory context where it will operate. This reveals interoperability, performance, and compliance issues invisible in isolated pilots.

  5. Three structural functions of sandboxes (per India's framework):

    • Controlled environment for real-world testing with actual data
    • Risk anticipation and mitigation (identifying algorithmic bias before scale)
    • Iterative governance and feedback loops enabling inclusive solutions

  6. Foundational risks compound with AI: Biases embedded in existing DPI systems (e.g., identity data collection bias) are amplified when AI layers are added, making the challenge foundational rather than purely technical.

  7. Hybrid sandbox models emerging: Evidence shows increasing use of regulatory, operational, and hybrid sandboxes that combine government oversight with private sector innovation and civil society scrutiny.

  8. Capacity and institutional gaps are critical blockers: Regulatory agencies often lack technical literacy about emerging technologies; developers lack understanding of governance implications. Sandboxes must build bidirectional learning.

  9. "Acceptance of failure" as design principle: Experiments may fail; traditional regulatory culture penalizes failure. Successful sandbox ecosystems require explicit acceptance of failure as a learning mechanism.

  10. Institutional learning over one-off pilots: Sandboxes should not be temporary experiments but permanent institutional capabilities—labs for ongoing learning, not isolated proof-of-concepts.


Notable Quotes or Statements

  • Susant Kumar (Moderator, Kalpa Impact): "Digital transformation progresses at the speed of trust. If citizens, governments, civil society do not coordinate and there's no trust, digital transformation could stall or even be a failure."

  • Kavita Bhartya (Chief Operating Officer, India AI Mission): "When AI is embedded into the DPI, the implications—systematic bias, opacity, data governance, risk, accountability—can scale as quickly as benefits. This is why experimentation is not a luxury, it's a governance necessity."

  • Lorraine Gudgeon (Datasphere Initiative, video message): "Sandboxes can be laboratories of trust. When done responsibly, they serve not only as technical testing grounds but as levers for trust by testing solutions and preventing silent failures."

  • Alexander Opruneno (UNDP Asia-Pacific): "When I ask governments 'What is your use case?' I often draw blank faces. There's something missing in the conversation about AI and DPI... The question is: For whom is this being built?"

  • Dr. Nakundi Moses (ICT Commissioner, Tanzania): "[On challenges] Providing digital skills to everyone... digital security and trust... The issue of privacy and consumer protection... These are challenges that we are working on."

  • Adesh Kartka (Joint Secretary, Ministry of Technology, Nepal): "We need to improve regulatory capability and introduce acceptance of failure. These experiments may not work, and regulators need vision that innovation might fail and how to accept it."

  • Moren (Africa Sandboxes Forum, Datasphere Initiative): "Sandboxes create an environment where government is not just experimenting with service providers, but can invite civil society, community groups, academia—building the structured way to invite others to interrogate innovations and build trust."

  • Dr. Vena (open data.ch, Switzerland): "As soon as AI enters infrastructure, AI becomes infrastructure. The central question must be: how are we considering societal considerations, not just moving models to market?"


Speakers & Organizations Mentioned

Government/Policy Leaders:

  • Kavita Bhartya — Scientist G, Ministry of Electronics and Information Technology, India; Chief Operating Officer, India AI Mission
  • Dr. Nakundi Moses — ICT Commissioner, Tanzania
  • Adesh Kartka — Joint Secretary, Ministry of Technology, Nepal

Development & Civil Society:

  • Susant Kumar — Founder & CEO, Kalpa Impact (moderator)
  • Alexander Opruneno — Team Leader, Innovation and Digital, UNDP Asia-Pacific Regional Bureau
  • Moren — Africa Sandboxes Forum Lead, Datasphere Initiative

Research & Standards:

  • Lorraine Gudgeon — Datasphere Initiative (video message)
  • Dr. Vena — Co-CEO, open data.ch; Program Lead, Prototype Fund Switzerland

Institutions & Initiatives:

  • Datasphere Initiative
  • India AI Mission
  • UNDP (United Nations Development Programme)
  • Kalpa Impact
  • open data.ch
  • Africa Sandboxes Forum
  • Ministry of Electronics and Information Technology (India)
  • Tanzania ICT Commission
  • Nepal Ministry of Technology

Technical Concepts & Resources

Core Frameworks & Models:

  • DPI Sandbox definition — initiatives specifically designed to test technologies or governance arrangements within three core layers: identity, payments, data exchange
  • Three functions of sandboxes:
    1. Controlled environment (real data testing)
    2. Risk anticipation and mitigation (bias detection)
    3. Iterative governance (feedback loops)
  • Sandbox taxonomy — regulatory, operational, and hybrid models
  • Three layers of transformation:
    1. Digitization (data creation)
    2. Digitalization (system deployment)
    3. Digital transformation (economy-wide adoption)

DPI Systems Referenced:

  • India: Aadhaar (digital identity), UPI (unified payments), India Stack architecture
  • Tanzania: Jami number (national ID), Jami Bus (interoperability platform), Jami Stack (India Stack equivalent), instant payment system
  • Nepal: National ID system, emerging data access platform
  • Global: EU Digital Identity Wallet (eIDAS), other emerging initiatives

Key Concepts:

  • Digital sovereignty — balancing global market access with national control of digital infrastructure
  • Interoperability — systems' ability to communicate and share data seamlessly (800+ government systems in Tanzania example)
  • Algorithmic bias — systematic errors when AI models inherit biases from training data or foundational DPI systems
  • Silent failures — technically functional systems that quietly exclude or harm marginalized groups
  • Contextual testing — experimentation within actual governance and infrastructure context, not in isolation
  • Institutional learning — permanent organizational capacity to absorb lessons from experimentation
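Concepts like algorithmic bias and silent failures only become actionable once a sandbox defines concrete metrics to watch. As a minimal illustrative sketch (not a method prescribed by the panel; the data, group labels, and 0.8 threshold are hypothetical, the latter borrowed from the common "four-fifths" disparate-impact rule of thumb), a sandbox evaluation might compare approval rates of an AI-assisted identity check across population groups and flag any group served markedly worse than the best-served one:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from a sandbox test run."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-served group's rate -- a rough proxy for a 'silent failure'."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical sandbox output: identity-verification approvals by region.
sample = ([("urban", True)] * 90 + [("urban", False)] * 10
          + [("rural", True)] * 60 + [("rural", False)] * 40)
print(disparate_impact(sample))  # rural flagged: 0.60 vs urban 0.90
```

A check like this is deliberately crude; its value in a sandbox is less the number itself than forcing the question of which groups to measure, which the panel's "for whom?" framing makes central.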

Governance Elements Requiring Testing:

  • Privacy and data protection
  • Transparency and explainability
  • Fairness and inclusion
  • Safety and oversight
  • Decision rights and accountability
  • Cross-border data flows and digital sovereignty

Report & Resources:

  • "Sandboxes for DPI: Co-Creating the Blocks of Digital Trust" — Datasphere Initiative report launched during the session; a global inventory of 16 DPI sandbox cases
  • Global map of 16 DPI sandbox initiatives — mapping reveals prevalence in identity systems, followed by data exchange and payments
  • Key findings from report:
    • Feedback loops and institutional learning characterize successful DPI sandboxes
    • Hybrid sandbox models emerging
    • Balancing opportunities and risks of global markets and digital sovereignty

Actionable Implications

For Governments:

  1. Before designing an AI-DPI sandbox, answer: What problem? Whose problem? What new harms might emerge? Who is most harmed?
  2. Invest equally in regulatory capacity and technical capacity
  3. Treat sandboxes as permanent institutional capabilities, not one-off pilots
  4. Design for multi-stakeholder participation (civil society, academia, community groups) from inception
  5. Establish "acceptance of failure" as explicit principle

For Development Partners (UNDP, World Bank, etc.):

  1. Move beyond "use case" conversations to "outcomes and harm" conversations
  2. Provide structured handholding for sandbox design, execution, and evaluation
  3. Help governments contextualize solutions to local circumstances
  4. Document and share learning across countries

For Researchers & Data Scientists:

  1. Test AI systems within actual governance and infrastructure contexts, not in isolation
  2. Collaborate with civil society and community groups on bias and inclusion assessment
  3. Contribute to risk assessment frameworks and methodologies that complement India's AI governance framework
  4. Focus on foundational issues (inherited biases in DPI) not just model-level issues

For Civil Society:

  1. Engage early in sandbox design and governance
  2. Interrogate whose problem is being solved and who benefits
  3. Monitor for silent failures and rights violations in "technically functional" systems
  4. Build community feedback mechanisms into evaluation