Building Trustworthy AI in Digital Public Infrastructure
Executive Summary
This session convened high-level government officials and civil society experts to address how AI can be safely deployed in digital public infrastructure (DPI) systems while protecting human rights, maintaining transparency, and building public trust. The core consensus is that technology must serve people, not the other way around, and that responsible AI governance requires multistakeholder collaboration anchored in international human rights law, algorithmic transparency, and meaningful human oversight—particularly critical as many Global South nations rapidly scale DPI systems.
Key Takeaways
- AI in Public Services is Fundamentally a Governance Challenge, Not Just a Technical One — Success requires clear legal frameworks, human rights grounding, civil society participation, and institutional capacity—not algorithmic sophistication alone.
- Rights-Respecting AI is a Practical Necessity for Public Trust — Transparency, accountability, inclusive design, and error-correction mechanisms are not luxury add-ons; they are foundational to legitimacy, especially as DPI scales rapidly in Global South contexts.
- Learn from Global South Innovation and Failures — Countries in Latin America, Asia, and Africa are demonstrating new models of resilience and inclusion. Their experiences (both successes and cautionary tales like the Dutch welfare audit system) should inform global AI governance—the Global South is not a testing ground but a source of leadership.
- Explainability and Auditability Require Multistakeholder Problem-Solving — Neither government nor the private sector alone can solve the challenge of making complex AI systems auditable and explainable to diverse stakeholders. Sustained collaboration, capacity building, and shared learning across borders are necessary.
- Multilateral Coordination Prevents a Fragmented Global Landscape — Platforms like the Freedom Online Coalition create "safe spaces" for like-minded democracies to align principles, share effective practices, and coordinate standards—preventing a race to the bottom or a patchwork of incompatible regulations.
Key Topics Covered
- Algorithmic Transparency & Explainability — The need for AI systems in public services to be understandable and auditable by users, civil society, and oversight bodies
- Digital Public Infrastructure (DPI) — How AI is embedded in government systems (education, healthcare, justice, welfare, digital identity) and the governance frameworks needed
- Human Rights & Rights-Respecting AI — International human rights law as the foundation for AI governance, including privacy, non-discrimination, and freedom of expression
- Accountability & Redress Mechanisms — Grievance resolution, impact assessments, and remedies for algorithmic harms in public systems
- Multistakeholder Governance — The necessity of collaboration between governments, civil society, academia, private sector, and affected communities
- Cyber Security & Digital Resilience — Building secure, interoperable infrastructure resilient to threats
- Global South Perspectives — Lessons from Latin America, Asia, and Africa on implementing rights-respecting DPI under resource constraints
- Procurement & System Design — Due process in government procurement, participatory design, and community involvement before deployment
- AI Literacy & Public Understanding — Educating citizens and public servants to understand and govern AI systems responsibly
- International Coordination & Standards — Role of multilateral bodies (Freedom Online Coalition, Council of Europe, UN) in setting consistent safeguards
Key Points & Insights
- Technology is Not Neutral — As Pratik Wagger noted, technology "takes on the shape of the system in which it's deployed." Historical discrimination, power asymmetries, regulatory capture, and political-economy issues embedded in institutions will be reflected in AI systems unless consciously addressed.
- Pre-Deployment Assessment is Essential — Before deploying AI in DPI, governments must clarify: (a) whether they are solving a real problem or papering over institutional cracks; (b) the local and regional context and capacity; (c) whether regulatory frameworks, data protection regimes, and redress mechanisms exist; and (d) whether mandatory algorithmic impact assessments and human rights impact assessments have been conducted.
- Grievance Redress Mechanisms are Critical — Real-world failures (e.g., elderly welfare recipients marked "deceased" and trapped in bureaucratic loops) demonstrate that systems will make mistakes. Functional, accessible grievance mechanisms are not optional but foundational to public trust and legitimacy.
- Transparency Must Extend Beyond Algorithms to Data Infrastructure — Latin American research showed that focusing only on algorithmic transparency is insufficient without transparency into the underlying data systems and the political decisions driving deployment. Enforcement of transparency regulations remains a major gap.
- Exclusion Often Happens Before Algorithms — Discrimination is frequently rooted in political decisions made before technical systems are built, reflecting a lack of involvement by affected communities. Civil society and affected-community participation in design (not just deployment) is essential to prevent embedded exclusion.
- Public Trust is Built Through Clarity and Responsiveness, Not Perfection — Estonia's experience shows that trust emerges when: (a) governments are open about when, why, and how AI is used; (b) systems allow for explanation and review; and (c) mechanisms exist to correct errors. Trust is fragile when systems grow quickly without these safeguards.
- Balancing Innovation and Responsibility Requires Bold AND Responsible Approaches — Google's framing ("bold and responsible together") emphasizes that speed and safety are not opposites: responsible development includes transparent governance, transparent user-facing design (e.g., disclaiming potential hallucinations in Gemini), and ongoing post-deployment monitoring alongside civil society feedback.
- International Cooperation Prevents Fragmentation and Builds Capacity — Many Global South countries lack resources to develop safeguards independently and may rely on external vendors. Multilateral platforms like the Freedom Online Coalition enable knowledge-sharing, prevent a "patchwork" of incompatible standards, and support capacity-building in low-resource contexts.
- Multilateral Frameworks Translate Principles into Practice — The Freedom Online Coalition's 2025 DPI Principles and 2024 Joint Statement on Responsible Government AI Practices translate abstract ideals into operational guidance for member states on transparency, accountability, inclusive design, and human rights impact assessments.
- Explainability & Auditability Remain Technically and Institutionally Challenging — Despite consensus that these are necessary, real barriers exist: varying stakeholder technical expertise, difficulty explaining complex models to diverse audiences, and the gap between what is technically possible in a lab versus what is deployable and understandable at population scale. Collaboration across sectors is needed to develop practical solutions.
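The pre-deployment checks summarized above (problem definition, context, legal prerequisites, and impact assessments) can be pictured as a simple go/no-go gate. The sketch below is purely illustrative: the field names and the all-or-nothing rule are assumptions for exposition, not criteria taken from the FOC documents or any national framework.

```python
from dataclasses import dataclass

@dataclass
class PreDeploymentAssessment:
    """Illustrative checklist gating an AI deployment in public services.

    All field names are hypothetical; real frameworks define their own
    criteria and review processes.
    """
    solves_identified_problem: bool = False       # (a) real problem, not papering over cracks
    local_context_assessed: bool = False          # (b) local/regional capacity reviewed
    legal_framework_in_place: bool = False        # (c) data protection and redress exist
    algorithmic_impact_assessment: bool = False   # (d) AIA completed
    human_rights_impact_assessment: bool = False  # (d) HRIA completed

    def missing(self) -> list:
        """Names of checks that have not yet been satisfied."""
        return [name for name, done in vars(self).items() if not done]

    def ready_to_deploy(self) -> bool:
        """Deployment proceeds only when every check passes."""
        return not self.missing()

assessment = PreDeploymentAssessment(
    solves_identified_problem=True,
    local_context_assessed=True,
    legal_framework_in_place=True,
    algorithmic_impact_assessment=True,
)
print(assessment.ready_to_deploy())  # False: the HRIA is still outstanding
print(assessment.missing())          # ['human_rights_impact_assessment']
```

The point of the gate shape is that a single unmet prerequisite (here, the human rights impact assessment) blocks deployment, mirroring the panel's view that these checks are preconditions rather than post-hoc paperwork.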
Notable Quotes or Statements
"Technology must serve people, not the other way around." — Recurring theme across multiple speakers (President Alar Karis, Estonia; Bernard Mesén, Switzerland; others)
"Algorithmic transparency and responsible governance are not optional additions to digital public infrastructure. They are essential conditions for its legitimacy." — President Alar Karis, Estonia
"Trust is not built through perfection, but through clarity, responsiveness, and the ability to correct cause. People need to know when AI is being used, what is its purpose and where responsibility ultimately lies." — President Alar Karis, Estonia
"We see digital public infrastructure more than systems and code. We see it as social contract technology—infrastructure that must be transparent, contestable and governed in the public interest." — Ambassador Harry Boise, Netherlands
"Technology is neither good nor bad nor neutral—it takes on the shape of the system in which it's deployed. If you have historical patterns of discrimination, limited state capacity, or power asymmetries, that's something to keep in mind." — Pratik Wagger, Tech Global Institute
"Exclusion often doesn't happen within the algorithmic systems; it happens before, in political decisions and the lack of involvement of affected communities in how systems are designed." — Juan Carlos Lara, Derechos Digitales (Latin America)
"We talk about being bold and responsible together—bold in our innovations, responsible in development and deployment, and together because we need to engage with government, civil society, academia, and researchers." — Alex Walden, Google
"The Freedom Online Coalition is a safe space for like-minded governments to talk about how to bring our principles and values into the modern world of technology." — Norman Schultz, Germany
Speakers & Organizations Mentioned
High-Level Government Speakers:
- President Alar Karis — Republic of Estonia (former director, University of Tartu; molecular geneticist)
- Bernard Mesén — State Secretary, Federal Office of Communications (Ofcom), Switzerland; Current Chair, Freedom Online Coalition (2026)
- Taras Balis — Vice Minister of Foreign Affairs, Lithuania; Governor, International Atomic Energy Agency Board
- Ambassador Harry Boise — Ambassador at Large for AI, Special Envoy on AI, Kingdom of the Netherlands
Panel Moderator:
- Zach Lampel — Senior Legal Adviser and Coordinator, Rights Programming, National Assembly; coordinates work on freedom of expression and right to privacy
Panelists:
- Pratik Wagger — Head of Programs and Partnerships, Tech Global Institute
- Juan Carlos Lara — Executive Director, Derechos Digitales (Latin America)
- Norman Schultz — Deputy Head of Unit, AI and Digital Technologies, Foreign Policy Office, Federal Foreign Office of Germany
- Alex Walden — Global Head of Human Rights, Google
Organizations & Bodies Referenced:
- Freedom Online Coalition — Partnership of 41–42 governments (2026 chair: Switzerland) committed to protecting digital rights and freedoms online
- Task Force on AI and Human Rights — Within FOC; brings together states, civil society, academics, industry
- Council of Europe — Developed Framework Convention on AI (Vilnius Convention), world's first legally binding treaty on AI
- EU — Referenced for cyber security capacity and digital public service maturity standards
- OECD, UNESCO — International cooperation bodies for AI governance
- Global Dialogue on AI Governance — Co-facilitated by Estonia and El Salvador; multistakeholder platform established by UN General Assembly
Technical Concepts & Resources
Frameworks & Policies:
- Freedom Online Coalition DPI Principles (December 2025) — Rights-respecting principles emphasizing transparency, explainability, accountability, privacy, inclusive design, and civil society participation
- FOC Joint Statement on Responsible Government Practices for AI Technologies (2024) — Commitments on impact assessments, procurement due process, and safeguards for government AI use
- Council of Europe's Framework Convention on AI (Vilnius Convention) — First legally binding international treaty on AI; sets baseline standards for human rights, democracy, rule of law
- Human Rights Impact Assessments — Mandatory before public sector AI procurement/deployment
- Algorithmic Impact Assessments — Pre-deployment reviews to identify potential harms
Technical Practices & Tools:
- Algorithmic Transparency/Explainability — Systems must provide understandable reasoning to users and oversight bodies; examples include disclosure of model limitations (e.g., hallucinations), source citation in responses
- Auditability — Systems must allow independent review and verification of decision-making processes
- Transparency Registers — E.g., Netherlands' national algorithm register (1,350+ AI systems/algorithms from 320+ public authorities; publicly accessible)
- Participatory Design & Red-Teaming — Pre-deployment testing involving affected communities, civil society, and researchers to identify edge cases and boundary conditions
- Grievance Redress Mechanisms — Formal processes for citizens to contest or appeal algorithmic decisions
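A transparency register like the Dutch one is, at its core, a public catalogue of structured records about deployed systems. The schema below is a hypothetical illustration of what one entry might contain; the actual Dutch register defines its own fields, and everything here (field names, example values) is assumed for exposition.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RegisterEntry:
    """One public record in a hypothetical algorithm transparency register."""
    system_name: str
    public_authority: str
    purpose: str              # plain-language description of what the system does
    legal_basis: str          # statute or regulation authorising its use
    human_oversight: str      # how and where a human reviews outcomes
    contact_for_redress: str  # where affected people can contest a decision

# Illustrative entry; all values are invented.
entry = RegisterEntry(
    system_name="Benefit eligibility pre-screen",
    public_authority="Municipal welfare office",
    purpose="Flags applications for manual review; makes no final decisions",
    legal_basis="National social assistance act (illustrative)",
    human_oversight="A caseworker reviews every flagged application",
    contact_for_redress="redress@example.gov",
)

# Registers are typically published as open, machine-readable data, e.g. JSON.
record = json.dumps(asdict(entry), indent=2)
print(record)
```

Publishing entries in a machine-readable form is what makes a register auditable at scale: civil society and oversight bodies can query all records rather than reading pages one by one.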
Estonian DPI Systems Referenced:
- E-health platform (eax)
- E-administration systems
- Qualified e-signature (used by 90%+ of adults)
- E-residency program
- Interoperable state registers (population, property, business, address)
- AI literacy in educational systems
Lithuanian DPI Systems Referenced:
- National cyber security center (threat monitoring, incident response)
- Hybrid state cloud (enhanced security and interoperability)
- AI center (national hub: compute capacity, public data, interdisciplinary expertise)
Dutch DPI Systems Referenced:
- National algorithm register (transparency tool)
- COVID response app (multistakeholder design)
- Welfare audit system (cautionary example of algorithmic harms without adequate safeguards)
Google Products & Practices:
- Gemini — AI assistant with transparency features: disclaimers on potential errors/hallucinations; source citations for factual claims
- Responsible AI Report — Annual public disclosure of governance and responsible AI practices
Global South Context Challenges:
- Limited state capacity
- Reliance on external vendors
- Resource constraints in enforcement
- Historical discrimination patterns
- Power asymmetries and regulatory capture
Document Type: Conference Panel Discussion & Opening Remarks
Event: India AI Impact Summit (location: India)
Primary Focus: Governing Safe and Responsible AI in Digital Public Infrastructure
Key Organizations: Freedom Online Coalition, Council of Europe, various national governments
Timeframe: Discussions reference 2024–2026 initiatives and ongoing developments
