Ethical AI as Digital Public Infrastructure
Executive Summary
This panel discussion explores how algorithmic exclusion and AI-driven discrimination threaten vulnerable populations in welfare, healthcare, education, finance, and governance systems—particularly in the Global South. The speakers argue that ethical AI principles remain unenforced and performative, and that meaningful progress requires moving from "feedback" to "feed-forward" design, genuine community participation in AI governance, and accountability mechanisms that prevent harm before deployment rather than remedying it afterward.
Key Takeaways
- Ethics without enforcement and community oversight is just theater. Non-binding principles and corporate self-regulation have failed; meaningful protection requires enforceable standards, multi-stakeholder governance tables, and mandatory impact assessments before deployment.
- Inclusion-by-design must start by mapping exclusion. Rather than assuming "digital inclusion" is always positive, systematically identify who is excluded, why, and at what cost to their lives. This reframes the conversation from growth metrics to justice outcomes.
- Communities must move from being data sources to co-designers. Civil society organizations, grassroots communities, and affected populations must be funded, coordinated, and seated at AI governance tables from the very beginning of system design, not consulted retroactively.
- Safety institutes and AI evaluation capacity must be localized and funded in the Global South. Evaluating and verifying AI system behavior remains an unsolved scientific problem. Establishing regional AI safety institutes (like India's nascent efforts) and building academic-government-civil society partnerships is urgent.
- Feed-forward beats feedback: prevent harm by design, not after deployment. Before launching any AI system into welfare, healthcare, or governance, conduct participatory processes to understand minimum acceptable risk, test with affected communities, and ensure the system cannot exclude vulnerable populations by design.
Key Topics Covered
- Algorithmic exclusion and invisible harms in AI-enabled public systems
- Digital public infrastructure (DPI) design and the risks of forced digital transitions
- Ethics-in-practice gap: Why ethical frameworks fail to protect communities
- Black-box AI systems and the limits of transparency in machine learning
- Data sovereignty and the difference between data extraction and data co-creation
- Gender and intersectional exclusion in AI systems (healthcare, hiring, biometric authentication)
- Community participation in AI governance: barriers and solutions
- Regulatory gaps in the Global South and enforcement mechanisms
- Responsible product development: language inclusivity, feedback loops, and civil society partnerships
- Feed-forward vs. feedback approaches: designing for inclusion from the outset
Key Points & Insights
- Exclusion is preemptive and invisible: Most algorithmic harms occur before they are tangible or measurable, making them difficult to contest. Citizens cannot see, debate, or challenge the systems that determine their access to services.
- DPI must be affordable, accessible, and available at the doorstep: True digital public infrastructure should function like physical infrastructure (buses, water, electricity), not require a 10,000-rupee device or a 5 km journey to access. Currently, digital systems are "inaccessible, not cheap, and especially unavailable to those who live far away."
- The "machine learning black box" problem is intensifying: Even the creators of deep neural networks (Yoshua Bengio, Geoffrey Hinton, Yann LeCun) cannot fully explain how their systems work. This creates civilizational risk when these systems are embedded in critical infrastructure and their behavior cannot be verified.
- Ethics frameworks are non-binding and unenforceable: Charters on AI ethics from companies, UNESCO, OECD, and others lack mandatory compliance mechanisms. Without enforcement, ethics become procedural compliance theater rather than protection.
- Power asymmetries determine whose data is used, who builds models, and who benefits: Companies and governments, not communities, decide how AI is deployed. This creates extractive rather than co-creative relationships with data providers.
- Gender and intersectional biases are systematized in AI systems: Medical AI is trained predominantly on male data; biometric authentication systems fail agricultural workers with worn fingerprints; informal-sector workers, invisible in datasets, are excluded from consideration in system design.
- Civil society organizations lack resources, coordination, and seats at the table: The organizations closest to impacted communities are underfunded, fragmented, short on technical AI knowledge, and rarely invited to AI governance discussions, creating a critical gap in accountability.
- Feedback loops alone are insufficient; systems must be designed with exclusion prevention as the primary outcome: "Feed-forward" design means asking before deployment, "Who cannot use this, and what is their life like?" rather than launching and then soliciting feedback on harms (see the sketch after this list).
- Context and data sovereignty are critical: Systems cannot claim neutrality without understanding local values, ethics, and contextual definitions of concepts like "poverty." Relevant, context-appropriate systems are more valuable than "neutral" ones.
- The commercialization motive creates extractive incentives: Companies profit from user data and broad distribution; this structural incentive conflicts with community welfare unless explicitly counterbalanced by governance, regulation, and accountability.
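To make the feed-forward principle concrete, here is a minimal Python sketch of a pre-deployment exclusion gate. Everything in it (the `ExclusionRisk` fields, the gate function, the example groups) is a hypothetical illustration of the idea discussed on the panel, not a tool any speaker described.

```python
from dataclasses import dataclass

@dataclass
class ExclusionRisk:
    group: str             # who the system may fail
    reason: str            # why it may fail them
    fallback_exists: bool  # is there a non-digital path to the same service?

def feed_forward_gate(risks: list[ExclusionRisk]) -> bool:
    """Block deployment unless every identified excluded group has a fallback.

    This inverts the usual feedback loop: exclusion is mapped and mitigated
    before launch instead of being collected as harm reports afterward.
    """
    unmitigated = [r for r in risks if not r.fallback_exists]
    for r in unmitigated:
        print(f"BLOCKED: {r.group} excluded ({r.reason}); no fallback path.")
    return not unmitigated

# Example risks drawn from cases discussed on the panel:
risks = [
    ExclusionRisk("agricultural workers with worn fingerprints",
                  "biometric match failure", fallback_exists=False),
    ExclusionRisk("households without smartphones",
                  "QR-code-only access", fallback_exists=True),
]
assert feed_forward_gate(risks) is False  # deployment must not proceed
```

The point of the sketch is the ordering: the exclusion map is an input to the launch decision, not an output of post-launch monitoring.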
Notable Quotes or Statements
Osama Manzar (Digital Empowerment Foundation):
"Digital is saying I want to become your infrastructure... but this digital infrastructure has to be available at the doorsteps... not 5 kilometers away, not at a cost, not in a language I don't know... Digital public infrastructure has to be affordable, available and in a format that people can consume."
"Data of the people or data for the people—what do you want? Every AI system is now extracting the data of the people rather than extracting the data that has to serve the people."
"If AI can help, fair enough. If AI cannot help, thank you very much. We don't need AI. It's very simple."
Nicholas Irobi (AI Safety Connect):
"Neither Joshua Bengio nor Geoffrey Hinton nor Yann LeCun can tell you today that they understand how the system operates... We are building a civilization on top of black boxes which are gaining in power and strength in influence in capital in a way that those who design them do not understand well."
"Either you're at the table or you're on the menu... Those who have access to super intelligent systems will get too much of the pie."
Paola Villarreal (Globethics):
"Ethics were not enforceable... all these charters on ethics, the ROM call, even technology companies have published charters on ethics, but all these are non-binding they're not mandatory, so what happens when this is not complied? In fact, nothing, unfortunately."
"When there is one ethical failure in DPI this becomes systemic."
Dr. Bhavani Rao:
"A system can never claim to be neutral... It requires 13 billion parameters to train large language models. That kind of data we do not even have for the kind of people who are who are left."
"Data sovereignty becomes very critical for us because our context is so different... the way we define poverty, the way we define a lot of these things that are intangible, these are all critical considerations that are very contextual to us."
"Human beings have to become ethical people... The source is human ethics, and that's where I would put my focus on."
Osama Manzar (closing remarks):
"Anything that we are developing first you calculate how many people are left behind for not using this and what is their life... DPI when we are talking about, we must talk that it is excluding or not excluding on the basis of that, if you make it a public infrastructure it will always be positive."
"Public infrastructure means things for the masses, not masses for some things."
Speakers & Organizations Mentioned
Panel Members:
- Osama Manzar — Founder and Director, Digital Empowerment Foundation (DEF)
- Nicholas Irobi — Co-founder, AI Safety Connect (Paris-based startup working on frontier AI safety evaluation)
- Paola Villarreal — AI Ethics Manager, Globethics; also contributed to "Pathways to Inclusion" report on civil society in AI governance
- Dr. Bhavani Rao — Professor, Amrita University; Co-chair, UNESCO Women for Ethical AI South Asia chapter
- Ram Papetla — Managing Director, Asia-Pacific Region Trust and Safety, Google India
- Jayesh Ranjan — Special Chief Secretary to Chief Minister's Office, Government of Telangana (absent due to a medical emergency on his flight; noted as instrumental in Telangana's IT infrastructure)
Organizing Organizations:
- Digital Empowerment Foundation (DEF) — Delhi-based nonprofit civil society organization; operates 2,400+ community information resource centers across India
- Globethics — Geneva-based ethics organization
- AI Safety Connect — Paris-based AI safety research organization
- UNESCO Women for Ethical AI South Asia chapter — Inclusion and equity partner
- Global Governance Networks — Partner organization
Partner & Referenced Organizations:
- Google India — Product development, language inclusivity initiatives (Gemini, NotebookLM, AI Overviews in Indian languages)
- Center for Responsible AI, IIT Madras — Feedback partner for algorithm transparency
- Vedwani AI — Grassroots feedback partner for education initiatives
- India AI Safety Institute — Nascent government initiative mentioned as needing strengthening
- Government of India, Department of Telecommunications — Partner on Samriddh Gram pilot (DPI-enabled village project in India)
- Government of Telangana — Referenced for IT infrastructure development
- Meta — Mentioned as organizing sessions on technology policy
- OECD — AI principles referenced
Case Studies & Geographic References:
- Peru (2019): Rural woman denied food subsidy when biometric authentication failed to recognize worn agricultural worker fingerprints—despite Peru having data protection law and OECD AI principles
- Netherlands: Childcare benefits scandal—algorithm flagged families (primarily migrant backgrounds) as fraud risks based on discriminatory assumptions in training data
- LATAM GPT — Language model developed in Latin America to preserve indigenous languages and cultural heritage
Technical Concepts & Resources
AI Safety & Evaluation:
- Safety institutes — Non-regulatory public organizations conducting AI behavioral evaluation; a movement that began at the 2023 Bletchley Park AI Safety Summit and is spreading globally
- Behavior elicitation technologies — Tools developed by AI Safety Connect to understand the behavioral dynamics of black-box systems (a minimal black-box probing sketch follows this list)
- Large language models (LLMs) — The panel cited ~13 billion parameters as the scale of LLM training; underrepresented populations lack data at anything near that scale and are effectively left out of the models
- Deep neural networks with trillions of parameters — Opacity problem; creators cannot fully explain system behavior
- Machine learning black box — Core challenge: systems lack the interpretability needed for deployment in critical infrastructure
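Because these systems are black boxes, evaluation has to work from the outside: send structured probes in and characterize what comes out. Below is a minimal, self-contained sketch of that idea; `query_model` and the probe prompts are invented placeholders, not AI Safety Connect's actual tooling.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Placeholder for the deployed system under test; we can only observe I/O.
    return "denied" if "informal sector" in prompt else "approved"

# Structured probes varying only the applicant profile:
PROBES = [
    "Assess benefit eligibility for a salaried urban applicant.",
    "Assess benefit eligibility for an informal sector worker.",
    "Assess benefit eligibility for an applicant with no fingerprint record.",
]

def elicit_behavior(probes: list[str]) -> Counter:
    """Tally outcomes across probe prompts without inspecting model internals."""
    return Counter(query_model(p) for p in probes)

print(elicit_behavior(PROBES))  # Counter({'approved': 2, 'denied': 1})
```

Real behavioral evaluation is far harder (the panel calls it an unsolved scientific problem), but the input/output framing is the core of any black-box audit.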
Algorithmic Tools & Products Mentioned:
- Gemini — Google's LLM available in multiple Indian languages
- NotebookLM — Google product for education
- AI Overviews — Google search feature in multiple Indian languages
- SynthID — Watermarking technology to identify AI-generated content; developed with input from the Indian media and journalism community
- ChatGPT — Referenced as example of systems becoming economically necessary despite governance gaps
- OpenAI — Clocking $50 billion in revenue (as of talk date)
- Anthropic — Clocking $15 billion in revenue (as of talk date)
Data & Infrastructure Concepts:
- Data sovereignty — Localized control over data; critical for Global South contexts with different values and definitions
- Biometric authentication systems — Risks of exclusion (e.g., worn fingerprints among agricultural workers); a graceful-fallback sketch follows this list
- QR codes — Example of infrastructure requiring device access, creating exclusion
- Digital public infrastructure (DPI) — Aspirational framework for equitable, accessible digital systems
- Samriddh Gram pilot — DEF/Government of India project modeling what a DPI-enabled village looks like
- Community information resource centers (CIRCs) — DEF operates 2,400+ across India
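The Peru case above suggests one concrete inclusion-by-design pattern: a biometric check that degrades to a human-assisted path instead of denying service outright. The sketch below is purely illustrative; the threshold and function names are assumptions, not any real biometric SDK.

```python
def authenticate(match_score: float, threshold: float = 0.85) -> str:
    """Return an access decision that never ends in outright denial.

    A low match score (common with worn fingerprints) routes the person to a
    human-assisted path rather than refusing the underlying entitlement.
    """
    if match_score >= threshold:
        return "grant"
    # Feed-forward principle: failure of the digital path must not mean
    # exclusion from the service itself (food subsidy, pension, ration).
    return "route_to_human_verification"

assert authenticate(0.92) == "grant"
assert authenticate(0.40) == "route_to_human_verification"
```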
Governance & Regulatory Frameworks:
- UNESCO Recommendation on the Ethics of AI — Non-binding framework (mentioned as lacking enforcement)
- OECD AI Principles — Referenced; also non-binding
- India's HGI (Harmonized Governance Initiative/framework) — Mentioned as positive recent development; needs strengthening
- Data protection laws — Exist in places like Peru but insufficient without enforcement and implementation
- Multi-stakeholder governance tables — Proposed mechanism to define minimum acceptable risk before deployment (a toy illustration follows this list)
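One way to read "minimum acceptable risk" is that the table's binding threshold is set by its most risk-averse member, typically the community that bears the harm directly. The toy sketch below illustrates that reading; all stakeholder names and numbers are invented.

```python
# Each stakeholder states the highest risk level they will tolerate (0-1).
stakeholder_tolerances = {
    "affected community": 0.10,  # lowest tolerance: bears the harm directly
    "civil society org": 0.15,
    "regulator": 0.20,
    "deploying agency": 0.40,
}

def minimum_acceptable_risk(tolerances: dict[str, float]) -> float:
    """The binding threshold is the strictest (lowest) tolerance at the table."""
    return min(tolerances.values())

def may_deploy(assessed_risk: float, tolerances: dict[str, float]) -> bool:
    return assessed_risk <= minimum_acceptable_risk(tolerances)

print(minimum_acceptable_risk(stakeholder_tolerances))                  # 0.1
print(may_deploy(assessed_risk=0.25, tolerances=stakeholder_tolerances))  # False
```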
Reports & Publications:
- "Pathways to Inclusion: Advancing the Role of Civil Society in AI Governance" — Paola Villarreal's document analyzing barriers to civil society participation (published; three main barriers identified: funding, coordination, technical knowledge)
- UNESCO Women for Ethical AI South Asia chapter report — Dr. Bhavani Rao's recent publication being launched at this event; focuses on gender exclusion in AI
- Data sovereignty white paper — Dr. Krishna Shri publishing on South Asian context (mentioned as forthcoming)
Academic & Research Institutions:
- IIT Madras (Chennai) — AI safety research partnerships and safety institute work
- Amrita University — Dr. Bhavani Rao's institutional affiliation
Historical/Conceptual References:
- Bletchley Park — Site of the 2023 AI Safety Summit that launched the modern safety institute movement
- Colonialism and feudalism — Historical context for understanding embedded biases in current systems (referenced by Dr. Rao)
Critical Gaps & Future Work Indicated
- Localizing AI evaluation science: India and Global South need resourced, multistakeholder safety institutes
- Civil society coordination: Fragmented CSOs need funding and platforms to collectively represent grassroots voices
- Enforcement mechanisms: Moving from non-binding ethics frameworks to mandatory, pre-deployment impact assessments
- Data sovereignty frameworks: Developing contextual, values-based approaches to data governance specific to South Asian contexts
- Feed-forward design methodologies: Systematizing exclusion mapping and participatory design before AI system launch
- Technical capacity building: Helping civil society and government understand AI system risks and governance options
