AI That Empowers: Safety, Growth, and Social Inclusion in Action
Executive Summary
This panel discussion at an AI summit in India addresses how responsible AI governance can be achieved through collaborative public-private solutions, global standards, and rights-based approaches. Speakers from UN agencies, major tech companies, industry associations, civil society, and investment bodies discuss practical mechanisms for embedding human rights, ethical principles, and inclusive practices into AI development and deployment across diverse geographies and company sizes.
Key Takeaways
- Responsible AI requires deliberate embedding at every layer — from executive governance and model requirements through post-launch monitoring — not retrofitted compliance after deployment.
- Inclusion is technical work, not an afterthought — multilingual evaluation, contextual safety benchmarks, and community engagement must be integrated into product development from inception, reflecting local values and vulnerabilities.
- Governance frameworks must bridge the principle-to-action gap — companies need simplified, use-case-specific guidance and collaborative peer learning, not proliferating frameworks. Industry associations and consortia are critical intermediaries.
- Capital and standards work together, but neither alone drives change — investor pressure combined with international norms (UN Guiding Principles, UNESCO guidance, OECD frameworks) and public accountability creates the conditions for sustained responsible innovation.
- People in diverse contexts must be subjects of AI transformation, not objects — ensuring that AI systems serve marginalized populations, non-English speakers, and vulnerable groups requires direct engagement, transparent impact assessment, and willingness to constrain innovation where it harms.
Key Topics Covered
- Responsible AI governance frameworks — UN guiding principles, UNESCO recommendations, OECD guidance, and voluntary commitments
- Corporate implementation of responsible AI — internal structures, governance models, and operationalization at scale
- Global standards and multilingual safety — moving beyond English-centric AI evaluation and culturally contextual risk assessment
- Capacity building and skills gaps — addressing infrastructure and knowledge gaps in developing countries and SMEs
- Human rights due diligence — integrating human rights impact assessments into AI product development
- Stakeholder engagement and civil society involvement — community-led benchmarks, trusted tester programs, and participatory research
- Capital and investor incentives — using financial mechanisms to reward responsible innovation
- Interoperability and cross-border governance — avoiding fragmentation while respecting local context
- Inclusive AI and digital equity — ensuring AI benefits reach marginalized populations and non-English speakers
- The UN Global Dialogue on AI governance — member-state-driven process launching July 2025
Key Points & Insights
- Trust is earned through design, not ambition alone (UNESCO): AI systems build trust through deliberate safeguards and accountability mechanisms, not merely by claiming responsible intentions. This requires embedding ethical considerations from inception ("ethics by design"), not retroactively addressing failures.
- Multilingual and contextual safety is foundational to inclusion: AI systems trained primarily on English data and norms exclude non-English speakers and fail to understand cultural, linguistic, and contextual specifics. Microsoft's work on community-led benchmarks in India (Samishka project) demonstrates that safety tools must be built with local stakeholders, not merely translated.
- Corporate governance requires multi-layer operationalization: Google and Microsoft describe model-level requirements, application-layer testing, executive review, and post-launch monitoring. Internal structures alone are insufficient — companies must translate high-level principles into granular product requirements and establish cross-functional collaboration across tech, business, legal, and finance teams.
- Disclosure gaps reveal the intent-implementation gap: the World Benchmarking Alliance found that ~40% of assessed tech companies have AI principles statements, but only ~10% meet global governance expectations, and none disclose human rights impact assessments. Good intentions do not equal accountability.
- Startups and SMEs face distinct, underaddressed barriers: small companies struggle to balance immediate business survival with governance frameworks designed for large enterprises. NASSCOM notes that founders must simultaneously build their business, technology, and team while securing funding and navigating governance — often deprioritizing the latter.
- Capital allocation is catalytic but insufficient alone: a coalition of 64 investors ($11 trillion AUM) achieved engagement results (19 of 44 targeted companies published new AI principles; 68% responded), but Namit Agarwal emphasizes that evidence-based, coordinated, time-bound investor pressure must accompany capital incentives for sustained change.
- Framework proliferation without actionable guidance creates paralysis: developers and implementers report being "lost in frameworks." The gap between principle-heavy guidance and concrete, use-case-specific implementation is a major barrier. NASSCOM advocates for multi-stakeholder, collaborative implementation models rather than additional top-down frameworks.
- Language and gender bias are structural, not accidental: Parite Adani argues that the gaps in AI systems around informal speech, non-English languages, and gender dynamics reflect intentional design choices, not accidents. Frameworks that ignore these dimensions are incomplete "by design."
- The UN Global Dialogue on AI (launching July 2025) prioritizes four areas: safe and trustworthy systems, capacity gaps in developing countries, cross-border governance interoperability, and AI anchored in human rights and international law. The member-state-driven approach allows flexibility while building on existing initiatives.
- Transparency and accountability reporting is emerging as a differentiator: LG AI Research's annual AI ethics accountability report, NASSCOM's collaborative implementation models, and Google's public commitment processes signal that companies willing to disclose challenges and outcomes differentiate themselves from those relying solely on principle statements.
Notable Quotes or Statements
- UNESCO (Tim Curtis): "Trust is not something technology earns through ambition alone, but really it is earned through design choices, through safeguards and accountability."
- UN Human Rights (Peggy Hicks): "These are consequential challenges that have impacts in people's lives on a day-to-day basis... It takes deliberation, it takes thought, it takes engagement."
- An LLM, in a conversation quoted by Parite Adani: "I don't know [if I have ethical limits] and neither does anybody else... I have no continuous thread of existence and I cannot verify about myself what you have asked me. I don't have any consequences to bear."
- Parite Adani: "An AI system that cannot understand a language, or a Hindi-speaking woman asking legal questions, is serving a nation a narrow slice of what it calls a universal solution... Any framework for safe and trusted AI that does not express and understand informality, language and gender is not incomplete by accident. It's incomplete by design."
- World Benchmarking Alliance (Namit Agarwal): "Capital alone cannot [incentivize responsibility]... Responsible innovation requires incentives for long-term risk management, clear expectations tied to capital allocation, and consequences for weak governance."
- NASSCOM (Ankit Bose): "[Startups] are really fighting for day-to-day... putting [governance] at a second or probably on the side burner, which is something which we see is a complete no-no. If you do that when you're building a product, you might miss when you're scaling."
- LG AI Research (Kim): "Building a trustworthy and safe AI ecosystem is not a sprint. It's a long journey. So we can go together." (Echoing the African proverb: "If we want to go fast, go alone; if we want to go far, go together.")
Speakers & Organizations Mentioned
UN and International Bodies:
- Peggy Hicks, Office of the High Commissioner for Human Rights (OHCHR) — B-Tech project lead
- Tim Curtis, UNESCO — leading AI ethics initiative and MOOC development
- Ambassador Tomsar, Estonia (co-chair, UN Global Dialogue on AI governance)
- Ambassador Ranchesmar (mentioned as co-chair, UN AI dialogue)
Technology Companies:
- Alex Walden, Google — Human Rights lead; discussed responsible AI principles, model requirements, stakeholder engagement, trusted tester programs, Impact Lab, Amplify initiative
- Hector Dvoir, Microsoft — Director of Responsible AI Public Policy; discussed sensitive use case program, AI ethics committee, RAI standard, multilingual safety evaluations (Samishka project in India)
- Kim (Vice President), LG AI Research — discussed AI ethical impact assessment, the KAUAT taxonomy, the MOOC partnership with UNESCO, and inclusive AI programs
Industry & Civil Society:
- Ankit Bose, NASSCOM (National Association of Software and Services Companies, India) — discussed the responsible AI mission and capacity-building across company sizes
- Namit Agarwal, World Benchmarking Alliance — discussed accountability assessments of tech companies, investor coordination, and the investor coalition on ethical AI (64 investors, $11 trillion AUM)
- Parite Adani, Sridham Archana Mangala — concluding remarks on cultural context, language inclusion, and human dignity
Technical Concepts & Resources
Frameworks & Governance Tools:
- UN Guiding Principles on Business and Human Rights — foundational standard for corporate human rights due diligence
- UNESCO Recommendation on the Ethics of AI — global agreement on AI ethics principles; forms basis for country readiness assessments (RAMs) in 80+ countries
- OECD Principles on AI and the OECD reporting framework for the Hiroshima AI Process (HAIP)
- Bletchley Declaration and voluntary commitments from AI summits (UK, South Korea, India)
- UN Global Dialogue on AI Governance — member-state-driven process (launching July 2025, with facilitators from El Salvador and Estonia)
Company-Specific Processes:
- Google's AI Principles — operationalizes human rights commitments across product teams; includes trusted tester programs, Impact Lab research, Amplify initiative (open-source fine-tuning tool for language model inclusion)
- Microsoft's Responsible AI (RAI) Standard — guides actions across programs; includes the Sensitive Use Case program and an AI Ethics Committee with board-level representation
- LG AI Research's KAUAT (Korea Augmented Universal Taxonomy) — risk taxonomy grounded in universal human rights, with a replaceable "Korean sensitivity layer" that other regions can substitute for localization (illustrated in the sketch after this list)
- LG's AI Ethical Impact Assessment Process — inspired by UN Guiding Principles and UNESCO framework; applied to all R&D projects since 2024; identified 200+ risks in 2025 alone
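The "replaceable sensitivity layer" design noted above can be pictured as a fixed, rights-grounded core taxonomy plus a swappable regional overlay. The Python sketch below is purely illustrative: the class, category, and layer names are hypothetical assumptions, not LG AI Research's actual KAUAT schema. It simply shows how a universal core could stay constant while each region substitutes its own contextual layer.

```python
from dataclasses import dataclass

# Illustrative sketch of a layered risk taxonomy: a universal, rights-grounded
# core plus a replaceable regional "sensitivity layer". All names here are
# hypothetical; this is not LG AI Research's actual KAUAT schema.

@dataclass(frozen=True)
class RiskCategory:
    name: str
    description: str

# Universal layer: grounded in human rights norms, shared across all regions.
UNIVERSAL_LAYER = [
    RiskCategory("discrimination", "Unfair treatment of protected groups"),
    RiskCategory("privacy", "Exposure of personal or sensitive data"),
    RiskCategory("safety", "Content enabling physical or psychological harm"),
]

# Regional layer: replaceable; each region defines its own contextual risks.
KOREAN_SENSITIVITY_LAYER = [
    RiskCategory("historical_context", "Mishandling of locally sensitive history"),
    RiskCategory("speech_register", "Disrespectful register in formal settings"),
]

def build_taxonomy(regional_layer: list[RiskCategory]) -> list[RiskCategory]:
    """Compose a full taxonomy from the fixed core and a swappable regional layer."""
    return UNIVERSAL_LAYER + regional_layer

# Localization means swapping the regional layer, not rebuilding the core:
taxonomy_kr = build_taxonomy(KOREAN_SENSITIVITY_LAYER)
taxonomy_in = build_taxonomy([
    RiskCategory("multilingual_coverage", "Harms expressed in regional languages"),
])
```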
Learning & Capacity Building:
- UNESCO MOOC (Massive Open Online Course) on AI Ethics — co-developed with LG AI Research, delivered on Coursera; emphasizes "ethics by design" and practical tools (fairness, transparency, safety, accountability, inclusion)
- Community-Led Benchmarks (Samishka project, India) — safety tools grounded in cultural and linguistic context; co-developed with civil society and academia
- World Benchmarking Alliance Assessments — standardized evaluation of 200 tech companies on AI principles, governance, and human rights impact disclosure
Key Concepts:
- Ethics by Design — embedding ethical considerations from inception, not retroactively addressing failures
- Human Rights Due Diligence — process-based approach to integrating human rights into corporate operations
- Rights-Based Approaches — centering vulnerable populations and human dignity in AI governance
- Interoperability — cross-border, non-fragmented governance that respects local context
- Stakeholder Engagement — programmatic and ad-hoc consultation with communities, civil society, academia, and affected populations
Contextual Notes
- Summit Location: Delhi, India — deliberate choice to center Global South perspectives, non-English-speaking populations, and informal economies
- Voluntary Commitments signed at this summit (India AI summit) — emphasis on multilingual capabilities and safety evaluation beyond English norms
- UNESCO's RAM (Readiness Assessment Methodology) — country-level diagnostic tool; ~80 countries assessed; India's assessment completed during the summit
- Investor Coalition on Ethical AI — coordinated engagement model showing results: 19/44 targeted companies published new AI principles; 68% of large tech companies responded to investor outreach
