AI for ALL Challenge & Panel on Leveraging AI for Development in the Global South

Executive Summary

This transcript captures the India Impact AI Summit, a comprehensive event showcasing 20 AI teams competing in the "AI for ALL Challenge" alongside expert panel discussions on AI in financial services. The summit demonstrates how AI is being leveraged to address critical challenges in agriculture, healthcare, disaster resilience, education, climate adaptation, and financial inclusion across the Global South—with a particular emphasis on India's role as both an innovator and implementer of responsible, trustworthy AI systems.

Key Takeaways

  1. Trust is infrastructure, not a feature: Trustworthy AI for the Global South requires region-specific validation, language support, bias audits, and explainability—not port-and-deploy approaches from developed markets.

  2. Data access and transparency unlock inclusion: India's UPI data, satellite imagery, and account aggregator framework are underutilized assets; open APIs and regulatory sandboxes are critical policy levers for innovation.

  3. Humans remain accountable in agentic systems: Liability frameworks must clearly assign responsibility across the AI supply chain; explicit digital consent and human oversight in high-consequence decisions are non-negotiable before scale.

  4. Language and voice are equity multipliers: Edge-deployable Indian language models and voice-based interfaces can extend AI's reach to feature phones and low-literacy populations—a unique advantage in India not replicated elsewhere.

  5. Responsible governance accelerates adoption: Principles-based, technology-neutral regulation with human-in-the-loop requirements and audit clarity enables institutional confidence and innovation rather than constraining them.

Summit Talk Summary


Key Topics Covered

AI for Development Applications (Team Presentations)

  • Soil health & agriculture: Biomakers (microbiome-based soil intelligence)
  • Natural disaster resilience: Resilience 360 (climate risk management tool)
  • Energy optimization: Element Circle (AI-powered smart meter intelligence for demand response)
  • Medical diagnostics: Forest Health (retinopathy of prematurity detection via AI), Carb Inc. (antibiotic resistance mitigation through microbiological image interpretation)
  • Mental health: Wisa (neurosymbolic AI for mental health), InfyHealth (AI mental health companion)
  • Agricultural markets: Intellabs (AI-powered fruit quality grading and transparent pricing)
  • Governance & policy access: Sager (semantic AI for accessible rights through machine-readable government portals)
  • Climate adaptation: CoreStack (digital public infrastructure for water security and landscape stewardship)
  • Disability inclusion: Torched Electronics (Joti smart glass for visually impaired)
  • Clinical trials decentralization: Infuse Health (digital patient twins for remote trial participation)
  • Cybersecurity: SecureTech (AI-driven threat detection and response)

Financial Services AI Panel Discussion

  • AI use cases in banking and NBFCs (underwriting, fraud detection, customer service)
  • Liability frameworks and accountability in agentic AI
  • Trust, explainability, and bias mitigation in AI models
  • Global regulatory fragmentation vs. harmonization
  • Data governance and accessibility for AI innovation
  • Inclusion through language models and assistive technology

Key Points & Insights

1. AI as Inclusion Multiplier, Not Replacement

Multiple presenters emphasized that AI's highest value in developing economies lies in extending human capability rather than replacing workers. Examples: Bank of Baroda's fraud detection systems augment human analysts; Wisa augments human therapists; Forest Health's ROP detection guides—rather than replaces—clinician decision-making. Dr. Ravindran (IIT Madras) stated explicitly: "agents must act legally on behalf of humans" and require explicit consent frameworks.

2. Trustworthiness is Infrastructure, Not an Afterthought

Dr. Ravindran introduced the concept of a "Trusted AI Commons"—a repository of benchmarks, tools, and processes for building trustworthy AI systems tailored to Global South contexts. Trust in financial systems (and healthcare, governance) cannot rely on models validated only in Western contexts; systems must be tested for bias, explainability, and robustness in regional languages, demographic contexts, and operating constraints.

3. Data as the Foundational Bottleneck and Opportunity

  • Underutilized advantage: India's UPI transaction data, satellite imagery, mobile penetration, and account aggregator framework provide granular data for underwriting credit to previously data-poor populations, an advantage unavailable in most other regions (Dr. Banerjee, L&T Finance).
  • Clean data gap: Semantic AI's Sager team highlighted that government websites exist but are machine-unreadable; the problem is interpretation without context, not absence of data.
  • Access challenge: Regulatory sandboxes and labeled, anonymized datasets at sector level are needed for experimentation (Dr. Ravindran's recommendation).

4. Language & Voice as Critical Accessibility Vectors

  • Bank of Baroda deployed "Bob Sambad"—a communication AI model enabling customers to interact in any language, with employees in different languages receiving translated summaries.
  • Dr. Ravindran: Indian language models now run on edge devices (feature phones); entirely IVR-based tutoring for reading fluency is viable without displays.
  • Hilo's mental health companion operates in 93 languages; accessibility through voice critical for populations without smartphone literacy.

5. Regulatory Clarity as Enabler, Not Constraint

The RBI's seven-principle framework (adopted by Government of India for national AI policy) emphasizes innovation over restraint, not innovation despite constraint. Dr. Banerjee and Dr. Chan both noted that responsible governance with guardrails (human-in-the-loop, audit trails, explainability requirements) actually accelerates institutional adoption by reducing liability uncertainty. Fragmented global regulations (e.g., EU AI Act, US state-level approaches) create patchwork compliance burdens for global institutions like JP Morgan Chase.

6. Agentic AI Requires New Liability Frameworks

Terra Leons (JP Morgan Chase) emphasized a "shared accountability framework" spanning the entire AI supply chain—end users (banks), cloud vendors, model developers, and upstream suppliers must each be clearly assigned responsibility. Current liability frameworks are inadequate for agents; digital consent mechanisms and clear boundaries on agent authority must be established before deployment at scale.

7. Human-in-the-Loop is Non-Negotiable in High-Risk Contexts

  • Low-risk, low-ticket: End-to-end AI processing acceptable (e.g., routine transaction fraud detection).
  • High-ticket, high-consequence: Human must remain in control and approve final decisions (e.g., large loan approvals, mental health crisis interventions). Multiple systems showed this principle: Wisa has escalation pathways to human therapists; Forest Health's ROP guidance requires clinician confirmation; Intellabs' quality grading informs—not replaces—human pricing decisions.
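The risk-tiered routing described above can be sketched as a simple decision rule. This is an illustrative sketch only: the `LoanRequest` type, threshold values, and field names are assumptions for demonstration, not part of any system presented at the summit.

```python
from dataclasses import dataclass

@dataclass
class LoanRequest:
    amount: float            # ticket size (currency units)
    model_risk_score: float  # model's risk estimate, 0.0 (safe) to 1.0 (high risk)

# Assumed cutoffs; real institutions would calibrate these per product line.
HIGH_TICKET_THRESHOLD = 500_000
HIGH_RISK_THRESHOLD = 0.7

def route(request: LoanRequest) -> str:
    """Decide whether AI may act end-to-end or a human must approve."""
    if (request.amount >= HIGH_TICKET_THRESHOLD
            or request.model_risk_score >= HIGH_RISK_THRESHOLD):
        return "escalate_to_human"  # human stays in control of the final decision
    return "auto_process"           # low-risk, low-ticket: end-to-end AI acceptable

print(route(LoanRequest(amount=10_000, model_risk_score=0.1)))     # auto_process
print(route(LoanRequest(amount=2_000_000, model_risk_score=0.2)))  # escalate_to_human
```

The same shape applies outside lending: Wisa's escalation pathways and Forest Health's clinician confirmation are instances of the `escalate_to_human` branch.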

8. Bias, Hallucination, and Deepfakes Remain Persistent Challenges

Dr. Ravindran: Deepfake detection in Indian languages is still unsolved; code-mixed languages (Hindi-English, Tamil-English) pose additional complexity. Dr. Banerjee (L&T): Addressing bias starts upstream—variable selection must not encode historical discrimination—and requires independent model risk management teams to audit for aging, mathematical errors, and unintended feature interactions. Hallucination in LLMs remains unresolved for high-stakes applications (e.g., Wisa's choice to revert high-risk cases to symbolic architectures).
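The upstream variable-selection audit Dr. Banerjee describes can be sketched as a pre-modeling check: flag any candidate feature whose distribution differs sharply across a protected group before it ever reaches the underwriting model. The threshold, toy data, and function names below are illustrative assumptions, not L&T's actual methodology.

```python
def mean(xs):
    return sum(xs) / len(xs)

def group_gap(values, groups):
    """Relative gap in a feature's mean between groups "A" and "B" (0 = identical)."""
    a = [v for v, g in zip(values, groups) if g == "A"]
    b = [v for v, g in zip(values, groups) if g == "B"]
    ma, mb = mean(a), mean(b)
    return abs(ma - mb) / max(abs(ma), abs(mb))

GAP_THRESHOLD = 0.2  # assumed trigger for independent model-risk review

# Toy candidate feature (e.g. a behavioral signal), tagged by demographic group.
feature = [0.9, 0.8, 0.85, 0.3, 0.35, 0.4]
groups = ["A", "A", "A", "B", "B", "B"]

gap = group_gap(feature, groups)
if gap > GAP_THRESHOLD:
    print(f"flag for model-risk review: gap={gap:.2f}")
```

A real audit team would use proper statistical tests and fairness metrics; the point is that the check runs at variable selection, upstream of training.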

9. Democratization of Expertise Requires Strategic Distribution Channels

  • Carb Inc.: AI-assisted Gram stain interpretation extends microbiologist expertise to labs in tier-2/3 cities lacking trained technicians; B2B subscription model (no upfront capital) suits resource-constrained healthcare facilities.
  • CoreStack: Digital public infrastructure for landscape stewardship + open APIs allows NGOs and community volunteers to build on top of shared geospatial data layers.
  • Torched Electronics: Distributed smart glass + offline capability brings assistive AI to visually impaired students regardless of connectivity.

10. Impact Measurement and ROI Clarity Are Underexplored

While many teams reported metrics (e.g., Intellabs: 70,000 metric tons sorted, 8–20% farmer income uplift; Forest Health: 40,000+ lives empowered; Wisa: 1 million users, 2 million conversations), rigorous, independent evaluation of causal impact remains sparse. Wisa noted 40% reduction in depression/anxiety symptoms (clinical trials), but most teams lacked peer-reviewed, externally validated outcome studies. This gap risks inflated claims and regulator skepticism.


Notable Quotes or Statements

On Trust & Agentic AI

"We need to start having frameworks that are established in terms of some kind of a non-viable contract that a client is going to have to enter into to allow an agent to act and do banking on their behalf." — Dr. Ravindran (IIT Madras/RBI Committee)

"Shared accountability framework [spanning] end users like banks and other model deployers being held responsible for their actions in deployment but in the context of a much longer AI supply chain which incorporates upstream vendors and suppliers." — Terra Leons (JP Morgan Chase)

On Inclusion & Language

"Better soil is better food and better life." — Biomakers

"Quality is not standardized means no fair price, no trust, no efficiency. Once you digitize quality, you digitize the entire market." — Intellabs

"We are learning a lot on the ground...AI for one [person] ended up being AI for all." — Wisa (Mental Health)

On Data as Foundation

"All AI ultimately stands on data...India has fantastic young population, bright and strong...we need better equitation at STEM levels." — Dr. Ravindran

"Our credit cost is coming down to almost one-third of what it used to be [via Cyclops platform using alternate data]." — Dr. Debbraja Banerjee (L&T Finance)

On Human-in-the-Loop

"AI can assist, AI can support, but the decision would be that of the human being." — Dr. Debbda Chan (Bank of Baroda)

"AI should not depress specialist expertise but extend their capabilities and push beyond their limits." — Masa Nakajima (Carb Inc.)

On Policy & Innovation

"Innovation over restraint...institutions should take the innovation route, experiment, build controls and compensatory checks within product approval processes, and then roll out." — RBI Moderator (Fintech Department)


Speakers & Organizations Mentioned

Government & Regulators

  • Reserve Bank of India (RBI) — Fintech Department; constituted AI committee with seven-principle governance framework
  • Government of India — Adopted RBI's seven sutras for national AI policy; Ministry of Power (Andhra Pradesh grid optimization); Ministry of Home Affairs/National Disaster Management Authority (recognized Resilience 360)
  • Ministry of Social Justice & Empowerment, Government of Maharashtra — Partnering with Wisa for mental health in schools

Academic Institutions

  • IIT Madras — Wadhwani School of Data Sciences (Dr. Ravindran); research park hosts Element Circle case study
  • IIT Delhi — Computer Science faculty (CoreStack founder)
  • IIT Bombay — Clinical psychologists/psychiatrists on Hilo team
  • Stanford University — Infuse Health's origins; collaborated on rare genetic disease clinical trials

Financial Institutions & fintech

  • Bank of Baroda — 50 AI use cases; Aditi humanoid virtual relationship manager; Bob Sambad multilingual communication model
  • L&T Finance — Cyclops credit underwriting platform using alternate data (video, geospatial, device metadata)
  • JP Morgan Chase — Global AI policy (Terra Leons); operates in 100+ countries
  • Andhra Pradesh Discoms — Grid demand response partnerships with Element Circle

AI/Tech Companies & Startups (Summit Competitors)

  1. Biomakers — Soil microbiome intelligence; 10 years R&D; global partner network
  2. Resilience 360 (Resilience AI) — Climate risk at 96% confidence; 84 villages in India; UN deployment
  3. Element Circle — Smart meter AI + thermal storage (300,000 L); 42 million in savings in a single year; 35% cost reduction
  4. Forest Health — ROP detection (retinopathy of prematurity); 16 years in preventable blindness; 60 countries
  5. Carb Inc. — Beta microbiological image interpretation (PMDA-approved Japan); 25+ hospitals deployed
  6. Intellabs — Fruit quality grading; 70,000 metric tons sorted; 8–20% farmer income uplift; patents (5 granted)
  7. Sager (VHA Global) — Semantic AI for government portal accessibility; Tamil Nadu government pilots
  8. CoreStack (Common Tech Foundation) — Digital public infrastructure for water/landscape stewardship; 800 villages; open APIs
  9. Infuse Health — Digital patient twins for clinical trials; 23 studies, 5 indications; $1.3M ARR trajectory
  10. SecureTech (Percept CEM) — Cybersecurity; 23 AI models; contextualizes data from third-party tools
  11. Torched Electronics (Joti) — Smart glass for visually impaired; OCR multilingual; 40,000+ lives; 30+ blind schools
  12. Wisa — Mental health companion; 1 million users; neurosymbolic AI; 93 languages; NHS 40% coverage
  13. Hilo AI (InfyHealth Tech) — Mental health companion; 2 million conversations; clinically validated at AIIMS Delhi
  14. Square Tech IT Solutions — Cybersecurity platform

Global Partners & Investors

  • Google Launchpad, NASSCOM AI for Good, Nvidia Inception (Intellabs)
  • Anthropic — Grid infrastructure statements
  • Mozilla Foundation — Privacy/safety evaluation (Wisa)
  • Wellabs, Atri (CoreStack's environment partners)
  • Mahindra (Resilience 360 deployment)
  • GAVI/WHO, Wellcome Trust — Health/vaccine partnerships

Technical Concepts & Resources

AI/ML Architectures & Approaches

  • Neurosymbolic AI — Combines neural (empathy, richness) + symbolic (protocol adherence) for mental health (Wisa)
  • Visual Language Models (VLMs) — Understand landscape photography changes, vegetation, degradation over time (Resilience 360)
  • Multi-level Deep Learning — Handles polymicrobial samples, variable staining, constrained networks (Carb Inc.'s Beta)
  • Transformer-based models — LLM explainability; fine-tuning on therapy datasets (Hilo); code-mixing challenges
  • Agentic AI / Generative AI (GenAI) — Risk prioritization, automated playbooks, orchestration across systems (SecureTech, Bank of Baroda's enterprise use cases)
  • Supervised/Unsupervised Learning — Feature engineering, contextual creation (SecureTech's 23 models)
  • LLMs & SLMs — Large and small language models; hallucination mitigation; privacy filtering at input/output
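The "privacy filtering at input/output" item above can be sketched as a redaction wrapper around a model call. Everything here is an illustrative assumption: the regex patterns, the `call_llm` stub, and the placeholder format are hypothetical; a production system would use a vetted PII-detection library with far broader coverage.

```python
import re

# Assumed PII patterns: email, 10-digit mobile number, 12-digit ID in 4-4-4 format.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{10}\b"),
    "ID": re.compile(r"\b\d{4}\s\d{4}\s\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def call_llm(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in for a real model call

def safe_query(user_input: str) -> str:
    response = call_llm(redact(user_input))  # filter on the way in
    return redact(response)                  # and again on the way out

print(safe_query("My number is 9876543210, mail me at a@b.com"))
# echo: My number is [PHONE], mail me at [EMAIL]
```

Filtering on both sides matters: the input pass keeps PII out of the model and its logs, and the output pass catches anything the model regenerates or memorized.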

Data & Modeling Techniques

  • Account Aggregator Framework — API-enabled secure banking data sharing (India; enables credit underwriting per L&T)
  • Geospatial Modeling & Remote Sensing — Heat gain analysis, satellite data for climate/agriculture (Element Circle, CoreStack, Resilience 360)
  • Optical Character Recognition (OCR) — Multilingual, literacy-independent (Torched Electronics' Joti)
  • Wet Bulb Temperature, Rainfall Clustering — Climatic science for disaster modeling (Resilience 360)
  • Soil Microbiome DNA Data — Functional/ecological predictions; microbial metabolic pathways (Biomakers)
  • Computer Vision / Object Detection — Fruit defects, size, color; 96% confidence in damage prediction (Intellabs)
  • Alternate Credit Data — Transaction patterns, device metadata, geospatial indicators, household video, behavioral signals (L&T Finance's Cyclops)

Governance & Safety Frameworks

  • RBI's Seven Principles (adopted by GoI):

    1. Innovation over restraint
    2. Trust as foundation
    3. Data minimization & privacy
    4. Explainability & interpretability
    5. Human oversight
    6. Risk-based, principles-based regulation (not technology-specific)
    7. Inclusive access
  • Responsible AI Benchmarks — Trusted AI Commons (under development); benchmarking, bias audits, regional language testing

  • Human-in-the-Loop Architecture — Triage mechanisms, escalation pathways, clinician confirmation