How nonprofits are using AI-based innovations to scale their impact

Executive Summary

This panel discussion examines a four-month AI cohort program launched by Project Tech for Dev and the Agency Fund, which brought together seven nonprofits to build AI-powered solutions addressing education and health challenges in India. The program emphasized responsible AI design from inception, mentorship support, knowledge partnerships, and cross-organizational collaboration—yielding practical insights that challenge nonprofits to solve genuine pain points rather than chase technology trends.

Key Takeaways

  1. Build from Pain, Not Hype: Start with documented operational inefficiencies—grant writing, code generation, teacher feedback bottlenecks—rather than treating AI as inherently transformative. Only pursue AI if it demonstrably solves a real problem faster or better than alternatives.

  2. Design Responsibly From Day One: Embed responsible AI principles (bias mitigation, harm prevention, output validation) into prototypes, not post-mortems. Use existing guardrails, audit frameworks, and safety plugins; don't defer ethics to later stages.

  3. Pool Resources, Don't Duplicate: Leverage cohort models, knowledge partnerships, and open-source ecosystems to access specialist expertise (engineers, product managers, behavior scientists) without hiring full-time staff. Seek existing solutions before building.

  4. Measure in Layers: Don't skip straight to impact evaluation. Validate AI system reliability → product usability → user behavior change → outcome impact, in sequence. Each layer informs the next and manages risk.

  5. Create Friction for Collaboration: Deliberately convene organizations working on parallel problems. Shared frameworks, cross-organizational workshops, and ecosystem brokers (like Project Tech for Dev) surface opportunities to standardize, integrate, or jointly develop solutions—multiplying impact while conserving resources.

Key Topics Covered

  • Cohort-based learning models for nonprofit AI adoption
  • Responsible AI and AI safety integration into early-stage product development
  • Education technology use cases (chatbots for teacher support, adaptive learning, student feedback systems)
  • Product evaluation frameworks with four distinct assessment levels
  • Barriers to nonprofit AI adoption (lack of engineering resources, funding constraints, leadership buy-in)
  • Cross-organizational collaboration and knowledge-sharing mechanisms
  • Behavior change measurement in social sector AI applications
  • Practical deployment challenges (hallucinations, user onboarding, guardrails)
  • Responsible AI principles (reducing bias, preventing harmful outputs, ethical deployment)
  • Open-source integration versus custom development trade-offs

Key Points & Insights

  1. Problem-First, Not Technology-First: Multiple panelists emphasized that nonprofits should identify existing pain points first, then evaluate whether AI is the appropriate solution—not reverse-engineer use cases to justify AI adoption. As one speaker put it: "Don't build what is sexy, build what is needed."

  2. Cohort Model Accelerates Learning: The peer-learning structure allowed organizations working on similar problems to discover overlaps (e.g., two health nonprofits unknowingly building parallel pregnancy risk prediction models), avoiding duplicated effort and enabling collaborative solutions.

  3. Responsible AI Must Be Baked In Early: Integrating responsible AI and AI safety frameworks from project inception—rather than as post-hoc considerations—reduced downstream complications. Knowledge partners (Digital Futures Lab, Tattle) provided guardrails plugins and slur-detection lists that plugged directly into products.

  4. Four-Level Evaluation Framework Mirrors Product Maturity:

    • Level 1: AI model/system evaluation (reliability, safety, hallucination reduction)
    • Level 2: Product evaluation (user activation, engagement, retention rates)
    • Level 3: User evaluation (behavior/belief change via survey data)
    • Level 4: Impact evaluation (health/education/livelihood outcomes at scale)

  5. Engineering Resource Scarcity is a Real Barrier: Nonprofits lack dedicated AI engineers. The cohort model pooled technical staff (including product managers) and mentorship, allowing organizations to access fractional expertise rather than hire full-time specialists prematurely.

  6. User Behavior Often Defies Designer Assumptions: Simple Education Foundation discovered teachers ignored onboarding instructions (e.g., starting with "hi" instead of jumping straight to problems), requiring product redesigns and guardrails to handle unexpected interaction patterns.

  7. LLM Hallucinations Present Discipline-Specific Challenges: Avanti Fellows encountered hallucinations with reversed numerical comparisons ("decreased" instead of "increased")—semantically subtle but practically dangerous for teacher-student mentorship conversations. Prompt engineering and model fine-tuning remain exploratory.

  8. Existing Solutions Often Suffice: Organizations should evaluate open-source platforms (e.g., Superset for dashboards, open-source LLM integrations) and off-the-shelf LLMs (Claude, Gemini) before building custom tools. Integrating existing solutions reduces time-to-value.

  9. Mentorship + Knowledge Partners > Isolated Development: Pairing each nonprofit with dedicated mentors and embedding expertise from responsible AI specialists created rapid iteration cycles and prevented analysis paralysis around competing problem dimensions.

  10. Sector-Level Collaboration Prevents Fragmentation: Civil society organizations risk deploying multiple solutions to the same population (e.g., five different nonprofits each building an app for ASHA workers). Ecosystem players like Project Tech for Dev facilitate discovery and joint development to reduce user cognitive load.


Notable Quotes or Statements

  • Temina Madon (Agency Fund): "The technology is actually easy…what is difficult is to fit the technology to the pain points we all experience in life and build a product that achieves social impact."

  • Steven Sutting (Quest Alliance): "When you start thinking like that [emulating human behavior], it stops becoming a software problem. It started becoming a behavior science problem."

  • Min Roy (Simple Education Foundation): "Don't build what is sexy, build what is needed."

  • Priam Sukumar (Avanti Fellows): "Stop looking at use cases for AI but look at like pain points and troubles we already have and see if AI is a good fit there."

  • Erica Arya (Project Tech for Dev): "Even as a tech organization who's building platforms we're not building things from scratch. We are integrating with other tools which are meeting the needs."

  • Steven Sutting (Quest Alliance): "The dimension of problems can be fairly large…analysis paralysis can happen…but having frameworks helps you chunk things into where is this problem best suited."


Speakers & Organizations Mentioned

Panelists:

  • Manohar (Sri Kantth) – Partner & CTO, Sata Consulting (moderator)
  • Erica Arya – CEO, Project Tech for Dev
  • Temina Madon – Co-founder, Agency Fund
  • Min Roy – Co-founder & CEO, Simple Education Foundation
  • Steven Sutting – Director of Technology & Product, Quest Alliance
  • Priam Sukumar – Technology & Research Lead, Avanti Fellows

Organizations & Initiatives:

  • Project Tech for Dev – Develops open-source tech platforms and provides advisory to 200+ nonprofits
  • Agency Fund – Funds nonprofits integrating AI for global development (runs year-long accelerator)
  • Simple Education Foundation – Builds AI-powered teacher support tools (WhatsApp-based chatbot, "Simple Teacher Buddy")
  • Quest Alliance – AI-powered digital learning platforms for youth (grades 8–12, TVET learners)
  • Avanti Fellows – Uses AI to generate student mentorship scripts; reaches 200,000 online learners
  • Dasra – Social sector organization running cohort-based programs (mentioned as collaborator)
  • Digital Futures Lab – Knowledge partner providing responsible AI integration frameworks
  • Tattle – Knowledge partner specializing in AI safety (slur lists, guardrails plugins)

Funding & Community:

  • Y Combinator (referenced as cohort model precedent for startups)
  • South Park Commons – Venture capital community and tech ecosystem player

Technical Concepts & Resources

AI/ML Models & Platforms:

  • Large Language Models (LLMs): OpenAI (integrated via the Glific chatbot platform), Claude, Gemini
  • Chatbots & Conversational AI: WhatsApp-based implementations, guardrailed systems
  • Retrieval-Augmented Generation (RAG): Mentioned as an LLM architecture pattern for knowledge base integration
  • Hallucinations: Specific issue with numerical reversals in teacher-student mentorship scripts; requires prompt engineering and fine-tuning
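
The numerical-reversal hallucination above lends itself to an automated output check. A minimal sketch, assuming the pipeline knows the underlying scores at generation time; the function and word lists are illustrative, not Avanti Fellows' actual implementation:

```python
# Direction words a mentorship script might use to describe a score change.
UP_WORDS = {"increased", "improved", "rose"}
DOWN_WORDS = {"decreased", "declined", "dropped"}

def direction_is_consistent(prev_score: float, new_score: float, sentence: str) -> bool:
    """Check that the sentence's direction word matches the actual change,
    guarding against the "reversed comparison" hallucination: the model
    writing "decreased" when the score actually increased."""
    text = sentence.lower()
    says_up = any(w in text for w in UP_WORDS)
    says_down = any(w in text for w in DOWN_WORDS)
    if says_up and says_down:
        return False  # ambiguous output; flag for human review
    delta = new_score - prev_score
    if delta > 0:
        return says_up
    if delta < 0:
        return says_down
    return not (says_up or says_down)  # a no-change score should claim no change

# A failing output is caught before it reaches a teacher or student:
direction_is_consistent(62, 78, "Your score decreased since last month.")  # False
```

Checks of this kind sit at Level 1 of the evaluation framework (system reliability) and can run on every generated script rather than relying on spot checks.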

Tools & Frameworks:

  • Superset – Open-source data visualization/dashboard tool (integrated by Project Tech for Dev)
  • Guardrails plugins – Safety modules such as Meta's Llama Guard, along with tooling from specialized AI safety vendors
  • Slur-detection lists – Crowdsourced word filters to prevent harmful outputs
  • Glific Chatbot – Platform used for LLM integration
  • Golden Dataset – Curated, high-quality data used to validate and align AI outputs with organizational intent
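
As a sketch of how a crowdsourced slur list plugs in as an output guardrail: the filter screens every model reply before it is sent, with a safe fallback when it trips. The blocklist entries below are placeholders, not the partners' actual lists:

```python
import re

# Placeholder entries standing in for a real crowdsourced slur list.
BLOCKLIST = {"badword1", "badword2"}

def passes_slur_filter(text: str) -> bool:
    """Return True if no blocklisted term appears as a whole word.

    Whole-word matching avoids false positives on substrings,
    so a blocklisted "ass" would not flag "class".
    """
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return BLOCKLIST.isdisjoint(tokens)

def guarded_reply(model_output: str) -> str:
    """Send the model's reply only if it passes the filter."""
    if passes_slur_filter(model_output):
        return model_output
    return "Sorry, I can't send that response. Let's try rephrasing."
```

Real deployments layer word lists with semantic classifiers, since lists alone miss misspellings and coded language.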

Evaluation & Measurement:

  • Four-Level Evaluation Framework (Agency Fund):
    • Model/system evaluation (safety, reliability)
    • Product evaluation (activation, engagement, retention)
    • User evaluation (behavior/belief change via surveys)
    • Impact evaluation (health/education/livelihood outcomes)
  • Monitoring & Evaluation (M&E) Systems: Standard practice in nonprofit deployments; less common in the for-profit sector
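
The product-level metrics (activation, engagement, retention) can be computed from an ordinary event log. A toy sketch with deliberately simplified definitions; the event names and the 7-day retention window are illustrative:

```python
from datetime import date

# Toy event log: (user_id, event_date, event_type).
events = [
    ("u1", date(2024, 1, 1), "signup"),
    ("u1", date(2024, 1, 1), "message_sent"),
    ("u1", date(2024, 1, 9), "message_sent"),
    ("u2", date(2024, 1, 2), "signup"),
    ("u3", date(2024, 1, 3), "signup"),
    ("u3", date(2024, 1, 3), "message_sent"),
]

signups = {u for u, _, e in events if e == "signup"}
active = {u for u, _, e in events if e == "message_sent"}

# Activation: share of signed-up users who performed the core action at all.
activation_rate = len(signups & active) / len(signups)

# Retention (7-day, simplified): users still performing the core action
# a week or more after they were first seen.
first_seen = {}
for u, d, _ in events:
    first_seen[u] = min(first_seen.get(u, d), d)
retained = {u for u, d, e in events
            if e == "message_sent" and (d - first_seen[u]).days >= 7}
retention_rate = len(retained) / len(signups)
```

In this toy log, 2 of 3 users activate and 1 of 3 is retained; real M&E systems add cohorting and engagement depth, but the layering logic is the same.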

Methodologies:

  • Prompt engineering – Iterative refinement of text inputs to improve LLM outputs
  • Behavior science integration – Embedded from early stages of product design
  • Responsible AI principles: Bias mitigation, harm prevention, output validation, ethical deployment
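
Prompt engineering becomes less exploratory when each candidate prompt is scored against a golden dataset. In this sketch `call_model` is a local stub standing in for any LLM API, so no real provider interface is implied:

```python
# Golden dataset: curated inputs paired with the outputs the team expects.
golden_dataset = [
    {"input": "Score went from 40 to 55", "expected": "increased"},
    {"input": "Score went from 70 to 50", "expected": "decreased"},
]

def call_model(prompt: str, item_input: str) -> str:
    # Stub: a real implementation would send prompt + input to an LLM.
    a, b = [int(tok) for tok in item_input.split() if tok.isdigit()]
    return "increased" if b > a else "decreased"

def score_prompt(prompt: str) -> float:
    """Fraction of golden-dataset items the prompted model gets right."""
    hits = sum(call_model(prompt, item["input"]) == item["expected"]
               for item in golden_dataset)
    return hits / len(golden_dataset)

candidates = [
    "Say whether the score increased or decreased.",
    "Answer with exactly one word: increased or decreased.",
]
best_prompt = max(candidates, key=score_prompt)
```

Keeping a score per prompt turns "the new prompt feels better" into a number that can gate releases.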

Resources & Documentation:

  • Agency Fund evaluation framework (search term: "Agency Fund evaluation framework")
  • Blogs and published case studies from Project Tech for Dev and participating nonprofits

Document prepared from conference talk transcript. Date of talk: Not explicitly stated in transcript. Venue: AI Summit (specific location not identified in transcript).