Global Cooperation for Ethical and Sustainable AI in Healthcare

Executive Summary

This panel discussion explored practical frameworks for building equitable, inclusive AI systems in healthcare through participatory co-design approaches. Rather than theoretical ethics discussions, panelists focused on concrete implementation strategies, emphasizing that successful healthcare AI requires centering patients and providers, building diverse teams, and harmonizing global regulatory approaches while respecting local contexts.

Key Takeaways

  1. Shift from principle to practice: Move beyond high-minded ethics statements to concrete implementation checklists, case studies, and playbooks. Practitioners need answers to "What do I do on Monday morning?"

  2. Center users, not datasets: Start with lived experiences and actual needs, not with available data. Too often, data availability drives research direction even when it doesn't address the most important questions.

  3. Global collaboration requires common language without identical rules: Align on taxonomy, risk frameworks, and evaluation standards across borders. Accept local variation in implementation. Avoid regulatory fragmentation that locks out lower-income countries.

  4. Diverse, empowered teams with lived experience are non-negotiable: Team composition matters; who holds power matters more. Marginalized communities' expertise must shape decisions, not just participate in them.

  5. Make adoption inevitable through usability and value: Don't ask providers to adopt AI out of duty. Demonstrate clear wins (reduced infant/maternal mortality, less cognitive load). Respect existing workflows and hierarchies. Open-source and collaborative models accelerate global reach and reduce reinvention.

Key Topics Covered

  • Participatory co-design and inclusive AI development – Moving beyond theory to practical implementation
  • The "Eight Tenets" framework – A co-designed, iterative approach to responsible AI development across the AI lifecycle
  • Addressing healthcare access barriers – Language, disability, gender, digital literacy, location, socioeconomic status
  • Team composition and epistemic diversity – Why diverse, empowered teams catch bias and build better systems
  • Global regulatory alignment vs. fragmentation – Harmonizing policies without stifling innovation across borders
  • Real-world case studies – Examples from India, Australia, UK, and Microsoft's global practice
  • AI assurance frameworks – UK's approach to evaluating, measuring, and communicating AI risk mitigation
  • Federated learning and open-source models – Enabling global collaboration without violating data localization laws
  • Provider adoption challenges – Overcoming clinician resistance by making AI tools indispensable, not threatening
  • The "Curbcut Effect" – Centering disabled populations leads to innovation that benefits everyone

Key Points & Insights

  1. Don't default to AI: Often the best solutions are non-AI or hybrid. Begin with the problem, not the technology. Ask: "What is missing? What do patients struggle with today?" before deciding if AI is the answer.

  2. The Eight Tenets framework – Developed through a two-year co-design study with experts from Australia and India – provides iterative, flexible guidance applicable across the entire AI lifecycle (design, deployment, monitoring). Not a checklist; a set of principles to revisit continuously.

  3. People-first approach requires embedding lived experience: Build with patients, not for them. Treat end-users as experts. Include people with disabilities, different ages, genders, disease states, and cultural contexts in design from the start. Use tight feedback loops and explicitly show communities how their feedback shaped the final system.

  4. Diverse teams catch bias; epistemic diversity matters more than demographics: Homogeneous teams build homogeneous systems. Critical is who holds power within teams, whose lived experience is valued, and reducing tokenization. Interdisciplinary teams (legal, technical, philosophy, ethics) excel at translating across incentive structures.

  5. Global alignment without harmonization: AI is cross-border by default, but countries have wildly different regulatory frameworks (GDPR, DPDP, sectoral vs. horizontal approaches). Solution: align on shared taxonomy, risk definitions, benchmarking standards, and evaluation methods—not impose one-size-fits-all rules. Take the "harshest case scenario" from multiple jurisdictions to ensure compliance.
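The "harshest case scenario" rule described above can be sketched as a simple policy merge: when one deployment must satisfy several jurisdictions, keep the most restrictive value of each requirement. A minimal illustration, assuming made-up rule names and numbers (these are not actual GDPR or DPDP provisions):

```python
# Hedged sketch of "harshest case scenario" compliance: merge per-
# jurisdiction policies by keeping the most restrictive requirement.
# Rule names and values are illustrative, not real legal provisions.
def strictest_policy(*policies):
    """Merge jurisdiction policies, keeping the most restrictive value."""
    merged = {}
    for policy in policies:
        for rule, value in policy.items():
            if rule not in merged:
                merged[rule] = value
            elif isinstance(value, bool):
                merged[rule] = merged[rule] or value   # any "required" wins
            else:
                merged[rule] = min(merged[rule], value)  # shorter limit wins

    return merged

eu = {"retention_days": 180, "explicit_consent": True, "breach_notice_hours": 72}
india = {"retention_days": 365, "explicit_consent": True, "breach_notice_hours": 6}
combined = strictest_policy(eu, india)
```

Here the merged policy takes the EU's shorter retention window but India's tighter breach-notification deadline, so one deployment satisfies both regimes.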

  6. Federated learning enables global collaboration: Share models, not data. Models can travel globally, train on local data, and become specialized to local contexts while maintaining central oversight and privacy compliance—addresses data localization concerns.
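The "share models, not data" pattern can be sketched as federated averaging (FedAvg): each site trains locally and only weights travel back to the server. A minimal toy version with two simulated hospital sites and a linear model (the sites, model, and hyperparameters are illustrative, not anything discussed on the panel):

```python
# Minimal federated averaging sketch: patient data never leaves a site;
# only model weights are shared and averaged centrally.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's local training: gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Each site trains on its own data; the server averages the results,
    weighted by local dataset size."""
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (40, 60):  # two hospitals with different amounts of local data
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    sites.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, sites)
```

In this toy run the averaged model recovers the underlying relationship even though neither site's raw data ever left that site, which is the property that makes the approach compatible with data localization laws.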

  7. Open-source publication has outsized impact: Publishing research—even when minimally cited—can catalyze global deployment. Example: A smartphone-based corneal topographer published by Microsoft India researchers led to adoption by hospitals globally and a regulated medical device (InstaKC) launched in India and the US.

  8. AI assurance is the practical bridge between principle and implementation: The UK's framework defines assurance as how we evaluate, measure, and communicate AI risk mitigation. The AI assurance ecosystem employs 12,000 people and is worth £1.6B (projected to grow sixfold in a decade). It includes system model cards, auditing, and stress-testing before deployment.
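A "system model card" can be pictured as a nutrition-label-style record that makes per-population performance gaps visible before deployment. A minimal sketch, assuming illustrative field names and numbers (this is not the UK framework's actual schema):

```python
# Hedged sketch of a nutrition-label-style model card. All fields and
# figures below are hypothetical illustrations, not a real schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    risk_mitigations: list
    # Performance reported per population, so gaps surface pre-deployment
    performance_by_group: dict = field(default_factory=dict)

    def worst_group(self):
        """Flag the population the system serves least well."""
        return min(self.performance_by_group, key=self.performance_by_group.get)

card = ModelCard(
    name="retinal-screening-v1",
    intended_use="Diabetic retinopathy triage, not diagnosis",
    risk_mitigations=["pre-deployment stress-testing", "third-party audit"],
    performance_by_group={"urban": 0.94, "rural": 0.88, "low-vision users": 0.81},
)
```

Reporting performance by group rather than as a single headline number is what lets an auditor or assurance firm ask the equity questions the panel emphasized.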

  9. Provider resistance is real and won't be overcome by force: Clinicians fear job displacement and loss of autonomy. Solution: make AI so effortless and valuable that working without it makes no sense. Example: widespread adoption of digital entry and payment systems at airports because the UX made it inevitable. Overburdened systems (like Indian healthcare) won't adopt tools that add cognitive load.

  10. Vulnerable populations must be centered from the outset: The "curbcut effect"—designing for disabled people benefits everyone. Closed captioning (fought for by deaf community) is now mainstream. Ableism and socioeconomic inequality are under-represented in AI bias literature but cause massive harm in healthcare contexts.


Notable Quotes or Statements

  • Dr. Mahima Kala (University of Melbourne): "While interest in digital health technologies is growing rapidly around the world, very few solutions actually successfully translate into routine care. And even when they do, they can sometimes unintentionally harm or exclude certain populations."

  • Unidentified speaker (appears to be Dr. Kritika Girdhar or similar): "Begin with patients. Put patients at the center of it... Don't always default to AI. Sometimes the best solution could be something which is totally non-AI and that might be the cheapest way of doing it."

  • Tess (Tech UK): "Diverse teams catch bias early on... Homogeneous teams build homogeneous systems." Also: "Nothing about us without us" (deaf community motto on curbcut effect).

  • Mohammed Zooi (Microsoft India): "At the core of most responsible AI principles is preventing harm from AI... The other way is to make it inclusive AI so the benefits are not limited to certain demography."

  • Kaneka (WHO): "AI is cross-border by default... We cannot risk fragmentation... There has to be alignment, not harmonization" (harmonization is "a dangerous word").

  • Dr. Kritika Girdhar (responding to provider adoption resistance): "How do you enter an airport? I think we see more people adopting digital payments because it's so easy. You can't force anybody to do anything, especially in an overburdened system like Indian healthcare. You have to show clinicians why it makes their life easier."


Speakers & Organizations Mentioned

| Name | Role | Organization |
|------|------|--------------|
| Shambui (implied) | Project lead / Context-setter | Law (with University of Melbourne, NASA University of Law) |
| Dr. Mahima Kala | Digital health equity researcher (appeared via video message) | Center for Digital Transformation of Health & Validatron Lab, University of Melbourne |
| Dr. Kritika Girdhar | Associate Professor, Radiologist, Data Scientist | Delhi (AIIMS implied) |
| Kaneka | Technical Officer | WHO AI and Digital Health Division |
| Tess | Senior Program Manager, Ethics & AI Lead | Tech UK |
| Mohammed Zooi | Principal Researcher | Microsoft India |
| Ruta (implied) | Law/project organizer | Law, University of Melbourne |
| Moderator (unclear name) | Moderator | Law / Summit organizer |

Institutions & Programs Referenced:

  • Law (legal/policy research org)
  • University of Melbourne
  • WHO (World Health Organization)
  • Microsoft India
  • Tech UK
  • IIT (Indian Institutes of Technology)
  • ISC (implied Indian organization)
  • ICMR (Indian Council of Medical Research)
  • MIDAS (ICMR data initiative)
  • IBIA (DBT Imaging BioBank Initiative)
  • AI Kosh (data release platform)
  • CMIE (bio incubator within AIIMS)
  • Sankara Eye Hospital (Bangalore)
  • Center for Digital Transformation of Health (University of Melbourne)
  • Validatron Lab (simulation-based research facility)
  • WHO Global Initiative on Health (working group bringing WHO, ITU, and WIPO together)

Government Bodies & Regional Initiatives:

  • UK government (AI assurance initiative, trusted third party roadmap)
  • Indian government (TSI, biotech pillar)
  • Maharashtra public health department
  • GDPR (EU regulatory framework)
  • DPDP (Data Protection law — India)
  • FDA / Remedico (regulatory approval bodies)

Technical Concepts & Resources

Frameworks & Methodologies

  • Eight Tenets Framework – Co-designed iterative principles for responsible AI across the full AI lifecycle (design, development, deployment, monitoring). Not prescriptive steps; flexible and revisable.
  • Participatory/Co-Design Approach – Involving stakeholders (patients, providers, disabled communities) in design and decision-making from inception.
  • Epistemic Diversity – Valuing multiple ways of knowing; ensuring decision-making power is distributed among people with different expertise and lived experience.
  • Qualitative Assessment – Testing AI for inclusivity and cultural fit before deployment, using simulation and real-world testing environments.

Technical Approaches

  • Federated Learning – Training models on distributed data without centralizing it; model travels to data, not vice versa. Addresses data localization laws and privacy concerns.
  • Physics/Health-Informed Neural Networks – Incorporating domain expertise (radiologist knowledge, medical textbook taxonomy, biological constraints) directly into neural network architecture.
  • Simulation-Based Research – Using clinical simulation facilities with realistic home, primary, secondary care settings to identify workflow, technical, and equity risks before real-world deployment.
  • Privacy-Enhancing Technologies (PETs) – Anonymization, differential privacy, and other techniques to enable data sharing while protecting privacy.
  • System Model Cards – "Nutrition label" style documentation of AI system risk mitigation, transparency, and performance across populations.
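One of the privacy-enhancing techniques listed above, differential privacy, can be shown in a few lines: a hospital releases a noisy aggregate so that no single patient record is identifiable. A minimal sketch using the Laplace mechanism, with an illustrative query and epsilon (not anything specified on the panel):

```python
# Hedged sketch of one PET: the Laplace mechanism for differential
# privacy. The query and epsilon are illustrative examples.
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a count with Laplace noise.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so noise with scale 1/epsilon suffices.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
# e.g., "how many patients screened positive this month?"
noisy = laplace_count(true_count=128, epsilon=1.0, rng=rng)
```

The released value stays close to the true count in aggregate while giving a formal guarantee that any individual's presence in the dataset has only a bounded effect on the output, which is what lets such statistics cross borders when raw records cannot.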

Datasets & Data Initiatives

  • IBIA (Imaging BioBank Initiative) – DBT public dataset repository for medical imaging; benchmarked, gold-standard datasets with taxonomy documentation.
  • MIDAS (Medical Imaging, Diagnostic and Intervention Data Architecture and Search) – ICMR/ISC effort for gold-standard datasets and benchmarking.
  • AI Kosh – Data release platform for Indian AI health initiatives.
  • Kaggle Competitions – Benchmark contests for dataset development (mentioned as upcoming initiative).

Real-World Examples & Case Studies

  • Corneal Topography App (Smartphone-based) – Developed by Microsoft India + Sankara Eye Hospital; diagnoses keratoconus (leading cause of blindness in Indian teenagers). Open-sourced → adopted by institutions globally → regulated medical device (InstaKC) now approved in India and US.
  • Cough Against TB App (Wadhwani AI) – ML model showed gender bias (higher accuracy for men); team reworked the model to harmonize results even though overall accuracy decreased, prioritizing fairness.
  • Sign It (Game for Deaf/Hard of Hearing) – Generates sign language data while users play; tight feedback loops; users positioned as experts, not subjects.
  • High-Risk Pregnancy App (Pradesh) – 2018–2019 deployment; showed resistance from ASHA workers who feared being replaced; despite this, achieved significant reductions in infant and maternal mortality.
  • Retinal Scanning AI Example – Illustrative case of incorporating low-vision users, different age groups, genders, disease states, and rural vs. urban contexts into design.

Regulatory & Policy Frameworks

  • EU AI Act – Horizontal/sectoral regulatory approach.
  • GDPR (General Data Protection Regulation) – EU data protection standard.
  • DPDP (Digital Personal Data Protection Act) – India's data protection law; differs from GDPR, creating complexity for multinational deployment.
  • AI Assurance Ecosystem (UK) – Employs 12,000; worth £1.6B; projected 6x growth. Includes auditors, responsible AI leads, third-party assurance firms. Focus on healthcare, financial services, justice, emergency services (sectors with existing ethics frameworks).
  • Trusted Third-Party Roadmap (UK) – Four key areas: challenge for responsible AI practitioners; information and data access; skills/competency framework; innovation funding.
  • WHO Regulatory Considerations Working Group – Landscape analysis covering 95% of world population; identifying need for alignment on taxonomy, risk definitions, benchmarking, and postmarket surveillance.
  • India-UK Vision 2035 – Bilateral agreement endorsed by both prime ministers; TSI (Technology Security and Innovation) includes biotech as sixth of seven pillars.

Organizations & Standards

  • Validatron Lab (University of Melbourne) – Clinical simulation facility for co-designing and testing digital health tools in realistic but controlled environments. Includes home, primary, secondary care spaces and digital ecosystem sandbox.
  • Tech UK – UK industry association; 4+ dedicated healthcare members.
  • WHO Global Initiative on Health (GIH) – Multistakeholder working group including ITU (International Telecommunication Union), WIPO (World Intellectual Property Organization).
  • Microsoft's CLA Division (Compliance and Legal Affairs) – Internal entity that approves critical deployments; takes the "harshest case scenario" across jurisdictions (e.g., EU GDPR + India DPDP) and applies the stricter standard.

Other Notable Mentions

  • Curbcut Effect – Design principle: designing for disabled people benefits everyone (e.g., closed captioning). Credited to disabled community innovation. Applies to AI—inclusive design yields systemic benefits.
  • Disability-Forward Bias Research – Most AI bias literature focuses on sex and race (post-2018); ableism and socioeconomic inequality under-represented despite causing widespread harm in healthcare.
  • Functional vs. Small Datasets – Royal Society (UK) report suggests using diverse, smaller datasets or functional data to mitigate bias, as large datasets often represent dominant populations.

End of Summary