Expert Dialogue on AI for Health Systems
Executive Summary
This panel discussion at an AI summit brought together a global ecosystem of stakeholders—researchers, industry leaders, ethicists, policy makers, and funders—to address the gap between AI innovation and health system scale. The central argument: scaling AI in health requires integrated governance, ethical design, rigorous evaluation, strategic partnerships, and coordinated investment rather than isolated pilots. Key themes include the role of intellectual property as an enabler (not a barrier), the criticality of ethics-by-design, evaluation frameworks beyond RCTs, data accessibility, and government-led governance structures.
Key Takeaways
- **Governance is the orphan baby in AI health, and it must become the foundation.** Scale is impossible without aligned governance structures, regulation, procurement, workforce training, and infrastructure. Every implementer should audit their governance readiness before deploying.
- **Ethics-by-design is a requirement, not a nice-to-have.** WHO's training exists; what's missing is adoption. Developers and programmers must embed ethical principles in code and design, and funders should make this mandatory.
- **Data is not your problem; usability and mandates are.** Global and national datasets exist; what's missing is standardization, interoperability enforcement, and policy-level mandates with consequences. Investment in data infrastructure (ABDM, interoperability) will unlock AI development.
- **Partnerships must be real, early, and institutionalized.** Bring together government, research, industry, and funders before you pilot. Identify case studies that have succeeded (e.g., Andhra Pradesh's expert advisory group model) and replicate the structures, not just the solutions.
- **"Operationally real" beats perfect.** Solutions that work in crowded clinics, on frontline workers' devices, with existing health infrastructure, and aligned with government budgets will scale. Technical perfection without operational fit will remain a pilot.
Key Topics Covered
- Intellectual Property & Innovation Ecosystems — Patents, copyrights, and IP as incentive mechanisms rather than barriers to access; WIPO's role in calibrating IP policy
- Ethics in AI Development — WHO's ethical principles for AI in health; ethics-by-design vs. bolted-on compliance; training for developers and programmers
- Evaluation Frameworks — Moving beyond RCTs to pragmatic trials, AI testbeds, clinical trial units, and centers of excellence; benchmarking and assessment across multiple dimensions
- Data Accessibility & Standardization — Challenges in accessing localized, India-centric datasets; role of ABDM (Ayushman Bharat Digital Mission) and interoperability; data mandates from government
- Governance & Regulation — National policy frameworks, procurement mechanisms, health system readiness, workforce training, and patient trust
- Scaling from Pilots to Implementation — The "pilotitis" problem; operationally real solutions that work in PHCs and frontline settings; alignment with foundational infrastructure
- Cross-Sector Partnerships — Collaboration between governments, research institutions, industry, investors, and international organizations
- Industry Perspectives — Safety-by-design, constitutional AI, administrative task automation, and the need for benchmarks from domain experts
Key Points & Insights
- **IP as Ecosystem Enabler, Not Barrier**
  - IP mechanisms (patents, copyrights) incentivize investment and provide legal certainty for licensing and scaling.
  - Patent disclosure requirements keep knowledge in the public domain rather than locked away as trade secrets; this is critical for AI, where keeping data proprietary blocks rollout.
  - IP must be "calibrated by access": it works best when paired with mechanisms that ensure equitable access and licensing.
- **Ethics-by-Design Must Be Foundational**
  - WHO has published ethical principles and guidance (2021, 2024), but translation to practice remains the bottleneck.
  - Ethics cannot be bolted on as post-hoc compliance; it must be embedded in algorithm design, training, and programming from the start.
  - WHO offers online training for designers and programmers on how to integrate ethics, but industry adoption and institutional accountability remain underdeveloped.
- **Evaluation Beyond RCTs: Pragmatic Trials & Testbeds**
  - Randomized controlled trials are too slow for the pace of AI deployment; pragmatic trials, AI testbeds, and clinical trial units are needed for rapid feedback loops.
  - Evaluation must span clinical validity, cost-effectiveness, health system integration, equity outcomes (do benefits reach all population subgroups?), model drift/stationarity, and behavioral impact on clinicians.
  - Centers of excellence and evaluation networks across regions enable cross-learning and prevent reinvestment in the same problems (e.g., Rwanda's 300 simultaneous pilots on one topic).
- **Data Accessibility as a Structural Bottleneck**
  - India has data, but it is not usable: it is fragmented across Apollo, Manipal, state health systems, and biobanks, and neither standardized nor available for model training.
  - Global datasets exist and can be adapted for India and the Global South; an expert committee could identify which datasets are transferable rather than starting from scratch.
  - Data mandates at the policy level are critical: ABDM provides the infrastructure, but actual compliance and data sharing require central-level enforcement with consequences for non-compliance.
  - Consumer awareness of data consent and anonymization is extremely low (~5% of Indians understand what they are consenting to); building trust requires transparency.
- **Governance as the Foundation for Scale**
  - Governance includes regulation, ethics integration, procurement mechanisms, health system readiness, and patient trust, not just compliance frameworks.
  - Government must build in AI-related budget line items, establish regulatory oversight, and create institutional architecture (e.g., computational health units) before deploying AI tools.
  - Investments should be contingent on alignment with ethics, regulation, and governance standards, not on technical prowess alone.
- **"Operationally Real" Solutions Win Scale**
  - AI tools must function in real-world contexts: crowded PHCs, loud environments, resource-constrained settings; e.g., voice transcription for doctors writing clinical notes in noisy clinics.
  - Solutions must align with existing foundational infrastructure (ABDM, interoperability standards) and be governmentally procurable.
  - Pilot projects that remain pilots fail to scale; scale requires alignment with government systems, budgets, training capacity, and health worker buy-in.
- **Partnerships & Cross-Sector Coordination Are Non-Negotiable**
  - Effective implementation requires simultaneous alignment of research institutions, government health agencies, tech companies, civil society, and international organizations.
  - Funders (Gates, Wellcome) increasingly support institutional capacity building and convening rather than isolated grants; they fund governments to set up expert advisory groups, leverage networks, and co-design solutions.
  - Knowledge translation happens through networks, communities of practice, and shared learning agendas, not through one-off publications or pilots.
- **Patient Trust Is the Accelerant**
  - As patient trust in AI grows, adoption follows organically; it is not something industry or government must "push."
  - Building trust requires transparency, governance that demonstrably ensures safety, and clear communication about data usage and anonymization.
Notable Quotes or Statements
- **Samir Pujari (WHO):** "Governance is the key to scale... if we can make that happen and shape the investments and resources towards that I think we would have achieved something and made the mankind a better place."
- **Urike (WIPO):** "IP is not IP versus access... IP is a powerful tool but of course it needs to be calibrated and it needs to be calibrated by access."
- **Andreas (WHO):** "Ethics cannot be an afterthought; it has to be embedded from the start... ethics is not just compliance, it's a core part of the algorithm."
- **Dr. Mona (ICMR):** "You have to have the right partners at every stage... a small, well-done project with the right partnerships will take it to scale."
- **Sharendra (Gates Foundation):** "Governance and having that foresight to include [AI tools] as part of the [government budget] line item is very critical... and the most important is: are patients ready and trusting AI enough?"
- **Dene (Google):** "We try and influence without power... we have the right set of relationships with policy makers but we would love for them to also want to work with us."
- **Somia (Anthropic):** "In the most optimistic version of the world... we have basically solved or cured diseases like all diseases by 2030–2035... there are so many force multipliers for coordination to happen much more smoothly."
Speakers & Organizations Mentioned
| Name | Affiliation | Role |
|---|---|---|
| Samir Pujari | WHO HQ (Geneva) | Moderator; Lead on AI at WHO |
| Dr. Mona | ICMR | Director of center; Focal point on AI & digital health; Co-chair on evaluations |
| Emily Müller | Wellcome Trust | Technology Manager; AI health research & innovation in Africa; Chair, Deep Learning in Dha (2024) |
| Somia | Anthropic (Applied AI Research) | Researcher in mental health AI; suicide ideation detection; former head of model quality at OpenPipe |
| Dene | Google Health (India Lead) | 6+ years at Google; formerly head of product at Telenor Health, Bangladesh |
| Andreas | WHO (Health Ethics & Governance Unit) | Co-lead; Public health expert; Medical doctor; Focus on ethics in AI, surveillance, big data |
| Urike | World Intellectual Property Organization (WIPO) | Director of AI Policy; IP lawyer; PhD Chemistry; MBA Oxford |
| Sharendra | Gates Foundation (Delhi) | Senior Program Officer; Physician & public health specialist; 19+ years in health program management |
Other entities mentioned:
- ICMR (Indian Council of Medical Research)
- ABDM (Ayushman Bharat Digital Mission, India)
- Apollo, Manipal (health systems/biobanks)
- Government of Andhra Pradesh (digital health infrastructure pilot)
- NITO (mentioned in context of evaluation partnerships)
- Imperial College London (Emily's affiliation for urban health)
- Anthropic, Google, OpenPipe (industry partners)
Technical Concepts & Resources
- Constitutional AI — Anthropic's framework for guiding AI model training through ethical principles before deployment
- Ethics-by-Design — Embedding ethical principles and safety considerations into algorithm and code design from inception, not as post-hoc compliance
- Pragmatic Trials / AI Testbeds / Clinical Trial Units — Alternative evaluation approaches to RCTs; designed for rapid feedback and real-world deployment contexts
- ABDM (Ayushman Bharat Digital Mission) — India's foundational health data infrastructure; enables interoperability and standardization
- Digital Twins — Emerging technology for simulation and evaluation of AI solutions in health systems
- RCTs (Randomized Controlled Trials) — Traditional gold-standard evaluation; noted as too slow for pace of AI deployment
- WHO Ethical Principles for AI in Health — Published 2021 (ethics & governance guidance); expanded 2024 (LLMs guidance)
- ICMR Guidelines on Ethical Usage of AI — National governance framework referenced for responsible AI procurement and implementation
- Responsible AI Network — Global network for cross-learning on AI governance
- Voice Transcription for Clinical Notes — Example of operationally real AI use case: automating data entry for busy clinicians in crowded settings
- Model Drift/Stationarity — Key evaluation dimension; assesses whether AI model performance degrades over time in deployment
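The model-drift concept above can be made concrete with a simple monitoring sketch. This example is illustrative and was not discussed by the panel: it computes the Population Stability Index (PSI), a common heuristic for detecting shift between a model's score distribution at validation time and its scores in deployment. The binning scheme, sample data, and the conventional "PSI above 0.2 means significant drift" threshold are all assumptions, not a prescribed evaluation standard.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline score sample and a live (deployment) sample.

    Bin edges come from the baseline's quantiles, so each bin holds
    roughly equal baseline mass. Rule of thumb (an assumption here):
    PSI < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 significant drift.
    """
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)

    # Inner bin edges from baseline quantiles; outliers fall into the
    # first/last bin automatically via searchsorted.
    inner_edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))[1:-1]

    def bin_fractions(values):
        idx = np.searchsorted(inner_edges, values, side="right")
        return np.bincount(idx, minlength=n_bins) / len(values)

    # Floor fractions to avoid log(0) for empty bins.
    eps = 1e-6
    e_frac = np.clip(bin_fractions(expected), eps, None)
    a_frac = np.clip(bin_fractions(actual), eps, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Synthetic demonstration: one stable and one drifted deployment sample.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # model scores at validation time
stable = rng.normal(0.0, 1.0, 5000)     # deployment scores, same distribution
drifted = rng.normal(0.6, 1.2, 5000)    # deployment scores after a shift

print(f"PSI (stable deployment):  {population_stability_index(baseline, stable):.3f}")
print(f"PSI (drifted deployment): {population_stability_index(baseline, drifted):.3f}")
```

In a real deployment this check would run on rolling windows of live model scores (or key input features), with alerts wired to the kind of evaluation and governance processes the panel describes.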
This summary is based on a transcript of a panel discussion with partial audio quality issues and redundancies; some names and specific project details were difficult to parse with complete certainty, but all major themes, arguments, and organizational affiliations are preserved.
