Building Safe and Trusted AI: Ethics, Governance & Accountability
Executive Summary
This AI summit panel discussion addresses the urgent need to integrate ethics, governance, and accountability into AI deployment across multiple sectors—from academic publishing to healthcare, law, and meteorology. The speakers emphasize that responsible AI requires human oversight, transparency, and sector-specific ethical frameworks, while acknowledging that current regulatory structures globally remain incomplete in addressing AI-specific harms like discrimination and hallucination.
Key Takeaways
- Never assume AI output is accurate. Hallucinations, fictional citations, and plausible falsehoods are endemic. Every use of AI in professional, academic, legal, or medical contexts requires rigorous human verification—there is no substitute for professional scrutiny.
- Data uploaded to AI platforms is effectively public. Confidential manuscripts, client information, and proprietary data lose protection the moment they enter commercial AI systems. Treat uploaded data as permanently exposed.
- Responsibility and liability cannot be transferred to AI. Whether you are a lawyer, doctor, researcher, or publisher, using AI amplifies your professional obligations rather than diminishing them. The person deploying the system bears accountability for its outcomes.
- Risk categorization must be application-specific, not sector-wide. A generic "healthcare AI regulation" is too coarse. The same sector requires different safeguards depending on whether AI is assisting a diagnosis or autonomously controlling surgery.
- Governance frameworks must address the Global South explicitly. Western-centric data, algorithms, and deployment strategies systematically fail outside high-income contexts. Responsible AI governance requires localized, physics-informed models and mandatory inclusion of affected populations in oversight.
Key Topics Covered
- Ethics in Academic Publishing & AI Use – Policies for authors, editors, and researchers using generative AI; data privacy and confidentiality breaches
- AI Governance Frameworks – Multi-stage deployment models categorizing systems by risk level (high/medium/low); the role of human-in-the-loop oversight
- Data Bias & Fairness – Western-centric datasets and their impact on Global South deployments; training data limitations in regions like India and Thailand
- Human Rights & Cultural Context – Privacy protection in diverse governance contexts; differences between universal human rights frameworks and domestic interpretations
- Medical Ethics & Healthcare AI – ICMR guidelines; lifecycle oversight from development through deployment and monitoring
- Legal Liability & Professional Responsibility – Accountability of lawyers, judges, firms, and AI developers; hallucination and citation errors in legal tech
- Extreme Weather Prediction & Physics-Constrained AI – Failure modes of statistical models; physics-based constraints for Black Swan events
- Regulatory Gaps – Sector-specific risk identification; duty of care and mandatory disclosure in AI deployment
Key Points & Insights
- Data Privacy Breach Through Unconscious Uploading: Many researchers and professionals unknowingly violate confidentiality by uploading manuscripts and reports to AI tools. Once uploaded, data becomes part of training datasets, making it "public" and exposing proprietary information to potential patent or copyright theft.
- Hallucination & Citation Fabrication: AI models produce plausible-sounding but false citations and fictional references with high confidence. Users cannot rely on the apparent logical flow of AI output; human verification of every claim is mandatory, especially in academic, legal, and medical contexts.
- Sector vs. Application Risk Classification: The appropriate governance model categorizes AI risk by application use case rather than sector alone. For example, an AI diagnostic assistant analyzing pre-existing pathology tests may be lower-risk than AI-enabled robotic surgery—both in healthcare but vastly different in risk profile.
- Human Oversight is Non-Negotiable Across Risk Levels:
- High-risk systems: humans must have complete control and command
- Medium-risk systems: humans verify validity before final delivery
- Low/no-risk systems: human oversight can recede, but never disappear
- Fundamental principle: accountability always rests with humans, never with the AI system itself.
- Western Data Dominance Undermines Global South Applications: Most AI models are trained on datasets from mid-latitude, Western countries. Tropical and Global South regions lack sufficient satellite coverage and ground sensors. Simply imposing Western models on these regions causes failure; localized, physics-constrained models are essential.
- Data Quality & Robustness Exceed Model Architecture: An advanced algorithm with poor data produces skewed results. Continuous testing, bias auditing, and data validation must occur throughout development, not merely at deployment. Training data must be representative of the populations affected (a minimal representativeness check is sketched after this list).
- Transparency & Declaration Are Ethical Minimums: Users must declare when and how they have used AI. Some publishers require disclosure of specific prompts and sections enhanced by AI. Editing and minor polish are often acceptable; wholesale paragraph or manuscript generation is not.
- Liability is Distributed but Responsibility is Clear:
- Lawyers cannot delegate ethical responsibility to AI; using AI adds layers of responsibility
- AI developers may not be primarily liable for hallucinations but must patch known issues or liability emerges
- Law firms face vicarious liability for tool misuse by their staff
- The American Bar Association (Formal Opinion 512) recognizes human oversight as essential, and time spent on verification as billable
- Institutional Trust Deficits Compound AI Governance Challenges: In countries like Thailand, centralized state authority and hierarchical cultural norms create resistance to transparency and public scrutiny. Privacy protection requires civil society involvement and open mechanisms, not just top-down policy implementation.
- Inclusive, Affordable AI vs. Elite-Only Access: AI governance frameworks must ensure that responsible AI reaches underserved populations (rural, non-urban, lower-income) rather than remaining a tool for urban elites. This requires affordability, localization, and cultural relevance—not just technical soundness.
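To make the bias-auditing point above concrete, here is a minimal sketch of a representativeness check for a single attribute of a training set. The records, attribute name, and population shares are hypothetical illustrations, not data from the panel; a real audit would repeat this across every protected attribute and region discussed.

```python
from collections import Counter

def representation_gap(records, attribute, population_shares):
    """Compare a training set's subgroup shares against known population
    shares for one protected attribute (e.g., region, sex, age band)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected_share  # negative = under-represented
    return gaps

# Hypothetical example: a dataset skewed toward urban records.
records = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
print(representation_gap(records, "region", {"urban": 0.35, "rural": 0.65}))
# urban ≈ +0.45, rural ≈ -0.45 -> rural populations badly under-represented
```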
Notable Quotes or Statements
- Dr. Sachin Sharma (Director General): "AI should be like UPI—not like nuclear technology reserved only for the selected people or only the urban elite."
- Dr. Gita (Director, National Centre for Science Communication and Policy Research): "Data privacy is very critical. Please ensure that you don't upload things onto those platforms. Ultimately, accountability rests with the humans. You cannot say the AI gave this product, so I am not responsible."
- Dr. Morti (Former Head, R&D, Ministry of Electronics & Information Technology): "When you use an AI system, we should understand that it is a system developed by humans. It may have some inherent errors or miscalculations. That's why these issues will come on the trust issue."
- Justice Nagaratna (quoted by Dr. Nitu Rajam): "Artificial intelligence is different, but natural intelligence doing things artificially is something which we cannot condone."
- Dr. Nitu Rajam (National Law University): "The lawyer takes responsibility completely of the legal advice, examining any kind of advice by scrutinizing documents before courts, not to mention advice given to clients."
- Sorab Kapil (Co-founder, Biosky): "AI training is built on statistical assumptions that the future will statistically replicate the past. But for extreme weather events, physics-constrained models outperform purely statistical approaches."
- Dr. Tul (Director, Center for Human Rights, Thailand): "The mechanism of the state should interpret rights strictly, not just in a catch-all form or take authority to authorize everything themselves. This is one of the main challenges in Thailand."
Speakers & Organizations Mentioned
| Role/Title | Name | Organization |
|---|---|---|
| Moderator/Director | Dr. Amit | (Unnamed summit) |
| Director General | Dr. Sachin Sharma | (Unnamed AI research institution; "Voice of Global South" initiative with 157 think tanks across 90 countries) |
| Publication Ethics Expert | Dr. Gita | National Centre for Science Communication and Policy Research; publishes 15 research journals |
| AI Governance & Policy | Dr. Morti | Former Head, R&D Unit, Ministry of Electronics & Information Technology (India); Dean of Entrepreneurship, Amity University |
| Human Rights & Privacy | Dr. Tul Fak Dinavich | Center for Human Rights, Thailand |
| Bioethics & Medical AI | Dr. Roi Matur | Head, Bioethics Unit, Indian Council of Medical Research (ICMR) |
| Legal AI & Ethics | Dr. Keshinas K. Ravish Shinas | Adjunct Professor, National Law University; Associate Faculty Fellow, Center for Responsible AI, IIT Madras |
| Weather & Extreme Events AI | Sorab Kapil | Co-founder and Director, Biosky Space Innovations (IIT Delhi Research Park); focuses on extreme weather modeling for power and critical infrastructure |
| Legal Liability & AI | Dr. Nitu Rajam | National Law University; focus on AI liability and professional responsibility |
| AI Regulation & Compliance | Dr. Nup Chadri | Centre for the Study of Law and Governance |
Technical Concepts & Resources
AI Models & Approaches
- Neural Networks – Subject of Dr. Morti's 1992–1996 doctoral research, long predating today's large language models
- Large Language Models (LLMs) – Chief source of hallucinations and citation errors
- Generative AI – Focal point of ethical and legal discussions
- Physics-Constrained Models – Superior to purely statistical models for extreme weather prediction; account for physical laws rather than only historical data patterns
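As an illustration of the physics-constrained idea, the sketch below (using NumPy) adds a penalty for physically impossible predictions to an ordinary data-fit loss. The non-negative-rainfall constraint and all numbers are hypothetical toys; production systems penalize residuals of actual governing equations (e.g., conservation laws), not this simple bound.

```python
import numpy as np

def physics_constrained_loss(y_pred, y_obs, lam=1.0):
    """Toy composite loss: data fit plus a penalty for violating a
    physical constraint. Here the 'physics' is just non-negativity of
    predicted rainfall; real models penalize residuals of the governing
    equations instead."""
    data_loss = np.mean((y_pred - y_obs) ** 2)                 # fit to history
    physics_penalty = np.mean(np.maximum(-y_pred, 0.0) ** 2)   # negative rain is unphysical
    return data_loss + lam * physics_penalty

y_obs = np.array([0.0, 2.5, 10.0])    # observed rainfall (mm), invented values
y_pred = np.array([-1.0, 2.0, 9.0])   # a purely statistical model can go negative
print(physics_constrained_loss(y_pred, y_obs, lam=10.0))
```

Raising `lam` trades historical fit for physical plausibility, which is why such models degrade more gracefully on Black Swan events outside the training distribution.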
Data & Training Issues
- Data Hallucinations – AI-generated false information presented with confidence
- Data Bias – Training data non-representative of populations affected; Western/mid-latitude dominance
- Data Poisoning – Intentional or unintentional introduction of misrepresentative or manipulated data
- Data Validation & Audit – Required throughout AI lifecycle, not only at deployment
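One cheap way to honor the "validation throughout the lifecycle" point is a continuous drift check comparing live inputs against training-time statistics. A minimal sketch follows; the values and the three-standard-error tolerance are assumptions for illustration.

```python
import statistics

def drift_check(train_values, live_values, z_tol=3.0):
    """Flag a feature whose live mean drifts more than z_tol standard
    errors from its training mean -- one cheap, continuous validation
    step among many (range checks, null checks, label audits)."""
    mu = statistics.mean(train_values)
    sd = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    se = sd / (len(live_values) ** 0.5)
    z = abs(live_mu - mu) / se if se else float("inf")
    return z > z_tol, z

drifted, z = drift_check(train_values=[20, 22, 21, 19, 23] * 20,
                         live_values=[30, 31, 29, 32, 28])
print(drifted, round(z, 1))  # True -> inputs have drifted; investigate before trusting outputs
```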
Governance & Frameworks
- ICMR Guidelines 2023 – Indian Council of Medical Research lifecycle guidelines for AI in healthcare, including 10 principles: autonomy, beneficence, non-maleficence, justice, inclusivity, privacy/confidentiality, transparency, explainability, accountability, liability
- EU AI Act – Referenced as a risk-tiered regulatory framework; establishes duty-of-care language
- Personal Data Protection Act (PDPA) 2019 (Thailand) – Privacy protection framework; implementation challenges
- Digital Personal Data Protection Act 2023 (India) – Assigns responsibility by scale of operations (significant data fiduciaries) but not by sector
- American Bar Association Formal Opinion 512 (2024) – Guideline (non-binding) requiring human scrutiny of AI-generated legal content; recognizes billable time for verification
Risk Categorization Framework
- High-Risk Applications – Require complete human control; examples: robotic surgery, critical infrastructure decisions
- Medium-Risk Applications – Require human verification before deployment; examples: AI-assisted diagnosis on pre-existing pathology tests
- Low/No-Risk Applications – Reduced human oversight; examples: editing, grammar checking
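A minimal sketch of this application-level (not sector-level) tiering, with a human-in-the-loop gate, is below. The tier assignments and application names are illustrative assumptions, not drawn from any official classification.

```python
from enum import Enum

class Risk(Enum):
    HIGH = "high"      # human retains complete control and command
    MEDIUM = "medium"  # human verifies validity before final delivery
    LOW = "low"        # oversight recedes but never disappears

# Classified per application, not per sector: two healthcare uses can
# land in different tiers (illustrative assignments only).
RISK_BY_APPLICATION = {
    "robotic_surgery": Risk.HIGH,
    "pathology_report_triage": Risk.MEDIUM,
    "grammar_checking": Risk.LOW,
}

def deliver(application, ai_output, human_approved):
    """Block high/medium-risk output that lacks human sign-off."""
    tier = RISK_BY_APPLICATION[application]
    if tier in (Risk.HIGH, Risk.MEDIUM) and not human_approved:
        raise PermissionError(f"{application}: human sign-off required ({tier.value} risk)")
    return ai_output  # accountability stays with the human deployer either way

print(deliver("grammar_checking", "fixed text", human_approved=False))
```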
Regulatory & Accountability Concepts
- Duty of Care – Professional obligation not explicitly codified in current Indian law but foundational to ethics
- Vicarious Liability – Law firms responsible for staff misuse of AI tools
- Mandatory Public Disclosure – Information about AI systems should be disclosed for audit, analogous to financial regulation (SEBI model)
- Incident Response Teams – Required for medium/high-risk AI system deployment
- Human-in-the-Loop – Mandatory oversight at different levels depending on risk classification
- Hallucination Mitigation – Checklisting, citation verification, and cross-referencing protocols
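For the citation-verification step, one hedged sketch: checking whether a cited DOI actually resolves against the public Crossref REST API (https://api.crossref.org). This catches only fabricated DOIs; a resolving DOI still needs a human to confirm the reference actually supports the AI's claim.

```python
import requests  # third-party; pip install requests

def doi_resolves(doi, timeout=10):
    """Return True if Crossref knows this DOI. A 404 strongly suggests
    a fabricated citation; a hit still requires a human to check that
    title and authors match the claimed reference."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200

for doi in ["10.1038/nature14539",       # real (LeCun et al., "Deep Learning", Nature 2015)
            "10.9999/definitely.fake"]:  # plausible-looking but invented
    print(doi, "->", "found" if doi_resolves(doi) else "NOT FOUND: verify by hand")
```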
Sector-Specific Ethical Standards
- Academic Publishing – Transparency on AI use; no wholesale manuscript generation; data confidentiality; proper citation
- Legal Profession – Professional responsibility under the Advocates Act; ethical guidelines updated globally; sanctions for fabricated citations
- Healthcare – Lifecycle ethics oversight; patient autonomy; informed consent for data use; validation across diverse populations
- Meteorology & Extreme Events – Physics-constrained modeling; hyperlocalized datasets; probabilistic output (e.g., "80% chance") rather than false precision
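To illustrate the probabilistic-output point just above, a minimal sketch: report the fraction of ensemble members crossing a threshold as a probability rather than a single falsely precise number. The ensemble values are invented for illustration.

```python
def exceedance_probability(ensemble_mm, threshold_mm):
    """Fraction of ensemble members forecasting rainfall above a
    threshold -- reported as a probability, not a point value."""
    hits = sum(1 for member in ensemble_mm if member > threshold_mm)
    return hits / len(ensemble_mm)

ensemble = [62, 55, 48, 71, 66, 53, 44, 58, 69, 60]  # 10 members (mm), hypothetical
p = exceedance_probability(ensemble, threshold_mm=50)
print(f"{p:.0%} chance of >50 mm")  # -> "80% chance of >50 mm"
```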
Tools & Detection Methods
- Plagiarism Detection Tools – May misidentify legitimately original or reused content; reliability varies
- AI Content Detection Software – Emerging tools attempting to identify AI-generated text; reliability uncertain (e.g., the U.S. Constitution has been flagged as AI-generated)
- Grammarly – Acceptable for editing; not acceptable for wholesale content generation
- ChatGPT and other OpenAI systems – Cited as examples of systems with knowledge cutoffs; cannot access real-time data
Methodological Notes & Limitations
- The transcript contains significant audio degradation and repetition (likely transcription artifacts), which may affect the precision of certain quoted phrases.
- Dr. Morti's risk categorization framework (high/medium/low-risk by application, not sector) is explained clearly but contradicts Dr. Nup Chadri's assertion that sectors should be the trigger for responsibility—Dr. Morti's rebuttal in closing partially clarifies this apparent tension.
- The ICMR 2023 guidelines are referenced but not fully detailed in the transcript.
- The American Bar Association Opinion 512 is cited as a guideline without binding authority; its applicability outside U.S. legal practice is not discussed.
