AI Horizons: Building Safe and Trusted Intelligence Systems
Executive Summary
This multi-session AI summit panel discussion addresses critical gaps between the global north and global south in AI safety, governance, and equitable access. Speakers emphasize that AI safety requires a shared responsibility model involving regulators, tech companies, academic institutions, and citizens—and that the global south cannot simply adopt northern regulatory frameworks but must develop graded, principle-based approaches suited to local contexts.
Key Takeaways
- Shared Responsibility Is Non-Negotiable: No single actor, whether big tech, regulators, or users, can secure AI systems alone. Success requires deep collaboration, trust, and explicit accountability mechanisms across all layers.
- Principle-Based, Graded Regulation Over Copy-Paste Laws: The global south cannot adopt the EU AI Act wholesale. Instead, countries should establish broad legal principles (transparency, accountability, non-harm) and grant sectoral regulators the flexibility to adapt them.
- Data Sovereignty ≠ Data Isolation: Building indigenous AI capabilities requires control over data, models, and deployment, but this works best alongside rapid diffusion of AI benefits to underserved populations (the DPI Stack model).
- Cognitive Autonomy Is a Human Right: Protecting citizens from algorithmic manipulation, bias, and "cognitive colonialism" must be enshrined in legal frameworks, alongside traditional privacy protections.
- The Regulation-Innovation Paradox Is False: Minimal enabling regulation actually accelerates innovation by providing legal clarity, liability frameworks, and protection for innovators. Unregulated tech moves faster, not better.
Key Topics Covered
- AI Safety & Security Preparedness: The divide between global north and global south in AI risk management and incident response capabilities
- Data Equity & Democratization: How emerging economies can develop indigenous AI models and data infrastructure rather than serving as data consumers only
- AI Governance & Accountability: Moving from self-regulation toward enforceable legal frameworks and liability models
- Cognitive Sovereignty: Protecting citizens' cognitive autonomy and rights against algorithmic influence and "cognitive colonialism"
- Collective Responsibility Models: Defining roles for big tech, regulators, academic institutions, and end users in securing AI systems
- Misinformation & Content Integrity: Three-layer approach to preventing AI-generated disinformation (industry standards, regulation, user education)
- AI Diffusion vs. Development: Balancing creation of indigenous AI capabilities with rapid deployment of AI benefits to underserved populations
- Regulatory Innovation: Dynamic, principle-based regulation that can evolve alongside AI capabilities
- Algorithmic Bias & Diversity: Ensuring AI systems account for caste, class, gender, and cultural differences in diverse societies
- Institutional Data Governance: How universities and research bodies should manage and open data while protecting privacy
Key Points & Insights
- AI Security Has No Single Owner: Dr. Noy articulated a "7×4 matrix" of accountability: seven layers of the AI application stack (data, models, applications, content, infrastructure, identity, deployment) × four stakeholder groups (developers, deployers, users, consumers). All must share responsibility.
- Global South Must Build Indigenous Solutions: Dr. Trapati (NILLET) emphasized that the global south generates vast data but is treated as a consumer market. Countries like India should develop small foundational models for their own problems (local-language AI, region-specific content moderation, etc.) rather than relying on English-language Western models.
- Liability Is Graded, Not Binary: Dr. Roy introduced a "graded liability model" in which responsibility for AI harms is distributed proportionately among coders, companies, marketers, and data principals. A single tragic case illustrated the gap: a student's suicide attempt prompted by an AI model, where accountability was unclear because no single party owned the problem.
- Regulation ≠ Stifled Innovation: Dr. Roy directly countered the innovation-vs.-regulation false dichotomy. More than 200 AI bills are expected to be passed globally in 2026. Minimal enabling regulation promotes innovation by providing legal certainty and remedies for innovators.
- Dynamic Regulation Required: Static laws (like India's 26-year-old IT Act) cannot address tomorrow's AI challenges. Sectoral regulators need flexibility to adapt principle-based governance frameworks to evolving AI capabilities.
- Cognitive Colonialism as Existential Risk: Without cognitive sovereignty (ownership of one's own attention, awareness, and decision-making) citizens become "cognitive slaves" to big tech. This is framed as an existential threat to national sovereignty and individual autonomy.
- AI-Generated Misinformation Requires All Three Layers: Industry self-regulation (e.g., the C2PA content-provenance framework), government regulation, and citizen critical thinking must operate simultaneously. Relying on any single layer will fail.
- Trust Is Built Through Invisible Safety Engineering, Not Philosophy: Dr. Bell argued that observable safety efforts signal a lack of safety. Like electricity infrastructure, AI safety must be engineered invisibly into systems, not debated endlessly.
- Data Centers as Geopolitical & Economic Infrastructure: Prof. Salaf noted that data center placement decisions involve geopolitics, sustainability (water, electricity), and employment opportunity. This requires cross-sector government coordination, not market-driven location choices.
- Agent-to-Agent Interactions Are Uncharted Territory: Recent jailbreak research showed that AI agents communicating with each other exhibit emergent behaviors not controlled by human design intent. This represents a fundamentally new governance challenge; "legacy conversations" about regulation are insufficient.
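The "7×4 matrix" of accountability described in the session can be sketched as a simple data structure. The layer and stakeholder names below come from the summary; the cell contents and the `assign` helper are purely illustrative placeholders, not the speaker's framework:

```python
# Illustrative sketch of the "7x4 matrix" of AI accountability.
# Layer and stakeholder names are taken from the summary; the duty
# strings and the assign() helper are hypothetical.

LAYERS = ["data", "models", "applications", "content",
          "infrastructure", "identity", "deployment"]
STAKEHOLDERS = ["developers", "deployers", "users", "consumers"]

# Every (layer, stakeholder) cell carries an explicit accountability
# note, so "nobody owns this" is impossible by construction.
matrix = {(layer, actor): "shared responsibility (to be defined)"
          for layer in LAYERS for actor in STAKEHOLDERS}

def assign(layer: str, actor: str, duty: str) -> None:
    """Record a concrete duty for one cell of the matrix."""
    if (layer, actor) not in matrix:
        raise KeyError(f"unknown cell: {(layer, actor)}")
    matrix[(layer, actor)] = duty

# Example (hypothetical) assignments:
assign("data", "developers", "document provenance and consent")
assign("deployment", "deployers", "incident response and rollback")

print(len(matrix))  # 28 cells: 7 layers x 4 stakeholder groups
```

The point of the structure is that coverage is exhaustive: auditing reduces to checking that no cell still holds the placeholder value.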
Notable Quotes or Statements
- Dr. Trapati (NILLET): "In the age of AI, any idea can be pushed... it's very important for global south and India that we protect our data and develop models for ourselves instead of relying on large English models."
- Prof. Alukra (IIM Kolkata): "AI is just an extension of an information technology tool... designed to support our decisions to be made faster, more transparent, and more accurate." (Context: Cautioning against viewing AI as a universal solution.)
- Dr. Roy (AI Accountability Institute): "If you're wanting self-regulation to govern accountability, then you are on the wrong platform because the train is never going to arrive."
- David Roser (ASPI): "What is worse for national sovereignty than having excessive control in the hands of private sector companies in other countries?"
- Dr. Bell (CERT-IN): "Trust and safety are recognized when they are invisible. A safety which is visible means there is no safety."
- Mandar Kulkarni (Microsoft): "India's success was on DPI [Digital Public Infrastructure]. Now we need to replicate that success on AI. We need to take AI out of boardrooms and get it to the common most people."
- Dr. Roy: "Indians have started walking towards cognitive colonialism. They are ready to become cognitive slaves of big tech companies."
Speakers & Organizations Mentioned
| Speaker | Role / Organization | Key Focus |
|---|---|---|
| Dr. MM Trapati | DG, NILLET | Data democratization, indigenous AI models, educational access |
| Prof. Alukra | Vice Chancellor / Director, IIM Kolkata | AI in higher education, examination evaluation, institutional data governance |
| Prof. Salaf | Professor of Statistics & Data Science, IIT Kharagpur | Data centers, employment, sustainable infrastructure |
| Dr. Sanjay Bell | DG, CERT-IN (Indian Computer Emergency Response Team) | Cybersecurity, AI safety infrastructure, shared responsibility |
| David Roser | Analyst, Australian Strategic Policy Institute (ASPI) | Geopolitics, international cooperation, strategic risk |
| Dr. Pavanandan Roy | Founder, Global AI Accountability Law & Governance Institute | AI accountability, liability, legal frameworks, cognitive rights |
| Mandar Kulkarni | National Security Officer, Microsoft India & South Asia | DPI diffusion, shared responsibility models, enterprise security |
| [Unnamed moderator] | Founder, India Future Foundation | Policy think tank on technology, policy, geopolitics |
Technical Concepts & Resources
| Concept / Tool | Context | Significance |
|---|---|---|
| Sign Word | NILLET student project | Converts video/audio to Indian Sign Language (not ASL) |
| Audio-to-Audio Translation | NILLET product | Language translation (e.g., English ↔ Manipuri, Hindi ↔ Manipuri) for northeast India |
| NILLET Digital Industry Platform | AI-intensive education platform | 50,000 registered students in 4 months; generates research-grade educational data |
| 7×4 Matrix of AI Accountability | Dr. Noy's framework | 7 layers (data, models, apps, content, infra, identity, deployment) × 4 stakeholders (developers, deployers, users, consumers) |
| Graded Liability Model | Dr. Roy's legal framework | Proportionate responsibility distributed across coders, companies, marketers, data principals |
| C2PA Framework | Content provenance standard | Industry collaboration (Coalition for Content Provenance and Authenticity) to detect and label AI-generated misinformation |
| EU AI Act | Regulatory reference | Risk-based approach (low/high risk); cited as starting point but not directly applicable to global south |
| DPDP Act (India) | Privacy regulation | Digital Personal Data Protection Act, 2023; noted as inadequate for AI governance without amendments |
| SynthID | Google watermarking tool | Watermarking and detection of AI-generated content |
| Model Armor | Security mechanism | Filters both prompts (to prevent injection) and responses (to prevent data exfiltration) |
| AI Harms Registry | Dr. Roy's initiative | Launched Jan 2026; collating 9 categories of documented AI harms for evidence-based policymaking |
| Regulatory Sandboxes | Governance mechanism | Safe testing grounds for innovation while regulations evolve |
| DPI Stack (Digital Public Infrastructure) | India's model | JAM (Jan Dhan, Aadhaar, Mobile) + UPI; referenced as successful diffusion model applicable to AI |
| Maltbook Agent Case | Research example | AI agents in close interaction exhibiting emergent behaviors not controlled by human design |
| Cognitive Neuro-Rights | Legal concept | Proposed human right to cognitive autonomy, protection from algorithmic manipulation |
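The graded liability model listed in the table above implies a proportional split of responsibility across actors rather than a single liable party. A minimal arithmetic sketch, assuming invented weights and damages (the actor names come from the summary; the model itself is a legal framework, not code):

```python
# Minimal sketch of a "graded liability" split: responsibility for a
# harm is apportioned across actors in proportion to assigned weights.
# Actor names come from the summary; the weights are hypothetical.

def grade_liability(weights: dict[str, float], damages: float) -> dict[str, float]:
    """Split a damages amount proportionally to each actor's weight."""
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return {actor: damages * w / total for actor, w in weights.items()}

# Hypothetical apportionment for one incident:
shares = grade_liability(
    {"coder": 1.0, "company": 3.0, "marketer": 1.5, "data_principal": 0.5},
    damages=600_000.0,
)
print(shares["company"])  # 300000.0 (weight 3.0 out of a total 6.0)
```

How the weights would actually be set (by statute, regulator, or court) is exactly the open legal question the panel raised; the sketch only illustrates the "graded, not binary" arithmetic.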
Additional Context & Warnings
Acknowledged Gaps & Challenges
- Transparency Black Box Problem: Large language models lack explainability; regulators cannot easily audit decision-making.
- Attribution Gap: Law enforcement struggles to determine accountability when AI causes harm across multiple jurisdictions.
- Compute & Data Deficit: Global south lacks capital and infrastructure to develop competitive AI systems; creates dependence on Western models.
- Linguistic Bias: Most foundational models trained on English; poor performance on Indian languages and dialects; introduces cultural bias.
- MLAT Delays: Mutual Legal Assistance Treaty (MLAT) processes slow information flow between big tech and regulators; undermines shared responsibility.
- Shadow AI: Employees/users deploying AI tools outside corporate governance; creates unmanaged security risks.
- Geopolitical Weaponization: Data centers could become targets in international conflicts; siting decisions must balance security and development.
Open Questions Left for Future Sessions
- How do agent-to-agent interactions evolve governance requirements?
- What is the right balance between "art" (principles, values) and "science" (technology, engineering) in building trustworthy AI?
- How can 1.4+ billion citizens develop critical thinking skills fast enough to resist AI-generated misinformation?
- How does taxation and economic value capture change when AI agents perform most knowledge work?
Document Quality Note: The transcript contains significant repetition and audio degradation (likely from automatic transcription). Some speaker attributions and quotes may be incomplete or ambiguous. This summary prioritizes substantive policy arguments and technical insights over perfect fidelity to original speaker intent.
