Scaling Enterprise-Grade Responsible AI Across the Global South
Executive Summary
This panel discussion from the AI Impact Summit in Delhi examines how organizations in India and the Global South can build trustworthy, scalable AI systems while balancing innovation with responsible governance. The panelists emphasize that responsible AI requires integrated design across infrastructure, models, regulation, and human oversight—and that the Global South has a unique opportunity to leapfrog legacy architectures and build sovereign AI systems suited to regional needs, languages, and regulatory contexts.
Key Takeaways
- Responsible AI is a systems problem, not a model problem: It spans infrastructure (hardware, energy, cooling), governance (human oversight, agent verification), data (quality, privacy, localization), and policy (regulatory frameworks). Siloing these creates fragile systems.
- The Global South cannot and should not copy Silicon Valley's playbook: Working with noisy data, resource constraints, and multilingual populations requires different technical approaches (synthetic data, federated learning, domain-specific models, modular infrastructure) and different governance models (regulatory sandboxes, public compute, local sovereignty).
- Sovereignty is non-negotiable for regulated and consumer-facing AI: Organizations controlling critical decisions in banking, healthcare, government, and e-commerce need to own or deeply customize their models. Translating intent through generic LLMs loses precision in high-stakes domains.
- India is uniquely positioned to leapfrog: With 300,000+ summit participants, 500M+ e-commerce users, strong startup ecosystems, government backing (procurement of 60,000 GPUs), and research talent, India can build indigenous software systems and products at world scale, not just services.
- Trust is built through transparency and design choices, not just disclaimers: Opting in by default, clear disclosure of agent-vs-human interactions, and fair pricing and quality across regions are trust fundamentals that must be baked into product and infrastructure design from the start.
Key Topics Covered
- Guardrails and Trust Frameworks: How organizations build safety mechanisms into AI systems without over-regulating or under-regulating
- AI Infrastructure & Sustainability: Data center design, energy efficiency, cooling technologies, and the role of infrastructure in responsible AI
- Sovereignty and Domain-Specific Models: Building region-specific, sector-specific language models that maintain data control and regulatory compliance
- Responsible AI at Scale: Operating AI systems for 500M+ users while maintaining fairness, transparency, and personalization
- Research-Industry Collaboration: Bridging the gap between academic AI research and production deployment in regulated sectors
- Agent Systems and Interoperability: Identity verification, agent-to-agent communication, and multi-agent orchestration frameworks
- Global South Challenges: Working with heterogeneous data, intermittent compute access, multilingual environments, and resource constraints
- Policy and Regulation: Creating regulatory sandboxes, public AI infrastructure, and balanced governance frameworks
- Transparency and User Consent: Disclosure practices, opt-in vs. opt-out defaults, and building consumer trust
Key Points & Insights
- Guardrails Are Not One-Size-Fits-All: Babak (Cognizant) emphasizes avoiding both extremes, neither "magic pixie dust" over-automation nor rubber-stamping every decision. Instead, responsible AI requires layered approaches: human-in-the-loop review, agent-checking-agent mechanisms, uncertainty quantification, and error correction through redundancy.
- Infrastructure Design Must Precede Model Design: Amod (Sabur) argues that sustainable, responsible AI starts with modular, flexible data center architecture that can accommodate chip generations that evolve roughly every two years. Retrofitting old designs is costly; early design choices around cooling and modularity deliver 30%+ energy savings and near-zero IT failure rates.
- Global South Data Challenges Require Synthetic and Localized Solutions: Anupam (academic researcher) highlights that clean-data assumptions fail in the Global South. Solutions include synthetic data generation with tunable noise, federated learning for privacy-preserving model merging, and deepfake detection trained on noisy, real-world conditions rather than laboratory datasets.
- Domain-Specific Models (DSMs) vs. Foundation Models: Tanvi (Ekatech) and Balaji (Flipkart) both stress that trillion-parameter LLMs work for broad intent recognition but fail at domain specificity (pricing, regional variations, compliance). The emerging pattern: use LLMs at the top of the funnel, then orchestrate to smaller language models (SLMs) for specificity. This architecture gives organizations control over critical decisions.
- Sovereign AI Means Control Over Cognition, Not Just Infrastructure: Tanvi frames sovereignty as more than infrastructure ownership: it is about controlling the models, data, and intelligence that drive regulatory-critical and consumer-facing decisions. This is non-negotiable for banking, healthcare, and government sectors.
- Transparency and Consent by Default (Opt-In, Not Opt-Out): Balaji describes Flipkart's practice of defaulting to opt-in for agent-based customer service, with clear disclaimers. This contrasts with the industry norm (Apple, Google) of opt-out defaults, a key trust differentiator in regulated and consumer-facing applications.
- Agentic Identity and Standards Are Still Emerging: Babak flags that agent-to-agent authentication and identity verification lack well-established standards, creating security risks as multi-party agent ecosystems scale. Google and others are working on standards (A2A), but this remains an open problem.
- Industry-Academia-Government Collaboration Is Essential: Anupam references Singapore's AI.sg model, a single-window consortium spanning research, innovation, technology transfer, commercialization, and regulation. This prevents academics from shipping weak systems to production without enterprise and policy oversight.
- Public Computing Infrastructure Is a Strategic Asset: Babak recommends that governments create shared GPU/processing capacity available to students, startups, and researchers, not just private companies. This attracts talent, decentralizes innovation, and builds indigenous AI capability.
- Regulatory Sandboxes Beat Front-Running Regulation: Rather than betting on perfect regulation upfront, Babak proposes safe, controlled sandbox environments where startups, academia, and regulators experiment together, learn, and iteratively build frameworks suited to local contexts.
Notable Quotes or Statements
"AI is real and both the promise and the risk is real, so guardrails are needed. We can't fall off either ledge of trusting AI or mistrusting it to the point where we debilitate it."
— Babak (Chief AI Officer, Cognizant)
"There's no one word model that can fix everything. One size doesn't fit all, especially in regulated industries."
— Tanvi (Ekatech, formerly Palantir/OpenAI)
"If you are going to have a conversation with an agent, your UX teams have to look at how the customer understands who they're talking to. We have a disclaimer saying you might be talking to a machine here, and if you do not want that conversation, you can opt out. But by default, you have to opt in rather than opt out."
— Balaji (Flipkart)
"Control on cognition and intelligence is as important as control on infrastructure. That's what's paramount."
— Tanvi (Ekatech)
"I don't want American babies or Chinese babies. I want Indian babies to the world—that's what domain models do."
— Tanvi (paraphrasing sentiment about sovereignty)
"We do not have a software brand in India that sells on a worldwide scale. This opportunity provides India to leapfrog because we have the scale, we have the people, we have the intelligence, we have the ability to think differently at a price point nobody can imagine."
— Balaji (Flipkart)
"Academic research needs to stay grounded and test waters with real-world scenarios, not mature in isolation."
— Anupam (Academic researcher)
Speakers & Organizations Mentioned
| Speaker | Role/Organization | Key Focus |
|---|---|---|
| Sunita Mi | Managing Director, Primus Partners | Moderator, conference organizer |
| Babak | Chief AI Officer, Cognizant | Enterprise AI guardrails, reliability, agentic systems |
| Anupam | Academic Researcher (Singapore/NUS context) | Deepfake detection, robust AI for Global South, academia-industry collaboration |
| Amod | (Sabur - infrastructure company) | Data center design, cooling, sustainability, modularity |
| Tanvi | Founder/CEO, Ekatech; formerly Palantir, OpenAI, UBS | Sovereign LLMs, domain-specific models, Vatican/NYC partnerships |
| Balaji | (Flipkart) | E-commerce at scale (500M users), fairness, SLM orchestration, transparency |
| Prime Minister | Government of India | Real-time AI translation systems (referenced) |
| Ministry Team | Government of India | AI Summit coordination, public GPU infrastructure (60,000 GPUs) |
Key Organizations Referenced:
- Cognizant, Flipkart, Palantir, OpenAI, Google, Microsoft, Nvidia
- Government of India (AI Mission), Ministry-level coordination
- Ekatech (startup building sovereign LLMs)
- Sabur (data center infrastructure)
- Singapore's AI.sg (consortium model)
Technical Concepts & Resources
Key Architectural Patterns
- Agentic Orchestration Framework: Dynamic routing to task-specific SLMs based on intent; agents decide which model to invoke
- Mixture of Experts (MoE): Multiple specialized models (LLMs for broad context, SLMs for domain specificity) composed into a single system
- Federated Learning: Privacy-preserving model merging where organizations train locally without sharing raw models or training data
- Human-in-the-Loop / Human-on-the-Loop: Layered oversight where humans review agent decisions and uncertainty estimates
- Agent-to-Agent Verification: Identity standards for multi-party agent communication (emerging; Google working on A2A standards)
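The routing pattern above (an LLM classifies intent at the top of the funnel, then hands off to domain SLMs) can be sketched as follows. This is a minimal illustration only: the model names, the route table, and `classify_intent()` are invented stand-ins, not any API or architecture described by the panelists.

```python
# Sketch of LLM-to-SLM agentic orchestration: classify intent, then route
# to a domain-specific small model, falling back to a general model.
from typing import Callable

def pricing_slm(query: str) -> str:
    # Domain-specific small model for pricing questions (stubbed).
    return f"[pricing-slm] {query}"

def compliance_slm(query: str) -> str:
    # Domain-specific small model for compliance questions (stubbed).
    return f"[compliance-slm] {query}"

def general_llm(query: str) -> str:
    # Broad foundation model used when no domain route matches.
    return f"[general-llm] {query}"

ROUTES: dict[str, Callable[[str], str]] = {
    "pricing": pricing_slm,
    "compliance": compliance_slm,
}

def classify_intent(query: str) -> str:
    # Stand-in for the intent classifier; a real system would call a
    # foundation model here rather than match keywords.
    q = query.lower()
    for intent in ROUTES:
        if intent in q:
            return intent
    return "general"

def orchestrate(query: str) -> str:
    # Route to a domain SLM when one matches, else fall back to the LLM.
    handler = ROUTES.get(classify_intent(query), general_llm)
    return handler(query)

print(orchestrate("What is regional pricing for this SKU?"))
# → "[pricing-slm] What is regional pricing for this SKU?"
```

The design point the panel makes is visible even in this toy: the organization owns the route table and the domain handlers, so critical decisions never leave models it controls.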
Data & Model Approaches
- Synthetic Data Generation: Tunable noise injection to simulate real-world conditions (defake detection case study)
- Domain-Specific Models (DSMs): Sector- or region-specific LLMs trained on internal/native data (not the open internet), maintaining regulatory control
- Smaller Language Models (SLMs): Task-specific or domain-specific models (alternative to trillion-parameter LLMs) for precision and cost
- Multi-Lingual Models: Critical for Global South where translation adds latency and loss of nuance
Infrastructure & Operations
- Liquid Cooling Technologies: Reduce energy overhead, enable higher chip density, improve sustainability
- Modular Data Center Design: Accommodate evolving chip generations (Nvidia releases 3-4 generations every 2 years); avoid early obsolescence
- Energy Metrics: KPIs around energy consumption per token, water consumption per token
- Resilience Through Redundancy: Error correction, reliability validation (zero IT failures over 3+ years cited)
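The energy-per-token KPI mentioned above reduces to simple unit arithmetic: metered facility energy over a window divided by tokens served in that window. The numbers below are invented for illustration.

```python
# Sketch of the energy-per-token KPI: convert metered kWh to joules and
# divide by token throughput over the same measurement window.

def energy_per_token(energy_kwh: float, tokens_served: int) -> float:
    """Return average energy in joules consumed per generated token."""
    if tokens_served <= 0:
        raise ValueError("tokens_served must be positive")
    joules = energy_kwh * 3.6e6  # 1 kWh = 3.6e6 J
    return joules / tokens_served

# e.g. 120 kWh over a window that served 1.2 billion tokens:
print(round(energy_per_token(120, 1_200_000_000), 3))  # → 0.36 (J/token)
```

A water-per-token metric follows the same shape, with litres metered over the window in place of kWh.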
Governance & Safety
- Regulatory Sandbox: Controlled environment where startups, academia, and regulators jointly experiment before broader rollout
- Uncertainty Quantification: Agents output confidence estimates; high uncertainty triggers human escalation
- Access Controls & Encryption: Data at rest and in motion; role-based access; secure inter-service data exchange
- Deepfake Detection: Real-time image/audio authentication using fact-checking and news-source verification
- Model Risk Management: Established banking framework (extended to AI) for governance and accountability
- Opt-In by Default: Transparency design pattern—users must actively consent to agent interaction, not auto-enrolled
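The uncertainty-quantification bullet above describes a gate: decisions with low confidence escalate to a human instead of auto-executing. A minimal sketch, assuming an agent that already emits a confidence score (the threshold and scores here are illustrative):

```python
# Sketch of uncertainty-gated human escalation: confidence below a
# threshold routes the agent's decision to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # 0.0-1.0, produced by the agent's uncertainty estimate

def dispatch(decision: Decision, threshold: float = 0.85) -> str:
    # High-confidence decisions execute automatically; the rest escalate.
    if decision.confidence >= threshold:
        return f"auto-executed: {decision.action}"
    return f"escalated to human review: {decision.action}"

print(dispatch(Decision("approve refund", 0.92)))  # auto path
print(dispatch(Decision("close account", 0.40)))   # escalation path
```

Choosing the threshold is itself a governance decision: regulated actions would typically carry a stricter threshold, or mandatory review regardless of confidence.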
Emerging Research Areas
- Cyber Security of AI: Hallucinations, jailbreaking, data extraction vulnerabilities
- Robust AI for Noisy Environments: Training under intermittent compute, heterogeneous data, poor connectivity
- AI Alignment & Jailbreak Resilience: Preventing unintended outputs and adversarial attacks
- Infrastructure as Code for Sustainability: Designing data centers modularly and sustainably from blueprint phase
Tools & Platforms Referenced
- Palantir: Customizable AI/ML stack for enterprises (enterprise play beyond US defense)
- OpenAI APIs: Large-scale data access for regulatory/compliance use cases
- Ekatech Platform: Scalable, customizable, domain-oriented generative AI platform with multi-layer guardrails
- Flipkart's Agentic Framework: Real-time customer service, image-to-listing generation, regional pricing optimization
Policy & Strategic Implications
- Government Role: Create ecosystem conditions (public compute, sandboxes, regulation) rather than build AI systems directly
- Public GPU Infrastructure: 60,000 GPUs procured and distributed by Indian government to states and institutions (strategic asset)
- Regulation Timing: Balanced approach—neither front-running with perfect regulation nor ignoring safety. Sandboxes enable learning.
- Leapfrogging Opportunity: India can skip SaaS/Web 2.0 legacy patterns and build indigenous, sovereign, world-scale software/products
- India's Unique Position: 1.4B people, 500M+ internet users, strong startup ecosystem, government backing, research talent—conditions for world-leading AI infrastructure and products
Note on Sourcing & Accuracy: This summary is derived directly from the transcript. No external sources were consulted; all claims and attributions derive from speaker statements in the recorded panel discussion. Where speaker names are unavailable or unclear in the transcript, organizational roles and context are used to identify them.
