Scaling AI for Billions: Building Digital Public Infrastructure
Executive Summary
This panel discussion examines the dual nature of AI in cybersecurity—as both an unprecedented opportunity to manage security at scale and a profound new risk surface that enterprises and nation-states are unprepared to address. Speakers emphasize that while AI adoption is accelerating rapidly across critical infrastructure, foundations remain fragile, creating an urgent need for new assessment frameworks, governance models, and a fundamental rethinking of how organizations approach security, trust, and resilience.
Key Takeaways
- There is an urgent foundational crisis: before deploying AI agents, enterprises must first secure fragile legacy digital infrastructure; you cannot "build a skyscraper on a bungalow foundation." Assessment frameworks and institutional oversight are necessary prerequisites.
- Governance architecture matters more than model choice: rather than debating LLM A vs. LLM B, organizations should focus on building AI Operating Systems with explicit trust, governance, and control layers that ensure agents act only within defined boundaries.
- The AI divide will replicate the digital divide: just as cyber security maturity varies across sectors (financial vs. health), an "AI divide" will open up across enterprises. Capacity building and standard assessment frameworks are urgent national priorities.
- AI-native business disruption is coming within 5 years: new companies built around AI-first models will disrupt incumbents the way Uber, Booking.com, and fintech players disrupted traditional industries. Organizations without strategic foresight on this risk will miss the window.
- Shift from "compliance risk" to "strategic risk" framing: boards must move beyond checkbox compliance (GDPR, sectoral rules) to quantifying operational and strategic AI risk in financial terms (reputation impact, service provider dependency, model reliability) and communicating it to stakeholders; a minimal scoring sketch follows this list.
Key Topics Covered
- AI as dual-edged technology: Opportunities in security automation vs. emerging vulnerabilities and attack vectors
- Infrastructure fragility: Existing digital infrastructure in enterprises already compromised; AI amplification of these weaknesses
- Model security & data poisoning: Protecting AI models from jailbreaking, confidential information leakage, and adversarial data manipulation
- Nation-state and adversarial AI: Asymmetric threat landscape where adversaries have greater motivation and resources to weaponize AI
- Rapid adoption vs. maturity gap: 90% of large enterprises want to deploy AI agents, but only 67% have data governance; only 33% understand AI threats
- The control plane problem: Unlike traditional systems, AI systems have no separate control plane—data itself becomes the control mechanism
- Agentic systems & autonomous decision-making: Risks of AI agents taking unsupervised actions without human oversight
- Model drift & determinism: AI systems become non-deterministic over time; difficulty distinguishing cyber failures from design flaws
- Digital Public Infrastructure (DPI) security: Special risks when AI is embedded in healthcare, telecom, financial, and power sector systems
- Trust and governance frameworks: Need for "AI Operating Systems" with explicit trust, governance, and control layers
- Cyber security talent and opportunities: India positioned as a global hub for AI-security talent development
- Strategic risk assessment: Boards need frameworks to quantify AI-related risks in financial terms (compliance, operational, and strategic risk lenses)
Key Points & Insights
- The ambition-reality gap is critical: 90% of enterprises want AI agents, but only ~20% have the foundational maturity (data governance, compute capacity, threat understanding, innovation capability) to deploy them safely. This gap represents systemic risk.
- AI shifts the asymmetry in cyber attacks: historically, defenders must protect everything while attackers need only one success. AI creates a "level playing field": defenders now have AI-enabled SOCs with agentic automation that can detect threats at unprecedented scale and speed.
- Data poisoning replaces network intrusion as the primary vector: unlike traditional systems with separate control and data planes, AI systems have no administrative control plane; the data itself IS the control mechanism. Model drift and poisoning happen through inputs, making detection and prevention fundamentally harder (see the drift-monitoring sketch after this list).
- Infrastructure will multiply vulnerabilities 100x: adding AI to already-fragile enterprise digital infrastructure (islands of OEM technologies, unsecured operational technology) does not add risk linearly; it compounds fragility exponentially. East-west traffic and edge inferencing create massive new attack surfaces and strain.
- Silicon to software requires complete rethinking: new applications must handle probabilistic models in contexts requiring deterministic outputs (financial transactions, healthcare, citizen services). Every layer (silicon, systems, applications) must be redesigned for AI's exponential performance demands and inherent uncertainty (a guarded-output sketch also follows this list).
- Distributed mesh security replaces perimeter defense: security can no longer be bolted on as appliances. It must be infused throughout the network fabric as virtual, distributed instances that move with policy requirements, not fixed hardware locations.
- AI Operating Systems are essential, not optional: organizations need platforms with context layers, agentic layers, AND explicit trust/governance layers that control what agents can and cannot do. Single-LLM comparisons miss the point; governance architecture is what matters.
- National security implications demand proactive frameworks: nations adopting AI gain competitive advantage; those that do not will fall behind. But capacity gaps exist (cyber security maturity varies by sector), so assessment frameworks, sandboxes, and institutional oversight (CERT, sectoral regulators) are essential guardrails.
- Deepfakes and social engineering are industrialized at scale: AI does not just help defenders; it industrializes attacks. Spear-phishing, identity attacks, and manipulation now happen at unprecedented volume and sophistication, with measurable potential impact on organizational reputation and customer trust.
- AI scales decisions, not just transactions: previous technology revolutions (cloud, internet) helped companies scale transactions. AI will scale decisions, requiring entirely new organizational paradigms, culture, talent models, and risk frameworks that most companies have not begun to adopt.
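To ground the data-poisoning and model-drift point above, here is a minimal monitoring sketch (an assumption of this summary, not something presented on the panel): because model inputs effectively act as the control plane, one pragmatic defense is to compare live input distributions against a trusted baseline and alert on significant shift. It uses SciPy's two-sample Kolmogorov-Smirnov test; the threshold and data are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag a feature whose live distribution differs significantly from the
    trusted baseline (two-sample Kolmogorov-Smirnov test)."""
    result = ks_2samp(baseline, live)
    return result.pvalue < alpha  # low p-value: distributions likely differ

rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # trusted, training-era inputs
live = rng.normal(loc=0.6, scale=1.0, size=5_000)      # subtly shifted live inputs

if drift_alert(baseline, live):
    print("input drift detected: quarantine the batch and review data provenance")
```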
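Likewise, for the probabilistic-models-in-deterministic-contexts point, a common guard pattern is sketched below: validate model output against a strict schema and retry or escalate rather than act on malformed output. `call_model` is a hypothetical stand-in for any LLM client, not a real API, and the field schema is invented for illustration.

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client; replace with a real call."""
    raise NotImplementedError

# A decision is accepted only if every field is present with the right type.
REQUIRED_FIELDS = {"account_id": str, "amount": float, "approved": bool}

def parse_decision(raw: str) -> dict:
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"bad or missing field: {field}")
    return data

def guarded_decision(prompt: str, max_retries: int = 3) -> dict:
    """Retry on malformed output; escalate to a human instead of guessing."""
    for _ in range(max_retries):
        try:
            return parse_decision(call_model(prompt))
        except ValueError:
            continue  # non-conforming output is discarded, never acted on
    raise RuntimeError("model output never validated; route to human review")
```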
Notable Quotes or Statements
- Daisy (Cisco): "The good news is we are as ready as everybody else. The bad news is maybe we're not as ready as we think we are."
- Laxmi (Tata Communications): "I don't think people have woken up to the fact that they are fast running towards the cliff."
- Laxmi (Tata Communications): "You can't build a skyscraper with a foundation of a bungalow, which is what they're trying to do."
- Narins (Government official): "The adversarial part is that nation states or big enterprises use AI as a tool with much greater motivation and thought process than those using AI for productivity gains."
- Laxmi (Tata Communications): "AI will scale decisions, and when you're scaling decisions, you need a different paradigm altogether."
- Richard: "With AI, this is becoming a big issue, because how can you distinguish a scam from a real communication when the scam communication looks exactly like the real communication?"
- Dashan (Cyber Security Company): "Cyber security has been a very asymmetric equation: intruders need to get one thing right, we need to get everything right. With AI, we are now at a level playing field."
- Pradeep: "AI is quietly reshaping the risk equation within the enterprise right now."
Speakers & Organizations Mentioned
| Role/Affiliation | Identifier | Key Focus |
|---|---|---|
| Panelist | Daisy | Cisco; AI readiness index; network security and DPI resilience |
| Panelist | Narins | Government official; national security implications; technology adoption timelines |
| Panelist | Laxmi (Lakshmi) Sar | Tata Communications; critical infrastructure fragility; AI Operating Systems; 5-year outlook |
| Panelist | Richard | Resilience and human factors; deepfakes; agent autonomy risks |
| Panelist | Dashan | Cyber security company; CXO/board perspectives; hope vs. fear in AI adoption |
| Panelist | Pradeep | Strategic risk and trust frameworks; board-level risk quantification |
| Panelist | [Name unclear] | Government/CERT official; DPI, sectoral regulation, assessment frameworks, sandboxing |
| Moderator | Samrad (inferred) | Panel facilitation |
Organizations/Entities Referenced:
- Cisco
- Tata Communications (Tatacom)
- RBI (Reserve Bank of India) – sandbox regulations
- CERT India
- CIPC (Cyber, Information & Critical Infrastructure Protection)
- Department of DRD (likely Department of Research & Development)
- Financial sector regulators
- Telecom sector regulators
- Microsoft (Security Copilot mentioned)
- EU AI Act
- GDPR (General Data Protection Regulation, EU) / DPDP (Digital Personal Data Protection Act, India)
Technical Concepts & Resources
| Concept | Definition / Context |
|---|---|
| Data poisoning | Adversarial manipulation of input data to cause model drift and unpredictable behavior over time |
| Model drift | Degradation of model performance due to data distribution shifts or adversarial inputs; non-deterministic behavior |
| Agentic systems / AI agents | Autonomous AI entities that take commands and execute unsupervised actions on behalf of users (e.g., SOC automation, banking applications) |
| East-west traffic | Internal network traffic between systems (as opposed to north-south, external); exponentially increases with edge inferencing |
| Edge inferencing | Running AI inference at network edge (on devices/edge servers) rather than centrally; creates massive API call overhead and infrastructure strain |
| Control plane vs. data plane | Traditional systems separate administrative control (control plane) from traffic handling (data plane); AI systems lack this separation—data itself is the control mechanism |
| Jailbreaking | Exploiting AI model vulnerabilities to bypass safety guardrails |
| Deepfakes | AI-synthesized media (audio, video, text) indistinguishable from authentic communications |
| Spear-phishing | Targeted phishing attacks; now "industrialized at scale" with AI-generated personalized content |
| SOC (Security Operations Center) | Team/platform for 24/7 threat monitoring; increasingly augmented with AI agents for automated analysis and response |
| AI Operating System | Proposed platform architecture with layers: (1) context layer, (2) agentic layer, (3) trust & governance layer; enables controlled, auditable AI decision-making (see the governance-gate sketch after this table) |
| Distributed mesh security | Security policies infused throughout network infrastructure as virtual, mobile instances rather than fixed hardware appliances |
| Silicon security | National-level consideration; securing hardware (chips, processors) against AI-driven vulnerabilities |
| Assessment framework for AI systems | Tool to evaluate whether AI systems are secure and functionally performing as claimed; sector-specific (health, telecom, finance) frameworks needed |
| ETI framework | Mentioned by government official; framework for AI system evaluation |
| Sandboxing (regulatory) | Controlled environments (RBI sandbox, telecom sandbox) where new technologies/applications can be tested before production deployment |
| Provenance & authenticity | Mechanisms to verify source, lineage, and trustworthiness of data/communications; foundational to measuring trust |
| DPDP / GDPR compliance | Regulatory compliance risk lens; insufficient alone for systemic risk management |
| Three risk lenses | (1) Compliance risk (regulatory checkbox), (2) operational risk (model reliability, service provider dependency), (3) strategic risk (reputation, financial impact, competitive disruption) |
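As a minimal illustration of the trust & governance layer in the AI Operating System concept above (a sketch under this summary's own assumptions, not any panelist's product), agent actions can be routed through a deny-by-default policy gate that enforces bounds and produces an audit log. The policy table, action names, and executor are all hypothetical.

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("governance")

# Policy data: which actions agents may take, and within what bounds.
# Anything absent from this table (e.g. "delete_account") is denied.
POLICY = {
    "read_ticket": {},
    "refund": {"max_amount": 100.0},
}

def governed_execute(action: str, params: dict, executor):
    """Trust/governance gate: deny by default, enforce bounds, audit everything."""
    policy = POLICY.get(action)
    if policy is None:
        audit.warning("DENIED %s %s (not in policy)", action, params)
        raise PermissionError(f"agent may not perform: {action}")
    cap = policy.get("max_amount")
    if cap is not None and params.get("amount", 0.0) > cap:
        audit.warning("DENIED %s %s (exceeds cap %.2f)", action, params, cap)
        raise PermissionError(f"{action} exceeds policy cap")
    audit.info("ALLOWED %s %s", action, params)
    return executor(**params)

# Usage with a hypothetical executor:
governed_execute("refund", {"amount": 25.0}, lambda amount: f"refunded {amount}")
```

This mirrors the panel's point: control lives in an explicit governance layer outside the model, so what an agent may do is a matter of auditable policy rather than prompt wording.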
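On the provenance & authenticity row: the panel's observation that a scam can look exactly like a real communication is why authenticity must be cryptographic rather than visual. A minimal sketch follows (assumed; real deployments would use PKI or content-credential standards such as C2PA rather than a shared demo key).

```python
import hashlib
import hmac

SHARED_KEY = b"hypothetical-shared-secret"  # real systems: PKI or content credentials

def sign(message: bytes) -> str:
    """Attach a provenance tag that the receiver can verify."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the message came from the key holder."""
    return hmac.compare_digest(sign(message), tag)

msg = b"Payment instruction: transfer 10,000 to account X"
tag = sign(msg)
assert verify(msg, tag)                              # authentic message passes
assert not verify(b"transfer to account Y", tag)     # altered message fails
```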
Gaps & Missing Elements
- No specific AI models or tools named (beyond generic mentions of LLMs and Microsoft Security Copilot)
- No quantified metrics on attack success rates, cost of breaches, or ROI on security investments
- Limited discussion of international coordination beyond reference to EU AI Act and national security
- No mention of specific standards (ISO, NIST, OWASP) for AI security assessment
- Unclear implementation timelines for proposed frameworks and governance models
- Limited concrete examples of AI-driven attacks or successful AI-based defenses (mostly conceptual discussion)
