Scaling AI for Billions: Building Digital Public Infrastructure

Executive Summary

This panel discussion examines the dual nature of AI in cybersecurity—as both an unprecedented opportunity to manage security at scale and a profound new risk surface that enterprises and nation-states are unprepared to address. Speakers emphasize that while AI adoption is accelerating rapidly across critical infrastructure, foundations remain fragile, creating an urgent need for new assessment frameworks, governance models, and a fundamental rethinking of how organizations approach security, trust, and resilience.

Key Takeaways

  1. There is an urgent foundational crisis: Before deploying AI agents, enterprises must first secure fragile, legacy digital infrastructure—you cannot "build a skyscraper on a bungalow foundation." Assessment frameworks and institutional oversight are necessary prerequisites.

  2. Governance architecture matters more than model choice: Rather than debating LLM A vs. LLM B, organizations should focus on building AI Operating Systems with explicit trust, governance, and control layers that ensure agents act only within defined boundaries.

  3. The cyber-security/AI divide will replicate the digital divide: Just as there's a cyber security maturity gap across sectors (financial vs. health), there will be an "AI divide" across enterprises. Capacity building and standard assessment frameworks are urgent national priorities.

  4. AI-native business disruption is coming within 5 years: New companies built around AI-first models will disrupt incumbents the way Uber, Booking.com, and fintech disrupted traditional industries. Organizations without strategic foresight on this risk will miss the window.

  5. Shift from "compliance risk" to "strategic risk" framing: Boards must move beyond checkbox compliance (GDPR, sectoral rules) to quantifying operational and strategic AI risk in financial terms—reputation impact, service provider dependency, model reliability—and communicate this to stakeholders.
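The panel's call to quantify AI risk in financial terms can be sketched numerically. A minimal illustration of an expected-annual-loss roll-up across the three lenses; all category names, probabilities, and impact figures below are made-up assumptions, not panel data:

```python
# Hypothetical sketch: expressing AI risk in financial terms across the three
# lenses described above (compliance, operational, strategic). Every number
# here is an illustrative assumption.

RISKS = {
    "compliance":  [("GDPR/DPDP fine exposure", 0.05, 2_000_000)],
    "operational": [("model outage via provider dependency", 0.20, 500_000),
                    ("silent model-drift errors", 0.30, 250_000)],
    "strategic":   [("reputation hit from AI incident", 0.02, 10_000_000)],
}

def annualised_loss(risks):
    """Expected annual loss per lens: sum of probability * impact."""
    return {lens: sum(p * impact for _, p, impact in items)
            for lens, items in risks.items()}

if __name__ == "__main__":
    for lens, eal in annualised_loss(RISKS).items():
        print(f"{lens:>11}: ${eal:,.0f} expected annual loss")
```

Even a crude roll-up like this gives a board a single currency for comparing a regulatory fine against a reputational event, which is the shift in framing the speakers ask for.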

Key Topics Covered

  • AI as dual-edged technology: Opportunities in security automation vs. emerging vulnerabilities and attack vectors
  • Infrastructure fragility: Existing digital infrastructure in enterprises already compromised; AI amplification of these weaknesses
  • Model security & data poisoning: Protecting AI models from jailbreaking, confidential information leakage, and adversarial data manipulation
  • Nation-state and adversarial AI: Asymmetric threat landscape where adversaries have greater motivation and resources to weaponize AI
  • Rapid adoption vs. maturity gap: 90% of large enterprises want to deploy AI agents, but only 67% have data governance; only 33% understand AI threats
  • The control plane problem: Unlike traditional systems, AI systems have no separate control plane—data itself becomes the control mechanism
  • Agentic systems & autonomous decision-making: Risks of AI agents taking unsupervised actions without human oversight
  • Model drift & determinism: AI systems become non-deterministic over time; difficulty distinguishing cyber failures from design flaws
  • Digital Public Infrastructure (DPI) security: Special risks when AI is embedded in healthcare, telecom, financial, and power sector systems
  • Trust and governance frameworks: Need for "AI Operating Systems" with explicit trust, governance, and control layers
  • Cyber security talent and opportunities: India positioned as a global hub for AI-security talent development
  • Strategic risk assessment: Boards need frameworks to quantify AI-related risks in financial terms (compliance, operational, and strategic risk lenses)

Key Points & Insights

  1. The ambition-reality gap is critical: 90% of enterprises want AI agents, but only ~20% have the foundational maturity (data governance, compute capacity, threat understanding, innovation capability) to deploy them safely. This gap represents systemic risk.

  2. AI shifts the asymmetry in cyber attacks: Historically, defenders have had to protect everything while attackers needed only one success. AI creates a "level playing field": defenders now have AI-enabled SOCs with agentic automation that can detect threats at unprecedented scale and speed.

  3. Data poisoning replaces network intrusions as primary vector: Unlike traditional systems with separate control and data planes, AI systems have no administrative control plane—the data itself IS the control mechanism. Model drift and poisoning happen through inputs, making detection and prevention fundamentally harder.
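Because the data itself is the control mechanism, one pragmatic (if partial) defence is monitoring inputs for distribution shift. A minimal sketch using a toy Population Stability Index on synthetic data; this is an illustrative technique chosen here, not one the panelists named:

```python
# Hedged sketch: with no separate control plane to audit, the data plane
# itself must be watched. A Population Stability Index (PSI) compares live
# input distributions against a trusted baseline; sustained high PSI can
# flag drift or poisoning. Thresholds below are illustrative assumptions.
import math
import random

def psi(expected, observed, bins=10):
    """Population Stability Index between a baseline sample and a live sample."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp overflow to edges
            counts[max(i, 0)] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)

    e, o = histogram(expected), histogram(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

if __name__ == "__main__":
    random.seed(42)
    baseline = [random.gauss(0, 1) for _ in range(5000)]
    live_ok  = [random.gauss(0, 1) for _ in range(5000)]
    poisoned = [random.gauss(1.5, 1) for _ in range(5000)]  # shifted inputs
    print("clean PSI   :", round(psi(baseline, live_ok), 3))   # near 0
    print("poisoned PSI:", round(psi(baseline, poisoned), 3))  # well above 0.2
```

A common rule of thumb treats PSI above roughly 0.2 as significant shift, though any real deployment would tune thresholds per feature and pair this with provenance checks on the data sources themselves.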

  4. AI will multiply infrastructure vulnerabilities: Adding AI to already-fragile enterprise digital infrastructure (islands of OEM technologies, unsecured operational technology) does not add risk linearly; it compounds fragility. East-west traffic and edge inferencing create massive new attack surface and infrastructure strain.

  5. Silicon to software requires complete rethinking: New applications must handle probabilistic models in contexts requiring deterministic outputs (financial transactions, healthcare, citizen services). Every layer—silicon, systems, applications—must be redesigned for AI's exponential performance demands and uncertainty.

  6. Distributed mesh security replaces perimeter defense: Security cannot be bolted on as appliances anymore. It must be infused throughout the network fabric as virtual, distributed instances that move with policy requirements, not fixed hardware locations.

  7. AI Operating Systems are essential, not optional: Organizations need platforms with context layers, agentic layers, AND explicit trust/governance layers that control what agents can/cannot do. Single LLM comparisons miss the point—governance architecture is what matters.
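The trust/governance layer idea can be made concrete as a policy gate that every proposed agent action must pass before execution. A minimal sketch, assuming hypothetical names (`ALLOWED_ACTIONS`, `govern`) rather than any real product API:

```python
# Hedged sketch of a trust & governance layer: agent tool calls pass through
# an explicit policy gate before execution, so agents act only within defined
# boundaries. All names and limits here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str
    amount: float = 0.0

ALLOWED_ACTIONS = {"read_logs", "open_ticket", "refund"}
REFUND_LIMIT = 100.0  # actions beyond this escalate to a human

def govern(action: AgentAction) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed agent action."""
    if action.tool not in ALLOWED_ACTIONS:
        return "deny"                       # outside defined boundaries
    if action.tool == "refund" and action.amount > REFUND_LIMIT:
        return "escalate"                   # human-in-the-loop for high impact
    return "allow"
```

The point of the sketch is architectural: the gate sits outside the model, so swapping LLM A for LLM B leaves the trust boundary unchanged, which is exactly why the speakers argue governance architecture matters more than model choice.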

  8. National security implications demand proactive frameworks: Nations adopting AI gain competitive advantage; those that don't fall behind. But capacity gaps exist (cyber security maturity varies by sector). Assessment frameworks, sandboxes, and institutional oversight (CERT, sectoral regulators) are essential guardrails.

  9. Deep fakes and social engineering are industrialized at scale: AI doesn't just help defenders—it industrializes attacks. Spear-phishing, identity attacks, and manipulation now happen at unprecedented volume and sophistication, potentially impacting organizational reputation and customer trust measurably.

  10. AI scales decisions, not just transactions: Previous tech revolutions (cloud, internet) helped companies scale transactions. AI will scale decisions—requiring entirely new organizational paradigms, culture, talent models, and risk frameworks that most companies haven't begun to adopt.


Notable Quotes or Statements

  • Daisy (Cisco): "The good news is we are as ready as everybody else. The bad news is maybe we're not that ready as we think we are."

  • Laxmi (Tata Communications): "I don't think people have woken up to the fact that they are fast running towards the cliff."

  • Laxmi (Tata Communications): "You can't build a skyscraper with a foundation of a bungalow, which is what they're trying to do."

  • Narins (Government official): "The adversarial part is that nation states or big enterprises use AI as a tool with much greater motivation and thought process than those using AI for productivity gains."

  • Laxmi (Tata Communications): "AI will scale decisions and when you're scaling decisions, you need a different paradigm altogether."

  • Richard: "With AI, this is becoming a big issue because how you can distinguish a scam from a real communication when the scam communication looks exactly like the real communication."

  • Dashan (Cyber Security Company): "Cyber security has been a very asymmetric equation—intruders need to get one thing right, we need to get everything right. With AI, we are now at a level playing field."

  • Praeep: "AI is quietly reshaping the risk equation within the enterprise right now."


Speakers & Organizations Mentioned

| Role/Affiliation | Identifier | Key Focus |
| --- | --- | --- |
| Panelist | Daisy | Cisco; AI readiness index; network security and DPI resilience |
| Panelist | Narins | Government official; national security implications; technology adoption timelines |
| Panelist | Laxmi (Lakshmi) Sar | Tata Communications; critical infrastructure fragility; AI Operating Systems; 5-year outlook |
| Panelist | Richard | Resilience and human factors; deepfakes; agent autonomy risks |
| Panelist | Dashan | Cyber security company; CXO/board perspectives; hope vs. fear in AI adoption |
| Panelist | Praeep | Strategic risk and trust frameworks; board-level risk quantification |
| Panelist | [Name unclear] | Government/CERT official; DPI, sectoral regulation, assessment frameworks, sandboxing |
| Moderator | Samrad (inferred) | Panel facilitation |

Organizations/Entities Referenced:

  • Cisco
  • Tata Communications (Tatacom)
  • RBI (Reserve Bank of India) – sandbox regulations
  • CERT India
  • CIPC (Cyber, Information & Critical Infrastructure Protection)
  • Department of DRD (likely Department of Research & Development)
  • Financial sector regulators
  • Telecom sector regulators
  • Microsoft (security copilot mentioned)
  • EU AI Act
  • GDPR (General Data Protection Regulation) / DPDP (Digital Personal Data Protection Act)

Technical Concepts & Resources

| Concept | Definition / Context |
| --- | --- |
| Data poisoning | Adversarial manipulation of input data to cause model drift and unpredictable behavior over time |
| Model drift | Degradation of model performance due to data distribution shifts or adversarial inputs; non-deterministic behavior |
| Agentic systems / AI agents | Autonomous AI entities that take commands and execute unsupervised actions on behalf of users (e.g., SOC automation, banking applications) |
| East-west traffic | Internal network traffic between systems (as opposed to north-south, external); increases sharply with edge inferencing |
| Edge inferencing | Running AI inference at the network edge (on devices/edge servers) rather than centrally; creates massive API call overhead and infrastructure strain |
| Control plane vs. data plane | Traditional systems separate administrative control (control plane) from traffic handling (data plane); AI systems lack this separation, so the data itself is the control mechanism |
| Jailbreaking | Exploiting AI model vulnerabilities to bypass safety guardrails |
| Deepfakes | AI-synthesized media (audio, video, text) indistinguishable from authentic communications |
| Spear-phishing | Targeted phishing attacks; now "industrialized at scale" with AI-generated personalized content |
| SOC (Security Operations Center) | Team/platform for 24/7 threat monitoring; increasingly augmented with AI agents for automated analysis and response |
| AI Operating System | Proposed platform architecture with layers: (1) context layer, (2) agentic layer, (3) trust & governance layer; enables controlled, auditable AI decision-making |
| Distributed mesh security | Security policies infused throughout network infrastructure as virtual, mobile instances rather than fixed hardware appliances |
| Silicon security | National-level consideration; securing hardware (chips, processors) against AI-driven vulnerabilities |
| Assessment framework for AI systems | Tool to evaluate whether AI systems are secure and functionally performing as claimed; sector-specific (health, telecom, finance) frameworks needed |
| ETI framework | Mentioned by government official; framework for AI system evaluation |
| Sandboxing (regulatory) | Controlled environments (RBI sandbox, telecom sandbox) where new technologies/applications can be tested before production deployment |
| Provenance & authenticity | Mechanisms to verify source, lineage, and trustworthiness of data/communications; foundational to measuring trust |
| DPDP / GDPR compliance | Regulatory compliance risk lens; insufficient alone for systemic risk management |
| Three risk lenses | (1) Compliance risk (regulatory checkbox), (2) Operational risk (model reliability, service provider dependency), (3) Strategic risk (reputation, financial impact, competitive disruption) |
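The provenance and authenticity concept above can be illustrated with a keyed HMAC tag: a receiver who shares a secret with the sender can verify that a message really originated there, one small building block against AI-generated impersonation. A minimal sketch, with key handling deliberately simplified (a real deployment would use managed keys, not a hard-coded secret):

```python
# Hedged sketch of provenance & authenticity: an HMAC tag binds a message to
# a shared secret, letting the receiver reject communications that merely
# *look* authentic. The hard-coded key is an illustrative simplification.
import hashlib
import hmac

SECRET = b"shared-provenance-key"  # in practice: fetched from a KMS/HSM

def sign(message: bytes) -> str:
    """Produce a hex HMAC-SHA256 tag for a message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)
```

This only authenticates the channel, not the content's truthfulness, but it directly addresses the quoted problem of distinguishing a scam from a real communication when the two look identical.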

Gaps & Missing Elements

  • No specific AI models or tools named (beyond generic mentions of LLMs and Microsoft Security Copilot)
  • No quantified metrics on attack success rates, cost of breaches, or ROI on security investments
  • Limited discussion of international coordination beyond reference to EU AI Act and national security
  • No mention of specific standards (ISO, NIST, OWASP) for AI security assessment
  • Unclear implementation timelines for proposed frameworks and governance models
  • Limited concrete examples of AI-driven attacks or successful AI-based defenses (mostly conceptual discussion)