IT Services

Synthesized from 52 talks · India AI Impact Summit 2026

Overview

India's IT services sector stands at an inflection point that is simultaneously an existential threat and a generational opportunity. The 25-year labor arbitrage model—exporting billable hours at scale—is being structurally dismantled by AI agents capable of performing knowledge work at a fraction of the cost, and the sector's own clients are the ones deploying them. What replaces it is not yet settled, but the emerging consensus across 52 talks points toward a model in which Indian firms orchestrate AI systems rather than staff them, own business outcomes rather than deliver time-and-materials, and export domain intelligence rather than coding capacity. With 6 million software engineers who could become 18 million effective workers through AI-driven productivity multipliers, and a domestic market of 1.4 billion people generating training signal at population scale, India has genuine structural advantages—but only if firms, individuals, and government move with urgency on workforce transformation, infrastructure investment, and governance architecture simultaneously.


Key Insights

  • The labor arbitrage model is permanently broken; domain orchestration is the replacement. IT services firms must pivot from "people delivering services" to "domain experts orchestrating AI agents." The competitive moat is 25 years of accumulated process and domain knowledge—but only if companies can translate that into AI system design, not headcount. Firms that revert to time-and-materials pricing during procurement cycles are making a slow exit decision.

  • Outcome-based pricing is the non-negotiable business model reset. Providers must move from per-seat software licensing and utilization metrics to sharing in measurable business outcomes—cost reduction, revenue uplift, lives improved. The technology to enable this exists; the organizational will to restructure commercial agreements is the bottleneck. AI cost reductions of 70–90% in domains like call centers and fleet management make the ROI case straightforward for clients.

  • The "pilot to production" gap is the central execution problem. Across BFSI, manufacturing, consulting, and public sector deployments, speakers consistently identified the same failure mode: technically successful pilots that cannot scale due to data quality, legacy system integration, organizational change management, and the absence of clear business metrics. The 12% enterprise AI success rate cited in consulting contexts is a sector-wide indictment of exploratory pilot culture. The fix is disciplined progression—Proof of Value → Rapid Build → Scale—with compliance and business gates at each stage.

  • Sovereign, on-premise AI is shifting from competitive differentiator to regulatory requirement. The DPDP Act, RBI guidelines, and data localization mandates are moving decisions about AI infrastructure from CTOs to boards. For IT services firms, this creates a market for India-built, on-premise agentic platforms that can serve 15–20 cross-functional use cases simultaneously—amplifying ROI while satisfying compliance. Hyperscaler dependency is increasingly a liability, not a convenience.

  • Multimodal, multilingual, voice-first AI is table stakes for India-scale deployment, not a feature. English-only, text-centric AI architectures exclude 80% of potential Indian users. IT services firms building or deploying enterprise AI that ignores Indian dialects, regional financial context, and voice interfaces are building for a minority of the market. Bhashini's 22-language capability and the demonstrated failure of global voice platforms on Indian dialects underscore that this requires purpose-built infrastructure, not localization patches.

  • The context problem, not the model problem, is what blocks enterprise AI at scale. The bottleneck in enterprise deployments is the organizational capacity to provide unified, conflict-free, contextual information to AI systems—not model capability. Fragmented data, undocumented business logic, and siloed systems mean even state-of-the-art models fail in production. IT services firms with deep integration experience have a genuine advantage here if they build semantic orchestration layers rather than reselling foundation models.

  • Agentic, multi-agent architectures are replacing isolated chatbots as the delivery unit. Specialized agents that preserve context across handoffs, operate across domain boundaries, and coordinate through master-agent orchestration outperform generalist systems on complex, multi-dimensional enterprise tasks. TCS's AI Orchestrator and similar platforms demonstrate that low-code orchestration can democratize deployment—but the "no-code" framing masks significant backend complexity requiring domain expertise to configure and validate.

  • Responsible AI governance is a revenue enabler, not a compliance cost. Organizations that embed governance—immutable audit logging, fairness indices, transparency architecture, data isolation—gain access to regulated healthcare and financial data that ungoverned systems cannot touch. For IT services firms, positioning responsible AI as a trust infrastructure product, not a checkbox, opens high-margin regulated verticals. Clients in BFSI and healthcare are already demanding tripartite validation: technology excellence, legal alignment, and business problem validation as simultaneous prerequisites.

  • India's opportunity is applied AI across economic sectors, not foundational model competition. Building a competitive large language model is capital-intensive and likely unwinnable against well-capitalized US and Chinese incumbents. The defensible path is domain-specific models, AI-augmented services exports, and application-layer differentiation—with success measured by the number of Indian companies generating 50%+ international revenue in advanced tech sectors, not by domestic startup valuations.
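The disciplined progression named in the insights above (Proof of Value → Rapid Build → Scale, with compliance and business gates at each stage) can be sketched as a simple gated pipeline. The three stage names come from the talks; the specific gate labels, metric keys, and thresholds below are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    gates: list  # (label, check) pairs; check(metrics) returns bool

def advance(pipeline, metrics):
    """Walk stages in order; stop at the first failed gate."""
    for stage in pipeline:
        for label, check in stage.gates:
            if not check(metrics):
                return f"blocked at {stage.name}: {label} gate failed"
    return "scaled to production"

# Illustrative gates; the metric names and thresholds are assumptions.
pipeline = [
    Stage("Proof of Value", [
        ("business metric defined", lambda m: m.get("kpi_defined", False)),
        ("measured uplift", lambda m: m.get("uplift_pct", 0) >= 10),
    ]),
    Stage("Rapid Build", [
        ("compliance sign-off", lambda m: m.get("compliance_ok", False)),
        ("legacy integration tested", lambda m: m.get("integration_ok", False)),
    ]),
    Stage("Scale", [
        ("data quality SLA met", lambda m: m.get("data_quality", 0) >= 0.95),
    ]),
]

print(advance(pipeline, {"kpi_defined": True, "uplift_pct": 12,
                         "compliance_ok": False}))
# blocked at Rapid Build: compliance sign-off gate failed
```

The point of the sketch is that a pilot cannot drift into production by default: every promotion requires an explicit, checkable business or compliance gate, which is the discipline the 12% success rate suggests is missing.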

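The master-agent orchestration pattern described in the insights above—specialized agents coordinated by an orchestrator that carries context across handoffs—can be sketched minimally. The agent names, skills, and shared-context structure here are assumptions for illustration; production platforms such as TCS's AI Orchestrator involve far more (routing, validation, safety checks).

```python
class Agent:
    """A specialized agent that reads and writes a shared context."""
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def run(self, context):
        # Each agent appends its contribution instead of replacing the
        # context, so downstream agents see the full handoff history.
        context[self.name] = self.skill(context)
        context["trail"].append(self.name)
        return context

def orchestrate(agents, request):
    """Master orchestrator: routes one request through specialist
    agents, preserving accumulated context across every handoff."""
    context = {"request": request, "trail": []}
    for agent in agents:
        context = agent.run(context)
    return context

# Hypothetical specialists standing in for real model-backed agents.
agents = [
    Agent("triage",   lambda c: f"classified: {c['request']}"),
    Agent("domain",   lambda c: f"policy applied to {c['triage']}"),
    Agent("response", lambda c: f"drafted reply using {c['domain']}"),
]

result = orchestrate(agents, "invoice dispute")
print(result["trail"])  # ['triage', 'domain', 'response']
```

The contrast with isolated chatbots is the shared context: the "response" agent can see both the original request and every intermediate decision, which is what keeps multi-step enterprise tasks coherent.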

Recurring Themes

  • Human-in-the-loop is not a transitional compromise—it is the production architecture for high-stakes domains. Speakers across healthcare, defense, BFSI, recruitment, and industrial settings independently converged on the same design pattern: AI handles scale and speed; humans retain decision authority for irreversible, regulated, or ethically complex actions. This is not technophobia—it is the trust-building mechanism that enables adoption in sectors where failure is catastrophic.

  • Workforce transformation cannot be delegated to institutions. Government skilling programs, university curricula, and corporate training initiatives were consistently assessed as insufficient in isolation to transform 30 million workers at the speed required. The recurring prescription: individuals must take agency over continuous learning using AI tools directly; hackathons, project-based experience, and hands-on deployment matter more than credentials. Simultaneously, organizations must shift from just-in-time hiring to sustained internal learning cultures.

  • Data readiness is a prerequisite that consistently outranks model sophistication. Across manufacturing, MSME markets, public systems, consulting, and BFSI, speakers identified clean, consented, structured, and well-governed data—not better models—as the binding constraint on AI impact. This creates a specific market opportunity: IT services firms that build data infrastructure, governance frameworks, and metadata standards before deploying models are the ones reaching production scale.

  • India's infrastructure gaps—compute, power, cooling, connectivity—require policy action now, before lock-in. Data center PUE standards, grid modernization, renewable energy PPAs, and co-location near renewable parks need to be established while India's capacity is mostly unbuilt. Delayed action means costly retrofitting and grid congestion. Several speakers noted the Japan-India and France-India partnership frameworks as concrete mechanisms for accelerating infrastructure investment, not merely aspirational diplomacy.

  • Trust is the scarce resource, and it must be designed in, not added on. Across financial services, public sector AI, healthcare, and SME markets, speakers independently identified trust—built through transparency, explainability, auditability, and responsible design—as the actual rate-limiting factor for adoption at scale. Opaque systems, regardless of accuracy, fail to achieve institutional endorsement or citizen uptake.
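The human-in-the-loop pattern the themes above converge on can be sketched as a dispatch rule: the AI acts autonomously on reversible, high-confidence actions and escalates irreversible or uncertain ones to a human approver. The action names, the confidence threshold, and the approver interface are illustrative assumptions, not a standard from the talks.

```python
# Actions a human must approve; membership here is an assumption
# standing in for a real, domain-specific risk classification.
IRREVERSIBLE = {"transfer_funds", "delete_record", "administer_treatment"}

def dispatch(action, ai_confidence, human_approve):
    """AI handles scale and speed; a human retains decision authority
    for irreversible or low-confidence actions."""
    if action in IRREVERSIBLE or ai_confidence < 0.9:
        # Escalate: the human approver is the final authority.
        return "executed" if human_approve(action) else "rejected"
    return "executed"  # routine, reversible action: fully automated

# Illustrative approver that declines everything escalated to it.
print(dispatch("send_reminder", 0.97, lambda a: False))   # executed
print(dispatch("transfer_funds", 0.99, lambda a: False))  # rejected
```

Note that confidence alone never bypasses the escalation set: even a 99%-confident model cannot execute an irreversible action without approval, which is the design stance the safety-critical speakers insisted on.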


Open Challenges & Tensions

  • Speed versus safety in agentic deployment remains genuinely unresolved. Multiple speakers endorsed moving to autonomous, agentic AI systems at enterprise scale, while others—especially those working in safety-critical industrial and healthcare contexts—insisted that LLMs cannot handle reflex actions or irreversible decisions and require multi-sensor redundancy and exception handling from day one. The sector lacks agreed criteria for when human oversight can be safely relaxed, and the commercial pressure to reduce human-in-the-loop costs runs directly against the governance imperative.

  • Outcome-based pricing is universally endorsed and structurally resisted. Every speaker who addressed business models called for shifting from time-and-materials to outcome ownership. Yet procurement cycles, client risk aversion, and the difficulty of attributing business outcomes to specific AI interventions mean that firms continue reverting to familiar billing structures under pressure. No speaker offered a credible mechanism for forcing the transition at industry scale rather than in individual contracts.

  • Domestic AI adoption must grow or the talent flywheel breaks. Investors and founders repeatedly noted that without Indian enterprises adopting and scaling AI solutions internally, talented founders will exit to markets where margins justify the effort. Yet the very workforce disruption AI causes—concentrated in the IT services sector itself—creates institutional incentives to delay adoption. This is a structural conflict of interest that no governance framework currently addresses.

  • "Sovereign AI" versus "pragmatic integration" is an active strategic disagreement. Some speakers advocated building entirely indigenous AI stacks—models, inference infrastructure, hardware—over a 5–10 year horizon. Others argued India should simultaneously adopt proven global platforms rather than rejecting them in favor of purely indigenous development, particularly where time-to-market matters. The DPDP Act and RBI guidelines create regulatory pressure toward sovereignty, but the capability and cost gaps between domestic and global infrastructure remain large and largely unacknowledged in policy discussions.

  • Deep tech talent retention is unsolved and underweighted. Quantum computing, robotics, and domain-specific AI were identified as India's defensible frontier opportunities, but they require retaining research talent that is currently migrating to better-capitalized global labs. The India AI Mission's GPU subsidies and the ₹12,500 crore RDIF address compute access, but no speaker outlined a credible mechanism for making frontier research careers in India economically competitive with offers from US or European institutions.


Notable Examples

  • TCS's AI Orchestrator was cited as a production example of low-code multi-agent orchestration across industrial physical AI use cases, enabling deployment of 15–20 cross-functional AI applications simultaneously. Speakers noted that the "no-code" framing is commercially effective but masks substantial backend complexity in workflow mapping, sensor configuration, and safety validation.

  • Bhashini, India's government-backed 22-language AI platform, was referenced as foundational infrastructure for digital inclusion—enabling voice-plus-translation access for populations excluded by English-only or literacy-dependent interfaces. Its collaborative glossary model, with ministries contributing 15–16 lakh domain-specific terms each, was cited as a public-good data architecture others should replicate.

  • Vahan.ai's recruitment AI was offered as a concrete illustration of the augmentation-versus-automation tension: the platform achieved 5x productivity gains with human-in-the-loop design against an ambition of 50x with fuller automation, demonstrating that trust-dependent tasks require human judgment to retain institutional legitimacy.

  • Fujitsu's 400-researcher Bangalore center was cited as evidence that the India-Japan AI partnership has moved beyond diplomatic aspiration into operational deployment, with Japan contributing engineering discipline and domain expertise in manufacturing and healthcare, and India contributing talent scale and market testing capacity.

  • The RBI's Digital Payment Intelligence Platform was highlighted as a national-level coordination framework for AI-driven fraud defense—necessary but assessed as insufficient without real-time cross-border intelligence sharing, since scam operations already operate internationally while Indian defenses remain jurisdictionally bounded.