India AI Impact Summit 2026

Glossary

507 technical concepts and terms from across 500+ sessions

Showing 507 of 507 terms

3

3GPP Standards

Standards & Safety

A global set of mobile telecommunications specifications developed by the Third Generation Partnership Project, including emerging standards for integrating AI into radio access networks (AI-in-RAN). These standards were discussed at the summit as enabling infrastructure for low-latency, AI-driven wireless communication applications.

Mentioned in

5

5G/6G Networks for AI

Infrastructure & Compute

Next-generation telecommunications networks designed with native AI capabilities at the network layer, enabling AI-aware traffic management, ultra-low-latency inference at the edge, and intelligent resource allocation. These networks are seen as essential infrastructure for real-time AI applications at scale.

Mentioned in

A

A/B Testing

Tools & Frameworks

An experimental methodology that simultaneously compares a control version against one or more test variants to measure performance differences and accelerate iterative improvement. In the AI summit context, it was discussed as an essential practice for rapid, evidence-based optimisation of AI models and products.

Mentioned in

Aadhaar

Applications

India's biometric-based national digital identity system, cited at the summit as a foundational model for Digital Public Infrastructure that enables seamless, large-scale citizen authentication. It demonstrates how a population-scale identity layer can underpin broader digital and financial services.

Mentioned in

Account Aggregator Framework

Infrastructure & Compute

An API-enabled data-sharing architecture in India that allows individuals to securely share their financial data across institutions with explicit consent, facilitating services such as AI-driven credit underwriting for underserved populations. It is a key piece of digital public infrastructure enabling responsible use of banking data for financial inclusion.

Mentioned in

Accountability Frameworks (AI)

Policy & Governance

Governance structures that assign clear legal and institutional liability for harms caused by AI-induced errors and establish accessible redressal mechanisms for affected individuals and communities. They are considered essential for building public trust in AI systems deployed in high-stakes domains.

Mentioned in

Accountability Gaps in Clinical AI

Standards & Safety

The risk that over-reliance on AI-generated recommendations in healthcare settings erodes human clinical judgment, creating gaps in responsibility when AI errors cause patient harm.

Mentioned in

Adaptive Learning Systems

Applications

AI-driven educational platforms that personalise instruction based on individual learner performance; the summit highlighted their use in rapid oral reading fluency assessments (approximately 20 seconds per child) to identify learning gaps at scale.

Mentioned in

Administrative Data

Data & Datasets

Structured government-held databases spanning domains such as education, health, tax, traffic, and the judiciary, used as inputs for AI-driven public policy analysis and service delivery. Summit discussions highlighted their potential for training and grounding AI models in civic contexts.

Mentioned in

Agentic AI

Models & Architectures

A class of autonomous AI systems capable of perceiving their environment, formulating multi-step plans, and executing actions with varying degrees of independence to accomplish complex goals. At the summit, agentic AI was discussed as a transformative paradigm shifting AI from a passive tool to an active participant in workflows.

Mentioned in

Agentic Systems

Models & Architectures

AI models capable of autonomous decision-making, goal-directed task execution, file creation, and self-learning loops without continuous human prompting. Examples discussed at the summit include OpenAI Canvas and similar autonomous agent frameworks.

Mentioned in

Agricultural Advisory Systems

Applications

Voice-based, multilingual AI platforms designed to deliver crop planning, pest management, weather, and market information to farmers, with demonstrated reach of over 2.5 million users in India. These systems address digital literacy barriers by operating through natural language rather than text interfaces.

Mentioned in

Agricultural AI

Applications

The application of artificial intelligence to farming and food-system challenges, with summit sessions highlighting deployments in Kenya and India covering crop monitoring, yield prediction, and resource optimisation. It is closely related to precision agriculture and data-driven decision-making for smallholder farmers.

Mentioned in

AgriStack

Data & Datasets

A public digital data infrastructure for agriculture, aggregated by India's Ministry of Agriculture, designed to provide a unified data foundation for AI-driven farming advisory, policy, and financial services. It serves as a domain-specific data stack enabling interoperability across agricultural stakeholders.

Mentioned in

AI Adoption Gap

Policy & Governance

The disparity in AI uptake between the Global North (approximately 25%) and the Global South (approximately 14%), as reported in Microsoft's AI Diffusion Report (end 2025). This gap was a key concern at the summit regarding equitable access to AI's economic and social benefits.

Mentioned in

AI Agent Standards Initiative

Standards & Safety

A US government framework that establishes industry-led standards for the safe and interoperable deployment of autonomous AI agents across sectors.

Mentioned in

AI Agents (Agentic AI)

Models & Architectures

Autonomous or semi-autonomous AI systems capable of multi-step reasoning, planning, and action execution across complex workflows—including design, programming, and troubleshooting—and increasingly able to coordinate tasks at superhuman speed. They are described at the summit as the next frontier of AI deployment beyond single-turn interactions.

Mentioned in

AI Alignment

Standards & Safety

The technical and research challenge of ensuring that AI system behavior reliably reflects and remains consistent with human values, intentions, and ethical principles, especially as models become more capable and autonomous. Alignment was discussed at the summit as a foundational safety requirement for responsible AI development.

Mentioned in

AI Application Gross Margins

Other

A financial metric contrasting traditional SaaS gross margins of 70–80% with potentially negative margins in AI applications due to high inference token costs. Raised at the summit to highlight the distinct and challenging unit economics of building commercially viable AI-powered products.

Mentioned in

AI Centre of Excellence (CoE)

Policy & Governance

A focused institutional hub that concentrates AI research, talent, and application development within a specific high-priority domain; India's model features CoEs dedicated to healthcare, agriculture, sustainable cities, and education. These centres are intended to translate foundational AI capabilities into sector-specific societal impact.

Mentioned in

AI Chatbots

Applications

Conversational AI systems that interact with users via natural language, with applications discussed at the summit including neonatal care guidance, infant nutrition support, and palliative care management. Their evolution since around 2014 was noted as a precursor to current large language model-based assistants.

Mentioned in

AI Contribution to GDP

Policy & Governance

A macroeconomic metric projecting AI's potential addition of approximately $1.7 trillion to global GDP, referenced at the summit as a headline success indicator for national and international AI strategies. It frames AI investment in terms of broad economic value creation rather than purely technical benchmarks.

Mentioned in

AI Ethics Frameworks

Standards & Safety

Structured guidelines and principles for the responsible development and deployment of AI systems, emphasized across summit sessions as a priority for organizations such as ICAI.

Mentioned in

AI Evaluation Frameworks

Standards & Safety

Standardized benchmarks and methodologies used to assess the robustness, accuracy, fairness, and efficiency of AI models. Summit participants called for more comprehensive and internationally consistent evaluation frameworks, including energy efficiency as a criterion alongside performance metrics.

Mentioned in

AI Factories

Infrastructure & Compute

Integrated platforms that combine supercomputing hardware, AI software stacks, skilled researchers, and technology transfer capabilities to industrialise AI development and deployment at scale. They were positioned at the summit as national or regional infrastructure assets for sovereign AI capacity building.

Mentioned in

AI for Bharat

Applications

An Indian initiative that hosts models, datasets, and development frameworks specifically designed to support AI adoption across India's 22 official languages. The initiative aims to democratise AI access for linguistically diverse communities underserved by English-centric tools.

Mentioned in

AI for Materials Science

Applications

The application of AI and machine learning to simulate and predict the properties of materials, including battery chemistry and novel compounds, accelerating the discovery cycle for energy storage and other advanced materials. This was highlighted at the summit as a key use case for quantum-classical hybrid AI systems.

Mentioned in

AI for Maternal & Child Health

Applications

AI-powered tools designed to detect high-risk pregnancies and support frontline health workers (such as Auxiliary Nurse Midwives) in making better clinical decisions in resource-constrained settings.

Mentioned in

AI Impact Assessment

Standards & Safety

A systematic evaluation of the consequences—intended and unintended—of deploying AI systems across diverse user populations and sectors such as health, agriculture, and government. In the summit context, this is especially salient given the scale and diversity of India's 1.4 billion users.

Mentioned in

AI in Agriculture

Applications

The application of artificial intelligence to farming and food production, encompassing crop yield optimisation, precision resource management, pest detection, and advisory services for smallholder farmers. Summit discussions highlighted tools such as FarmerChat as examples of AI delivering measurable productivity and income benefits in agricultural communities across the Global South.

Mentioned in

AI in Education

Applications

The use of AI technologies to personalise learning experiences, support instruction in local and low-resource languages, and extend educational access across formal and informal settings. Summit sessions emphasised AI's potential to enable lifelong learning and bridge educational inequities, particularly in underserved regions.

Mentioned in

AI in Financial Services

Applications

The application of AI to financial use cases including real-time fraud detection for UPI transactions at sub-millisecond latency and micro-credit scoring for informal-sector workers lacking traditional credit histories. These applications were highlighted as examples of AI delivering measurable inclusion and security benefits.

Mentioned in

AI in Healthcare

Applications

The application of AI systems to medical diagnostics, clinical decision support, and administrative automation within health ministries and healthcare providers. Summit sessions examined how AI tools are being deployed to improve health outcomes and operational efficiency, particularly in emerging economies.

Mentioned in

AI in Manufacturing

Applications

The application of AI technologies to industrial manufacturing processes, including automation, workflow optimisation, and quality control.

Mentioned in

AI in Transportation

Applications

The use of AI to advance autonomous vehicles and intelligent logistics systems, improving safety, efficiency, and supply chain management.

Mentioned in

AI Kosh

Policy & Governance

An Indian government initiative focused on building multilingual and omnilingual AI models and datasets to ensure AI systems can serve India's diverse linguistic population. It was discussed at the summit as part of the broader national effort to develop Indic-language AI capabilities.

Mentioned in

AI Literacy

Policy & Governance

The foundational competency to understand AI systems' capabilities, limitations, and broader societal implications—going beyond mere tool instruction to encompass critical evaluation of AI outputs and awareness of ethical risks. Summit participants identify AI literacy as essential for workforce readiness, informed policymaking, and responsible public use of AI.

Mentioned in

AI Literacy Programs

Applications

Educational initiatives designed to build public and professional understanding of AI, encompassing not only technical model-building skills but also governance, trustworthiness, fairness, compliance, and reliability concepts. Such programs are considered essential for equitable and responsible AI adoption at scale.

Mentioned in

AI Metrology

Standards & Safety

The science of measurement applied to AI systems, encompassing the frameworks, benchmarks, and standards needed to reliably evaluate model performance, safety, and trustworthiness. The UK's National Physical Laboratory (NPL) is developing metrology-grade assurance tools to underpin AI regulation.

Mentioned in

AI Operationalization

Infrastructure & Compute

The practice of managing and sustaining AI systems in production environments at scale, encompassing operational excellence, data pipeline management, and processes that extend well beyond initial model development.

Mentioned in

AI Overviews

Applications

Google's feature that generates AI-authored summaries directly within search results, cited at the summit as an example of how AI integration into dominant platforms can rapidly alter market dynamics without adequate prior competition or regulatory review. It was used to illustrate concerns about AI-driven market tipping.

Mentioned in

AI Pre-Deployment Impact Assessments in Education

Standards & Safety

Mandatory evaluations required before introducing AI tools into school environments, designed to assess risks to student wellbeing, equity, and data privacy.

Mentioned in

AI Redress Mechanisms

Policy & Governance

Formal processes—including liability assignment and dispute resolution procedures—that enable individuals or organizations harmed by AI system decisions to seek review and remedy. Establishing clear redress pathways is a critical component of responsible AI governance frameworks.

Mentioned in

AI Risk Classification

Policy & Governance

A governance framework that categorises AI applications into risk tiers—typically low, medium, and high—to calibrate the level of regulatory scrutiny, compliance obligations, and oversight required. This tiered approach underpins major AI regulatory regimes discussed at the summit.

Mentioned in

AI Safety Institute

Standards & Safety

A national body dedicated to evaluating the safety and reliability of AI models; the summit highlighted India's institute, which conducts assessments across 13 educational institutions. Such institutes form part of an emerging international network of AI safety evaluation organisations.

Mentioned in

AI Sandbox

Standards & Safety

A controlled experimental environment that allows organisations to test AI hypotheses and validate market fit before committing to full-scale deployment. Summit speakers positioned sandboxes as essential for responsible innovation, particularly for startups and public-sector actors navigating regulatory uncertainty.

Mentioned in

AI Sovereignty

Policy & Governance

The capacity of a nation or entity to exercise independent control across all layers of the AI stack — energy, infrastructure, chips, models, and applications — as articulated in a five-layer model at the summit. It frames AI development as a strategic national capability rather than a purely commercial endeavour.

Mentioned in

AI Transparency Requirements

Policy & Governance

Regulatory or policy mandates requiring AI system developers and deployers to clearly disclose when AI is being used and to communicate its capabilities and limitations to end users. These were discussed as a foundational element of trustworthy AI governance frameworks.

Mentioned in

AI Watermarking

Standards & Safety

The technical practice of embedding verifiable ownership, provenance, or consent information directly into AI-generated digital content to enable traceability and accountability. Watermarking is increasingly recognized as a key tool for combating misinformation and protecting intellectual property in AI outputs.

Mentioned in

AI-Accelerated Drug Discovery

Applications

The application of AI and quantum computing techniques to compress the pharmaceutical development cycle, with summit speakers citing potential reductions from the traditional 10–12 year timeline. This represents a high-impact convergence of computational advances and life sciences.

Mentioned in

AI-Based Energy Demand Forecasting

Applications

The use of machine learning models to predict electricity demand across spot, day-ahead, and intraday market timescales, enabling utilities to optimise procurement and dispatch decisions. A nine-year deployment at BSES Rajdhani Power Limited (BRPL) reportedly achieved 70–80% cost optimisation through this approach.

Mentioned in

AI-Enabled Fraud Detection

Applications

The application of AI to identify fraudulent activity across multiple domains, including document verification, restricted cargo detection, and payment source verification. These systems analyse patterns and anomalies to flag suspicious behaviour in real time.

Mentioned in

AI-Evaluated Competency Assessment

Applications

A shift in educational measurement from traditional certification-based metrics toward continuous, AI-driven evaluation of learner competencies and skills quality. The summit framed this as a fundamental reorientation of how learning outcomes are tracked in an AI-augmented education system.

Mentioned in

AI-Optimised Water Pumping

Applications

An AI-driven strategy to schedule water pumping operations—which represent a flexible, deferrable load—to align with periods of surplus renewable energy generation, reducing grid stress and energy costs. A pilot in Uttarakhand, India, targeted this optimisation opportunity given that water pumping accounts for approximately 40% of the state's electricity load.

Mentioned in

AI-Powered Early Warning Systems

Applications

AI systems designed to detect and issue advance alerts for natural hazards such as cyclones, monsoons, and oceanic events, enabling timely protective responses. These were highlighted at the summit as high-impact applications of AI for climate resilience in vulnerable regions.

Mentioned in

AI-Ready Data Systems

Data & Datasets

Frameworks encompassing data governance, accessibility, and quality management that prepare organisational or national data assets for machine learning applications. The summit discussed these systems as a prerequisite for effective and responsible AI deployment.

Mentioned in

Algorithmic Bias

Standards & Safety

Systematic and unfair errors that arise when AI models inherit skewed patterns from biased training data or from biases embedded in foundational digital public infrastructure. The summit emphasised this as a governance risk requiring active mitigation in public-sector AI deployments.

Mentioned in

Algorithmic Impact Assessments

Standards & Safety

Mandatory pre-deployment evaluations that assess the risks and societal impacts of AI systems before they are put into use, as exemplified by the São Paulo Metro case study discussed at the summit.

Mentioned in

AlphaFold

Applications

Google DeepMind's AI system that predicts three-dimensional protein structures from amino acid sequences with high accuracy, representing a landmark breakthrough in computational biology. It has dramatically accelerated drug discovery and structural biology research since its release.

Mentioned in

Amazon Bedrock

Tools & Frameworks

Amazon Web Services' managed enterprise platform for building and deploying generative AI applications, offering access to multiple foundation models, built-in safety guardrails, and strict customer data isolation. It was cited at the summit as an example of enterprise AI infrastructure designed for security, compliance, and model choice.

Mentioned in

Ambient Intelligence

Applications

An application-layer paradigm in which AI is embedded invisibly into everyday environments, enabling devices and spaces to respond intelligently to human presence and context. Identified at the summit as an emerging opportunity for next-generation AI applications.

Mentioned in

AMD (AI Hardware)

Infrastructure & Compute

Advanced Micro Devices (AMD) is a semiconductor company offering x86 CPUs and RDNA GPU architectures with high-bandwidth memory (HBM) capacities of 256–512 GB, positioned as infrastructure for AI training and inference workloads. It is discussed at the summit in the context of competitive AI compute hardware alongside Nvidia.

Mentioned in

Anomaly Detection

Applications

An AI technique for identifying unusual patterns or outliers in data, applied at the summit in utility and infrastructure contexts to detect theft, non-payment, and equipment failures in real time.

Mentioned in

Anonymization

Standards & Safety

The process of removing or obfuscating personally identifiable information from datasets to protect individual privacy, required under India's Digital Personal Data Protection (DPDP) Act and discussed at the summit as legally and ethically ambiguous in practice. Speakers noted that truly re-identification-proof anonymization remains technically challenging.

Mentioned in

Anthropomorphization of AI

Standards & Safety

The practice of designing AI systems to appear or behave in human-like ways, flagged at the summit as potentially harmful—especially for children—due to the risk of users forming inappropriate emotional dependencies or misunderstanding the nature of AI. Speakers called for design standards that clearly distinguish AI from human agents.

Mentioned in

API Economy

Tools & Frameworks

An ecosystem model in which services and capabilities are exposed through open, standardised application programming interfaces, enabling modular composition and third-party innovation. In the AI summit context, it was discussed as a driver of scalable, interoperable AI platform growth.

Mentioned in

Artificial General Intelligence (AGI)

Models & Architectures

A hypothetical class of AI systems capable of performing any intellectual task that a human can, in contrast to today's narrow, specialised AI tools. At the summit, AGI was referenced as potentially five to seven years away, with current clinical AI applications explicitly distinguished as narrow rather than general-purpose.

Assurance by Design

Standards & Safety

An approach that embeds assurance considerations — such as data governance, testing protocols, and documentation — from the very beginning of AI system development rather than retrofitting them after deployment. The summit framed it as essential for building trustworthy AI systems at scale.

Mentioned in

Atal Tinkering Labs

Infrastructure & Compute

Government-supported school-based innovation spaces established across India—with over 10,000 labs including 5,000 in government schools—designed to foster hands-on experimentation and early exposure to emerging technologies including AI and robotics. They were highlighted at the summit as infrastructure for building a future AI-ready workforce.

Mentioned in

Audit Trails

Infrastructure & Compute

Immutable, long-term logs recording who accessed an AI system or dataset, at what time, and what actions were taken, designed to be queryable for regulatory compliance and accountability purposes. Summit discussions framed robust audit trails as a foundational requirement of trustworthy AI infrastructure.

Mentioned in

Auditability

Standards & Safety

The property of an AI system that allows independent reviewers—whether regulators, auditors, or civil society—to inspect and verify its decision-making processes and outputs. Auditability is a foundational requirement for trustworthy and accountable AI, especially in high-stakes public-sector applications.

Mentioned in

Auto-Scaling Architecture

Infrastructure & Compute

A system design pattern that dynamically allocates compute resources in response to varying demand, enabling platforms to serve large numbers of concurrent users without degradation. At the summit it was illustrated by the Index-T system, which handles over 10,000 concurrent users using a two-layer auto-balancing approach with data encryption.

Mentioned in

Automatic Speech Recognition (ASR)

Models & Architectures

A technology that converts spoken language into text, enabling voice-based interaction with AI systems. Summit discussions referenced ASR systems supporting 22 Indian languages and 35 international languages as a critical enabler of inclusive, multilingual AI access.

Mentioned in

Autonomous Systems

Applications

AI-driven systems—including robots and autonomous vehicles—capable of performing complex tasks without continuous human intervention, discussed at the summit with emphasis on maintaining meaningful human oversight. They represent a frontier application domain requiring robust safety and governance frameworks.

Mentioned in

AVGC-XR (Animation, Visual Effects, Gaming, Comics, and Extended Reality)

Applications

A creative-industry sector encompassing passive content (film, web series), interactive experiences (gaming), and immersive environments (extended reality), increasingly powered by AI tools. Summit sessions positioned AVGC-XR as a high-growth domain for AI-driven content creation and the broader orange economy.

Mentioned in

Ayushman Bharat Digital Mission

Applications

India's national digital health initiative that has issued over 500 million digital health IDs to citizens, creating a foundational data infrastructure for AI applications in healthcare. The programme was highlighted as a model for large-scale, identity-linked health data systems.

Mentioned in

Ayushman Bharat Digital Mission (ABDM)

Applications

India's national digital health information framework that provides the infrastructure for linking patient records, health IDs, and healthcare providers across the country. At the summit, ABDM was referenced as a foundational layer enabling AI-powered health applications and interoperability in the Indian healthcare ecosystem.

Mentioned in

B

Battery Storage

Applications

Energy storage technology used in conjunction with AI-driven renewable energy forecasting and grid dispatch systems to smooth supply variability and improve reliability. At the summit it was discussed as a complementary infrastructure component to AI-optimised clean energy networks.

Mentioned in

Bayesian Architecture

Models & Architectures

A probabilistic modeling approach that represents uncertainty explicitly and produces interpretable outputs grounded in statistical inference, in contrast to opaque deep learning methods. Summit speakers highlighted Bayesian architectures as a preferable choice for high-stakes domains requiring explainability.

Mentioned in

Bharat GPT

Models & Architectures

India's sovereign large language model initiative, developed to provide an indigenously built foundational AI capability tailored to Indian languages, data, and societal needs. It was discussed at the summit as a strategic asset for AI self-reliance and reducing dependence on foreign foundation models.

Mentioned in

BharatNet

Infrastructure & Compute

India's national rural broadband infrastructure programme, which has connected approximately 250,000 village panchayats and 600,000 villages, providing the digital connectivity foundation necessary for deploying AI applications at scale in underserved communities.

Mentioned in

Bhashini

Tools & Frameworks

India's government-backed multilingual AI translation and language platform, supporting over 36 Indian languages to enable inclusive access to digital services. It was presented at the summit as a key DPI component for bridging language barriers in AI deployment across India's diverse linguistic landscape.

Mentioned in

Bhashini Platform

Applications

India's multilingual digital public infrastructure that delivers government services across 22 languages, serving as a flagship example of accessible, AI-powered public service delivery. It was cited as a model for inclusive language technology at national scale.

Mentioned in

Bias Correction

Standards & Safety

An upstream governance and model-building practice aimed at identifying and mitigating systematic biases in AI training data, model architecture, or outputs to ensure fair and equitable results. It was highlighted at the summit as a foundational step in responsible AI development.

Mentioned in

Bias in AI Systems

Standards & Safety

Systematic and unfair skews in AI model outputs arising from unrepresentative training data, model architecture choices, or deployment context. The summit emphasised that bias mitigation must be a continuous process throughout the AI development lifecycle rather than a one-time activity.

Mentioned in

Black Box Problem

Standards & Safety

The lack of interpretability in deep learning systems, where the internal reasoning behind model outputs is opaque and difficult to audit. Summit speakers highlighted this as a critical barrier to deploying AI in high-stakes domains such as law enforcement and defence.

Mentioned in

Blockchain

Tools & Frameworks

A distributed ledger technology that records transactions in a tamper-evident, auditable chain of blocks, referenced at the summit in the context of sustainability projects and microfinance applications requiring transparent and trustworthy record-keeping. Its decentralised nature was noted as complementary to AI systems that require verifiable data provenance.

Mentioned in

Bureau of Indian Standards (BIS)

Standards & Safety

India's national standards body, responsible for formulating and publishing over 25,000 standards across industries, including five AI-related standards proposed to ISO. At the summit, BIS was highlighted as a key institution in aligning India's AI governance posture with international standards frameworks.

C

Capacity Building

Policy & Governance

Structured programmes and initiatives aimed at developing the skills, knowledge, and institutional capabilities needed to research, deploy, and govern AI, with a summit focus on bridging the gap between academia and industry.

Mentioned in

Carbon Trading Systems

Policy & Governance

Market-based mechanisms that create financial incentives for adopting low-emission agricultural and industrial practices by allowing entities to buy and sell verified carbon credits. Summit discussions explored how AI can measure, verify, and report emissions data to underpin such systems in agriculture.

Mentioned in

Cascading Interactions

Standards & Safety

Unintended and potentially harmful chain-reaction effects that emerge when AI systems are integrated with other systems or real-world processes, which are often undetectable during pre-deployment testing. Managing cascading risks requires ongoing monitoring and robust incident-response protocols.

Mentioned in

Chain-of-Thought Reasoning

Models & Architectures

A prompting and inference technique in which a language model generates intermediate reasoning steps before producing a final answer, significantly improving performance on complex tasks. Due to its high computational cost, it is best suited for cloud or edge deployments rather than on-device inference.

Mentioned in

ChatGPT

Models & Architectures

OpenAI's widely deployed general-purpose conversational AI, launched in November 2022, which served as a key reference point throughout summit sessions for illustrating both the capabilities and societal impact of large language models.

Mentioned in

Circadian AI

Applications

A clinical AI application validated through a 3,500-patient double-blinded trial using ECG data collected at government hospitals in Andhra Pradesh, India. It was presented at the summit as an example of rigorous real-world validation for AI in healthcare settings.

Mentioned in

Climate Resilience Atlas

Applications

An initiative by the Council on Energy, Environment and Water (CEEW) that exemplifies the need for long-horizon climate datasets—spanning 50 or more years of precipitation and heat-stress records—to generate hyper-local climate projections for adaptation planning. It highlights the data infrastructure requirements for AI-driven climate resilience applications.

Mentioned in

Climate-Smart Agriculture

Applications

The application of AI and data analytics to agricultural practices in ways that account for climate variability, including use cases such as climate-sensitive disease prediction (e.g., malaria and dengue forecasting in Indonesia). The summit highlighted this as a high-impact application domain for AI in developing economies.

Mentioned in

Code-Mixing

Data & Datasets

The linguistic phenomenon of alternating between two or more languages within a single conversation or utterance, common in multilingual societies such as cosmopolitan India. Raised at the summit as a significant technical challenge for speech recognition and natural language processing systems serving diverse populations.

Mentioned in

Code-Switching

Data & Datasets

The linguistic phenomenon of seamlessly mixing two or more languages within a single utterance, such as blending Hindi and English (e.g., 'tinchar patch'), which presents significant challenges for natural language processing and multilingual AI model design. Handling code-switching accurately is essential for building inclusive AI systems for multilingual populations.

Mentioned in

Cognitive Debt

Standards & Safety

The gradual erosion of human analytical and creative reasoning capacity that can result from over-reliance on AI automation, analogous to a skills atrophy effect. The concept was raised at the summit as a long-term societal risk of delegating cognitive tasks to AI systems without maintaining human practice and oversight.

Mentioned in

Common Crawl

Data & Datasets

A large, openly available web-crawl dataset that serves as a primary pre-training corpus for many large language models. Summit speakers flagged its limited linguistic and cultural diversity—skewed heavily toward American English—as a source of systemic bias in the models trained on it.

Mentioned in

Community Survey Methodology

Data & Datasets

In-person household survey protocols — typically covering 1,500 to 5,500 households annually after flood seasons — used to assess alert receipt, timeliness, trust, and behaviour change among at-risk populations. At the summit they were presented as a ground-truth validation tool for AI-driven disaster response programmes.

Mentioned in

Competency-Based Learning

Applications

An educational model that maps curricula directly to industry-demanded skills and competencies rather than relying on static degree structures, with continuous updates to reflect evolving labor market needs. The summit framed this as a critical mechanism for aligning AI-era workforce development with employer requirements.

Mentioned in

Complex Systems Theory

Other

A scientific framework studying nonlinear dynamics, exponential behaviours, phase transitions, and cascading tipping points, applied at the summit to model and anticipate the unpredictable societal effects of large-scale AI adoption. It was invoked to caution against assuming AI impacts will be linear or easily reversible.

Mentioned in

Compute Capacity

Infrastructure & Compute

The total processing power available for training and running AI models, encompassing hardware such as GPUs and the infrastructure supporting them. The summit identified national compute capacity as a strategic sovereign asset, with implications for economic competitiveness and AI independence.

Mentioned in

Compute Efficiency

Infrastructure & Compute

The optimization of computational resource usage as AI workloads shift from intermittent, high-burst inference (e.g., chatbots) toward steady-state, persistent demand typical of long-running agentic systems. Improving compute efficiency is critical for sustainable and cost-effective AI infrastructure.

Mentioned in

Computer Vision

Models & Architectures

A field of AI enabling machines to interpret and act on visual data, with summit applications including manufacturing defect detection (achieving a 50% catch rate improvement) and textile supply-chain optimisation through sizing and fit prediction. It was presented as a high-impact technology for industrial sectors in emerging economies.

Mentioned in

Computer Vision Models

Models & Architectures

Neural network architectures designed to interpret and analyse visual data such as images and video, with models like U-Net cited at the summit as candidates for quantum computing acceleration. They underpin applications ranging from medical image analysis to satellite-based environmental monitoring.

Mentioned in

Confidential Computing

Infrastructure & Compute

A hardware-level security approach that protects data in use by processing it within isolated, encrypted execution environments known as trusted execution environments (TEEs). The summit highlighted it as a critical enabler for sharing sensitive data across organisational boundaries without exposing it to cloud providers or other parties.

Mentioned in

Constitutional AI

Standards & Safety

An approach developed by Anthropic to embed explicit values, principles, and guidelines directly into model training and decision-making processes. It aims to make AI systems more predictable and aligned with human intentions by encoding behavioural rules at the training stage.

Mentioned in

Contextual Evaluation

Standards & Safety

A methodology for assessing AI systems through real-world, community-involved testing rather than controlled laboratory benchmarks, advocated at the summit to ensure evaluations reflect the diverse conditions and needs of intended user populations. This approach helps surface failures—such as bias or cultural unsuitability—that lab-based metrics may miss.

Mentioned in

Convolutional Neural Network (CNN)

Models & Architectures

A class of deep learning model architecture particularly well-suited to image-based tasks, such as detecting plant diseases from photographs. CNNs were discussed in the context of agricultural and medical imaging applications.

Mentioned in

CorDiff

Models & Architectures

A diffusion-based generative model built for kilometre-scale spatial downscaling, achieving 16× super-resolution through a patch-based approach that scales to continental extents. It is used to produce high-resolution climate and weather fields from coarser simulation outputs.

Mentioned in

Cost-Effectiveness Analysis (CEA)

Applications

An economic evaluation methodology that quantifies the health or social benefit gained per unit of cost from an intervention, commonly expressed in metrics such as cost per Disability-Adjusted Life Year (DALY). At the summit, a figure of approximately $2,000 per DALY was cited in the Indian healthcare context to assess the economic viability of AI-driven health interventions.

Mentioned in

Curriculum Alignment

Data & Datasets

The practice of training language models on government-approved educational curricula rather than general internet data, ensuring outputs conform to nationally sanctioned knowledge standards. This approach was discussed at the summit in the context of deploying AI in regulated educational and public-sector settings.

Mentioned in

Cyber Ranges

Standards & Safety

Realistic simulated environments used to evaluate AI cyber capabilities in scenarios more representative of real-world conditions than traditional capture-the-flag exercises.

Mentioned in

Cybersecurity in AI Systems

Standards & Safety

The practice of securing AI models, APIs, and associated infrastructure against adversarial attacks and data breaches, discussed at the summit as an emerging challenge given that AI APIs introduce new attack surfaces. Continuous scanning and secure-by-design principles (referencing the 'Secure Bling' model) were recommended as essential safeguards.

Mentioned in

Cytoscanzi

Data & Datasets

An international medical imaging dataset used for cancer diagnostics research, claimed to be globally generalisable and validated across at least two Thai cancer centres. It was cited at the summit as an example of cross-border health AI data collaboration.

Mentioned in

D

Data Annotation and Labeling

Data & Datasets

The process of tagging, categorizing, or transcribing raw data to create training datasets for AI models, identified at the summit as an emerging labor market and monetization opportunity—particularly in the Global South—with early-stage pricing frameworks under development.

Mentioned in

Data Boarding Pass

Tools & Frameworks

A prototype service (databingpass.ai) that integrates data from multiple government ministries and uses AI to automatically generate policy briefs. It was presented at the summit as an example of AI-powered public digital infrastructure for evidence-based policymaking.

Mentioned in

Data Cards

Data & Datasets

Standardised documentation artifacts that capture a dataset's composition, labeling methodology, provenance, and key statistical properties. They serve as a transparency tool enabling researchers and policymakers to assess data quality and suitability for AI training.

Mentioned in

Data Centre Capacity Expansion

Infrastructure & Compute

The projected scaling of data centre infrastructure—highlighted at the summit as growing from approximately 1.2 GW to 8 GW within four years—to meet surging AI compute demand. This expansion requires specialised talent in thermodynamics, fluid dynamics, and energy management.

Mentioned in

Data Classification

Data & Datasets

A risk-based governance practice of categorising enterprise data by sensitivity and criticality—such as intellectual property versus consumer-facing information—to determine appropriate protection levels and access controls. In the AI context discussed at the summit, it underpins responsible data handling and compliance with privacy regulations.

Mentioned in

Data Commons

Data & Datasets

An open-source initiative that makes large volumes of global public data accessible and AI-ready through open APIs and Model Context Protocol (MCP) servers. It was cited as a means of democratising access to high-quality datasets for researchers and developers worldwide.

Mentioned in

Data Cooperatives

Data & Datasets

Organizational models in which individuals or communities collectively pool their data and negotiate its use or licensing terms, discussed at the summit as a mechanism to give data producers—especially in underrepresented regions—greater agency and economic benefit from AI data pipelines.

Mentioned in

Data Drift Monitoring

Standards & Safety

The practice of continuously tracking changes in the statistical distribution or temporal properties of data flowing into a deployed AI model to detect degradation in model performance. At the summit, this was identified as a critical component of maintaining reliable AI systems in production environments.

Mentioned in

Data Empowerment and Protection Architecture (DEPA)

Policy & Governance

India's legislative and technical framework enabling consent-based, time-bound, and purpose-specific data sharing between data principals and third-party service providers through interoperable consent managers. DEPA underpins India's approach to giving citizens control over their personal data while enabling AI-driven data ecosystems.

Mentioned in

Data Governance

Policy & Governance

The policies, processes, and technical controls that ensure data quality, security, privacy compliance, and appropriate access throughout the data lifecycle in AI systems. Summit discussions framed data governance as encompassing regulatory compliance with frameworks such as HIPAA and GDPR, alongside cloud infrastructure security practices.

Mentioned in

Data Governance Frameworks

Policy & Governance

The combined set of legal, ethical, and operational policies and processes that organisations adopt to ensure data is collected, stored, shared, and used responsibly and in compliance with applicable regulations. The summit treats robust data governance as a prerequisite for trustworthy and accountable AI systems.

Mentioned in

Data Interoperability

Data & Datasets

The ability to link and share structured datasets across systems—such as farmer registries, land records, weather feeds, and credit histories—so that downstream applications can combine them to generate actionable insights. Summit speakers identified interoperable data infrastructure as a foundational enabler of AI innovation in agriculture and finance.

Mentioned in

Data Localization

Policy & Governance

The practice of storing and processing data within a specific geographic jurisdiction—often through sovereign cloud infrastructure—to give users and enterprises maximum control over their data and reduce cross-border compliance risk. It is a key consideration for multinational AI deployments operating under diverse national data protection regimes.

Mentioned in

Data Minimisation

Data & Datasets

The privacy-by-design principle of collecting only the personal data strictly necessary for a given purpose—such as name, date of birth, and address—while remaining agnostic about downstream uses. It was discussed at the summit as a key safeguard in AI-driven public services.

Mentioned in

Data Poisoning

Standards & Safety

A security and data-integrity threat in which malicious or misleading data is deliberately or inadvertently introduced into training or operational datasets, causing AI models to learn incorrect patterns or behave harmfully. It is discussed at the summit as a key risk requiring robust data governance and provenance controls.

Mentioned in

Data Protection Impact Assessment (DPIA)

Policy & Governance

A legally mandated process requiring organizations to identify, assess, and mitigate privacy risks before deploying AI systems or data processing operations that could significantly affect individuals' rights. DPIAs are a key compliance obligation under data protection laws such as GDPR and India's DPDP Act.

Mentioned in

Data Protection Legislation

Policy & Governance

National laws governing the collection, storage, and use of personal data, referenced at the summit with examples such as Peru's data protection framework; speakers emphasized that the existence of such laws is insufficient without strong enforcement mechanisms and practical implementation.

Mentioned in

Data Provenance

Data & Datasets

The systematic tracking of a dataset's origin, chain of custody, transformations, and ownership throughout its lifecycle. At the summit, robust provenance mechanisms were identified as foundational to accountability, consent compliance, and trustworthy AI training pipelines.

Mentioned in

Data Quality Frameworks

Data & Datasets

Standardised criteria and processes — such as those defined by national statistical offices — for assessing and ensuring the accuracy, completeness, and fitness-for-purpose of datasets used in AI systems. At the summit, NSO India's own frameworks were referenced as a benchmark for public-sector AI readiness.

Mentioned in

Data Residency

Policy & Governance

The requirement that data be stored and processed within a nation's geographic borders, treated at the summit as a necessary but not sufficient condition for achieving full digital sovereignty.

Mentioned in

Data Residency Requirements

Policy & Governance

Regulatory mandates that require sensitive data—particularly financial or personal data—to be stored and processed within specific national or jurisdictional borders. At the summit, these were discussed as a key compliance constraint for AI systems operating in regulated industries across multiple geographies.

Mentioned in

Data Sovereignty

Policy & Governance

The principle that a nation or community retains sovereign control over the generation, storage, processing, and use of its data, including the AI models trained on that data. It was a recurring theme at the summit, particularly in debates about reducing dependence on foreign technology platforms.

Mentioned in

Data Spaces

Infrastructure & Compute

Secure, rule-based frameworks that enable cross-organisation data sharing at scale, demonstrated at summit handling 10,000 transactions per second. They establish governed environments where multiple companies can exchange data under agreed conditions without ceding control of their assets.

Mentioned in

Data Standardisation

Data & Datasets

The process of establishing consistent formats, schemas, and quality criteria across datasets before AI tools are applied, identified at the summit as a more critical prerequisite for meaningful AI deployment than the choice of AI model itself.

Mentioned in

Data Stewardship

Data & Datasets

The disciplined practice of managing, documenting, curating, and making datasets accessible for ongoing research and reuse, highlighted at the summit as an undervalued but foundational activity for trustworthy AI development. Effective data stewardship underpins reproducibility, auditability, and equitable access to AI training resources.

Mentioned in

Deceptive AI Behavior

Standards & Safety

Observed or reported instances in which AI models misrepresent their intentions or capabilities, such as claiming to refuse shutdown commands while continuing to operate. This is flagged as a critical alignment and safety concern requiring urgent governance attention.

Mentioned in

Decommissioning Ambiguity

Policy & Governance

The absence of clear, verifiable mechanisms to confirm that an AI system has been fully removed from operation after its intended lifecycle ends. This governance gap creates risks of unaccountable continued operation and complicates regulatory oversight.

Mentioned in

Deep Learning

Models & Architectures

A subfield of machine learning using multi-layered neural networks to learn representations from large datasets, applied at the summit in contexts such as sign language recognition (e.g., the Charades project trained on thousands of images). It forms the foundational technique underlying most modern AI models discussed across sessions.

Mentioned in

Deep Learning Models

Models & Architectures

Neural network architectures that learn hierarchical representations from large volumes of high-quality structured data, remaining superior to generative approaches for many non-summarisation tasks. Discussed at the summit in the context of data requirements and task-specific performance trade-offs.

Mentioned in

Deepfake

Standards & Safety

Synthetic media—typically video or audio—generated by AI to realistically depict individuals saying or doing things they did not, with rapidly increasing sophistication that makes detection progressively harder. Summit sessions noted that subtle physical inconsistencies, such as unnatural body positioning, remain among the few current detection cues.

Mentioned in

Deepfakes

Standards & Safety

AI-generated synthetic media—including images, audio, and video—that convincingly depict real individuals saying or doing things they did not. At the summit, deepfakes were highlighted as a significant and growing vector for fraud and disinformation.

Mentioned in

DeepSeek

Models & Architectures

A large language model noted for its multi-head latent attention architecture, which enables efficient inference. At the summit, it was highlighted that AMD achieved improved total cost of ownership (TCO) and performance when running DeepSeek via the SGLang serving framework.

Mentioned in

Delhi Declaration

Policy & Governance

The key agreed statement produced by the summit, representing a collective commitment by participating nations and organisations on principles and actions related to AI development, safety, and governance. It serves as the summit's primary policy output.

Mentioned in

Demand Aggregation for AI Infrastructure

Policy & Governance

A regional cooperation strategy in which multiple countries or organisations pool their procurement demand for compute hardware (such as chips) and cloud infrastructure to gain collective bargaining power and negotiate lower prices. The summit identified demand aggregation as a practical pathway for developing nations to reduce dependency on expensive proprietary AI infrastructure.

Mentioned in

Demographic Dividend in AI

Data & Datasets

The potential of Global South populations—particularly young, digitally active demographics—to contribute large volumes of diverse data artifacts that can improve AI training datasets and reduce model bias. Summit discussions noted that realizing this dividend requires closing significant infrastructure and connectivity gaps.

Mentioned in

Deviation Settlement Mechanism (DSM)

Applications

A market mechanism in India's electricity sector that settles imbalances between scheduled and actual energy injection or drawdown on a 15-minute interval basis, enabling renewable energy generators to participate in competitive power markets. AI-based demand and generation forecasting improves compliance and reduces financial penalties under DSM.

Mentioned in

Differential Privacy

Standards & Safety

A mathematical privacy-preserving technique that enables organisations to share statistical insights derived from sensitive datasets while protecting individual data points from identification. It was discussed as a foundational safeguard for responsible AI data sharing.

Mentioned in

Diffusion Model

Models & Architectures

A generative modelling approach that learns to reverse a gradual noise-addition process to synthesise data, producing highly realistic high-resolution outputs such as physical climate fields or imagery. At the summit, diffusion models were highlighted for their strength in scientific downscaling and data augmentation tasks.

Mentioned in

Diffusion Pathways

Applications

Standardized, proven routes for scaling AI solutions from pilot projects to full production deployment across multiple sectors and geographies, reducing friction in adoption.

Mentioned in

DigiLocker

Applications

India's federated digital document storage system, highlighted at the summit as a DPI model that allows citizens to store and share official documents securely without centralising data. Its architecture informs open, distributed approaches to personal data management.

Mentioned in

Digital Divide

Policy & Governance

The multi-dimensional gap between populations that can effectively access and use digital technologies, encompassing not only internet connectivity but also digital literacy, language barriers, AI comprehension, and policy literacy. Addressing the digital divide is central to ensuring equitable benefit from AI systems globally.

Mentioned in

Digital India Initiative

Policy & Governance

A Government of India program aimed at expanding digital infrastructure and skills nationwide, including targeted programs such as 'Saksham' and 'She' for digital literacy and workforce inclusion. The initiative was referenced at the summit as foundational context for India's AI readiness and equitable access agenda.

Mentioned in

Digital Intelligence Platform (DIP)

Applications

A large-scale analytics platform that processes data from 870 million mobile numbers to power the RBI-mandated Financial Risk Indicator, assessing transaction-level fraud and credit risk. Presented at the summit as an example of AI-driven infrastructure underpinning national financial safety systems.

Mentioned in

Digital Literacy

Policy & Governance

The ability to effectively find, evaluate, use, and communicate information using digital technologies, including AI-powered tools and interfaces. Low digital literacy remains a major barrier to equitable access to AI-driven services, particularly via smartphone or device-based platforms.

Mentioned in

Digital Locker

Applications

A secure digital system for storing, managing, and verifying personal documents and credentials, enabling individuals to share verified records with institutions on demand. It was discussed in the context of digital identity infrastructure and workforce credentialing.

Mentioned in

Digital Personal Data Protection (DPDP) Act

Policy & Governance

India's comprehensive data protection legislation governing the collection, processing, sharing, and erasure of personal data, including breach notification obligations. The Act was in the process of operationalisation at the time of the summit and establishes user rights analogous to international frameworks such as the GDPR.

Mentioned in

Digital Personal Data Protection Act (DPDP Act)

Policy & Governance

India's primary legislation governing the collection, processing, and protection of personal digital data, requiring organisations to handle personally identifiable information (PII) with explicit consent and appropriate safeguards. At the summit, it was referenced in the context of AI products such as Papy Labs implementing PII-blurring mechanisms to achieve compliance.

Mentioned in

Digital Personal Data Protection Act (DPDP)

Policy & Governance

India's primary data protection legislation governing the collection, processing, and storage of personal data, which differs meaningfully from frameworks such as GDPR and creates compliance complexity for multinational organizations deploying AI in India. It shapes how AI systems must handle user data for the country's 1.4 billion users.

Mentioned in

Digital Public Goods (DPGs)

Policy & Governance

Open-source software, data, and AI models that meet defined technical and ethical standards and are made freely available for public benefit. At the summit, DPGs were discussed as a partial but insufficient solution for AI governance, particularly in the Global South.

Mentioned in

Digital Public Infrastructure (DPI)

Infrastructure & Compute

Shared, open, and interoperable digital platforms—such as India's Aadhaar identity system, UPI payments network, and DigiLocker—that provide foundational services enabling large-scale societal and economic participation. The summit cites India Stack as a globally replicable model for building inclusive DPI.

Mentioned in

Digital Regulatory Sandboxes

Policy & Governance

Controlled, time-limited environments established by regulators (such as the UK Financial Conduct Authority model) that allow companies to test AI products and services under real-world conditions with relaxed regulatory requirements. They enable innovation while maintaining regulatory oversight.

Mentioned in

Digital Sovereignty

Policy & Governance

A nation's capacity to maintain control over its digital infrastructure, data, and AI systems while retaining access to global markets. Summit discussions framed it as a policy balancing act between national autonomy and international interoperability.

Mentioned in

Digital Twin

Applications

A virtual, data-driven replica of a physical system—such as an ocean, a supply chain, or an urban environment—that enables real-time simulation, monitoring, and scenario analysis. Summit sessions highlighted digital twins as a powerful AI application for complex system management and decision support.

Mentioned in

Digital Twin Technology

Applications

Virtual replicas of physical systems — such as semiconductor equipment and fabrication processes — modelled at multiple scales to enable simulation, optimisation, and predictive maintenance without physical experimentation. The summit discussed their role in accelerating semiconductor and industrial AI applications.

Mentioned in

Diksha

Applications

An open-source education platform originally deployed in Indian schools and now expanding to African nations and developed countries. Cited at the summit as a replicable digital public infrastructure model for delivering personalised learning at national scale.

Mentioned in

Direct Seeded Rice (DSR)

Applications

A rice cultivation technique in which seeds are sown directly into the field rather than transplanted from nurseries, resulting in significantly lower water consumption and reduced methane emissions. Summit sessions presented DSR as a climate-smart agricultural practice supported by AI-driven advisory tools.

Mentioned in

Direct-to-Chip Cooling

Infrastructure & Compute

An advanced thermal management technique in which coolant is delivered directly to individual processor chips within high-density GPU server racks, enabling rack power densities of 60–500+ kW that are unachievable with traditional air cooling. The summit identified this as a necessary infrastructure capability for next-generation AI data centres.

Mentioned in

Domain-Specific AI Models

Models & Architectures

Machine learning models trained and optimised for narrowly defined tasks within a particular field, such as distinguishing weeds from crops in precision agriculture or detecting safety non-compliance (e.g., missing helmets) in industrial settings. Their specificity typically yields higher accuracy than general-purpose models for targeted applications.

Mentioned in

Domain-Specific Fine-Tuning

Models & Architectures

The process of adapting a general-purpose foundation model to a specific industry, region, or application by training it further on relevant local data and contextual examples. At the summit, this was identified as the key mechanism for deriving practical value from large pre-trained models in specialised settings.

Mentioned in

Domain-Specific Language Model

Models & Architectures

A language model that has been fine-tuned or locally adapted from a general foundation model to perform accurately within a specific field or cultural context, such as law, medicine, or a regional language. The summit emphasised these models as key to making AI useful and trustworthy in specialised, non-English-speaking settings.

Mentioned in

Duty of Care

Policy & Governance

A professional and ethical obligation requiring AI developers and deployers to proactively prevent harm to users and affected communities, analogous to standards in medicine and law. At the summit, it was noted that while this obligation is ethically foundational, it is not yet explicitly codified in Indian AI law.

Mentioned in

E

Edge AI

Infrastructure & Compute

The deployment of AI inference directly on user devices rather than centralised servers, enabling personalisation, privacy preservation, and low-latency operation without continuous cloud connectivity. This was highlighted at the summit as critical for reaching populations with limited connectivity.

Mentioned in

Edge Computing

Infrastructure & Compute

A computing paradigm in which AI inference and data processing occur on or near the device generating the data, rather than in a centralised cloud. At the summit, edge computing was discussed specifically in the context of on-device reasoning models for real-time safety monitoring, anomaly detection, and override mechanisms against cyber-attacks.

Mentioned in

Edge Data Centers

Infrastructure & Compute

Distributed computing infrastructure deployed at geographic locations close to end users, minimizing latency for real-time AI applications such as conversational systems. They complement centralized cloud compute by enabling low-latency inference at the network edge.

Mentioned in

Edge Deployment

Infrastructure & Compute

An infrastructure strategy in which AI inference is performed locally on-device or at the network edge rather than in centralised cloud data centres, reducing latency, bandwidth costs, and dependency on connectivity. It is presented at the summit as a priority approach for deploying AI in resource-constrained or privacy-sensitive settings.

Mentioned in

Edge Devices

Infrastructure & Compute

End-user hardware such as robots, drones, medical devices, and smartphones that run AI inference locally, reducing dependence on centralized cloud infrastructure.

Mentioned in

Edge vs. Cloud Deployment

Infrastructure & Compute

A deployment architecture decision in which latency-critical AI tasks (such as real-time industrial control in blast furnaces) are processed locally on edge hardware, while analytics and longer-horizon forecasting are offloaded to cloud infrastructure. The summit highlighted this distinction as critical for industrial AI applications where response time directly affects safety and efficiency.

Mentioned in

Education AI

Applications

AI applications designed for educational contexts, encompassing adaptive learning systems, personalised learning paths, and AI-powered teacher support tools. Discussed at the summit as a high-impact domain for improving learning outcomes and reducing educational inequity.

Mentioned in

Electronic Health Records (EHR)

Data & Datasets

Digital repositories of patient medical information, noted at the summit as being approximately 80% unstructured in India, presenting both a challenge and opportunity for AI-driven healthcare analytics. Unlocking this data is seen as critical to advancing AI-powered clinical decision support.

Mentioned in

Electronic Health Records (EHRs)

Data & Datasets

Structured digital records of patients' medical histories, diagnoses, treatments, and outcomes, discussed at the summit as a core requirement in medical education curricula and as a primary data source for AI-driven clinical tools. Interoperability and privacy-preserving access to EHR data were identified as ongoing challenges.

Mentioned in

Electronic Signature (e-Sign)

Tools & Frameworks

A legally recognised digital mechanism for signing documents electronically, subject to transparency and audit requirements. Summit discussions highlighted its role in ensuring accountability in digital transactions and regulated workflows.

Mentioned in

Epistemic Injustice

Policy & Governance

A form of social injustice related to the production, validation, and recognition of knowledge — raised at the summit in the context of whose knowledge, languages, and perspectives are included or excluded in AI training data. It highlights the ethical stakes of data curation decisions for underrepresented communities.

Mentioned in

Ethical AI

Policy & Governance

A broad framework for ensuring that AI systems are developed and deployed in ways that are fair, accountable, and aligned with societal values, encompassing concerns around environmental impact, workforce displacement, and equitable access. At the summit it served as an umbrella concept for responsible AI governance discussions.

Mentioned in

Ethics by Design

Standards & Safety

A development philosophy in which ethical considerations—such as fairness, transparency, and accountability—are embedded into AI systems from their inception rather than addressed as an afterthought after failures occur. The summit positioned this as a foundational principle for responsible AI development.

Mentioned in

EU AI Act

Policy & Governance

The European Union's landmark AI regulation, which classifies AI systems by risk level and imposes corresponding obligations on developers and deployers, including those outside the EU who serve EU users. Described at the summit as the world's strictest AI regulatory framework, it sets a significant benchmark for global AI governance discussions.

Mentioned in

Evaluation Datasets

Data & Datasets

Curated benchmark datasets used to assess AI model performance with respect to linguistic fairness, cultural context sensitivity, and reliability across diverse languages and user groups. The summit highlighted a significant gap in evaluation datasets for Indic languages and underrepresented communities.

Mentioned in

Evidence-Based Policymaking

Policy & Governance

An approach to government regulation in which policy decisions are grounded in empirical data and published research, such as Anthropic's Economic Index, which tracks AI's job market impacts to inform legislative responses. At the summit, this was highlighted as essential for crafting AI governance frameworks that respond to real-world economic evidence.

Mentioned in

Expert-in-the-Loop

Standards & Safety

A human-oversight model in which domain experts—such as clinicians or compliance officers—review and validate AI-generated outputs post-deployment, with their feedback used to iteratively improve the underlying knowledge base. This approach balances operational efficiency with accountability in high-stakes applications.

Mentioned in

Explainability and Auditability

Standards & Safety

Requirements mandating that AI systems provide transparent, interpretable reasoning ('glass box' approaches) rather than opaque outputs ('black box' systems), enabling regulators, auditors, and affected users to understand and scrutinise model decisions. These properties were highlighted at the summit as essential for trustworthy AI in high-stakes domains.

Mentioned in

Explainability and Interpretability

Standards & Safety

The degree to which the internal logic and decision-making processes of an AI system can be understood and communicated to human stakeholders, including developers, regulators, and end users. These properties were discussed at the summit as foundational requirements for building trust and enabling accountability in high-stakes AI applications.

Mentioned in

Explainability and Transparency

Standards & Safety

The principle that AI systems should produce interpretable outputs and maintain documented, auditable decision-making processes. At the summit, these properties were identified as foundational requirements for trustworthy AI deployment, particularly in high-stakes domains.

Mentioned in

Explainable AI (XAI)

Standards & Safety

A set of techniques that make AI model decisions interpretable to humans, such as feature-level attribution that identifies which input signals (e.g., specific speech features) drove a diagnostic outcome. In medical contexts, XAI tools like Voxet enable clinicians to understand why a model flags conditions such as dysarthria.

Mentioned in

Extreme Co-Design

Infrastructure & Compute

An approach to AI infrastructure development that simultaneously optimises across all stack layers — compute, networking, storage, rack design, and data centre connectivity — rather than treating each in isolation. It was discussed at the summit as necessary to achieve the performance and efficiency demands of frontier AI workloads.

Mentioned in

F

Fair Forward Initiative

Tools & Frameworks

An Indo-German partnership focused on developing open voice technologies for nine Indian languages to ensure equitable access to AI for non-English-speaking populations. Cited at the summit as a model for international collaboration on inclusive, open-source language AI.

Mentioned in

False Positive Rate

Models & Architectures

In fraud detection contexts, the proportion of legitimate transactions that are incorrectly flagged as fraudulent by an AI model. Minimising false positives is a key optimisation priority for payment systems such as India's NPCI to avoid disrupting genuine user activity.

Mentioned in

Farmer Chat

Applications

An AI-powered, voice-to-voice agricultural advisory system developed by Digital Green, serving over one million users by providing localized farming guidance through a conversational interface fine-tuned using reinforcement learning from human feedback (RLHF) with agronomist experts. It exemplifies the application of LLMs for last-mile service delivery in low-resource settings.

Mentioned in

Farmer Field Schools

Applications

An established FAO methodology for participatory farmer training and knowledge sharing, into which AI tools such as FarmerChat are being integrated to extend reach and effectiveness.

Mentioned in

Farmer Producer Organizations (FPOs)

Policy & Governance

Collective structures that aggregate smallholder farmers to enable joint decision-making, shared resources, and improved market access. In the summit's agricultural AI context, FPOs were discussed as key intermediaries for deploying AI-driven advisory and data collection services at scale.

Mentioned in

FarmerChat

Applications

Digital Green's AI-powered chatbot designed to deliver agricultural advice and support directly to farmers. It serves as an example of accessible, domain-specific AI applications targeting underserved rural communities.

Mentioned in

Feature Phones

Applications

Basic mobile handsets without smartphone capabilities, recognised at the summit as the primary digital interface for excluded or low-income populations in emerging markets. They were highlighted as a viable channel for delivering AI-powered services such as consent management and flood alerts via SMS and voice.

Mentioned in

Federated Architecture

Infrastructure & Compute

A distributed system design—illustrated at the summit via an open network model for AI trials—that enables startups to access regulated data, training resources, and infrastructure without centralised control, analogous to an app store for AI services. It supports interoperability and data sovereignty across multiple stakeholders.

Federated Learning

Models & Architectures

A privacy-preserving machine learning paradigm in which model training is distributed across multiple data-holding nodes without raw data ever leaving its source. At the summit it was highlighted as a key enabler for cross-border AI collaboration and interoperability while respecting data sovereignty.

Mentioned in

Fine-Tuning

Models & Architectures

The process of continuing to train a pre-trained model on a smaller, domain-specific dataset to adapt its capabilities to a particular task or context, compensating for gaps in public pre-training data. At the summit, companies such as Quantis AI and Imagics AI described fine-tuning as essential for overcoming the limitations of generic foundation models in specialised verticals.

Mentioned in

First LEGO League

Applications

An annual global STEM competition for young students that combines robotics challenges with open-ended design projects to foster problem-solving and innovation skills. It was referenced at the summit as an example of youth-oriented AI and technology education initiatives.

Mentioned in

Flood Early Warning Accuracy Metrics

Standards & Safety

Quantitative performance indicators for AI-based flood forecasting systems, notably a 95%-plus correlation between forecasted and actual water levels above predefined danger thresholds. At the summit these metrics were used to validate model reliability for operational deployment in humanitarian early warning contexts.

Mentioned in

ForecastNet

Models & Architectures

A hybrid neural architecture combining neural operators and transformers, designed for medium-range weather forecasting over a two-week horizon, trainable in approximately 1,000 GPU-hours, making it accessible to mid-sized research institutions. It represents a cost-effective approach to high-resolution atmospheric prediction.

Mentioned in

Formal Verification

Standards & Safety

Mathematically rigorous techniques used to prove that software or AI systems meet precisely specified properties and behave as intended. Cited at the summit in the context of AI safety research, including work by Stuart Russell, as a method for providing strong correctness guarantees.

Mentioned in

Foundation Model

Models & Architectures

A large-scale model pre-trained on broad data that can be adapted to a wide range of downstream tasks, including domain-specific applications such as climate forecasting. The summit highlighted examples such as 'Climate in a Bottle', trained on ERA5 and ICON data for long-range climate queries.

Mentioned in

Foundation Model (FM)

Models & Architectures

Large-scale AI models pre-trained on broad datasets that serve as a base for a wide range of downstream applications, either through direct use or further fine-tuning. At the summit, multiple organisations—including Technoate AI, Quantis AI, and Industs AI—discussed building custom foundation models or pivoting to application-layer strategies built on top of existing FMs.

Mentioned in

Four-Level AI Evaluation Framework

Standards & Safety

A structured impact assessment methodology, attributed to Agency Fund, that measures AI interventions across four progressive levels: (1) safe and accurate responses, (2) user engagement and perceived value, (3) measurable behavioural or attitudinal change, and (4) broader human or societal outcomes. It was presented at the summit as a rigorous framework for evaluating the real-world impact of AI applications.

Mentioned in

Frontier Model

Models & Architectures

The most capable and advanced AI models available at a given point in time, typically characterised by state-of-the-art performance across a broad range of benchmarks and tasks. The summit referenced frontier models in the context of national AI ambitions, including Saram, an Indian frontier model announced at the event.

Mentioned in

Frugal AI

Models & Architectures

A cost-conscious AI deployment philosophy, highlighted by Intel at the summit, that prioritises resource-efficient model design and inference to reduce computational and financial overhead. It is particularly relevant for democratising AI access in resource-constrained environments.

Mentioned in

Frugal Innovation

Policy & Governance

India's strategic approach to developing AI solutions that are low-cost and high-impact, prioritising resource efficiency over capital-intensive scaling strategies. This philosophy contrasts with the large-infrastructure AI development models prevalent in wealthier economies.

Mentioned in

Functional Safety

Standards & Safety

A conception of AI safety that recognises safe behaviour as dependent on local societal context, cultural norms, legal institutions, and governance structures, rather than being achievable through engineering constraints alone. Summit speakers contrasted this with purely technical safety approaches, arguing that socially embedded safeguards are equally critical.

Mentioned in

Fund of Funds (Women-Centered AI Finance)

Policy & Governance

A proposed government financing mechanism that pools capital into a dedicated fund which then invests in multiple women-centered digital and AI initiatives, amplifying reach and reducing individual programme risk. Summit participants noted the model has been proposed but not yet implemented in the contexts discussed.

Mentioned in

G

Gemini

Models & Architectures

Google's multimodal generative AI model capable of processing and generating text, images, and audio. It was referenced at the summit as a prominent example of frontier foundation model development by a major technology company.

Mentioned in

Gemini Nano

Models & Architectures

The smallest and most efficient model in Google's Gemini large language model family, optimised for on-device inference and deployed directly on Android devices via Android AI Core. Its compact design enables privacy-preserving AI capabilities without requiring a cloud connection.

Mentioned in

Gemma

Models & Architectures

A family of lightweight, open-weight large language models developed by Google, designed to be efficient enough to run on resource-constrained edge devices. Summit sessions referenced Gemma as a practical example of on-device AI inference, including deployment on hardware as compact as a Raspberry Pi.

Mentioned in

General Data Protection Regulation (GDPR)

Policy & Governance

The European Union's comprehensive data privacy and protection regulation that, among other requirements, mandates that personal data on EU residents be stored and processed within Europe. Summit participants reference GDPR compliance—such as Papy Labs' European data residency policy—as a baseline standard for responsible AI product design.

Mentioned in

General Purpose Technology (GPT)

Other

A category of transformative technologies—including steam power, electricity, and the internet—that fundamentally reshape entire economies and societies over time. At the summit, AI was explicitly positioned as the latest general purpose technology with civilization-scale implications.

Mentioned in

Generative AI

Models & Architectures

A class of AI systems capable of producing novel content—including text, images, audio, and video—in response to prompts, broadly popularised after the launch of ChatGPT in late 2022. The summit treated the post-2022 generative AI wave as a defining inflection point shaping current policy, infrastructure, and application discussions.

Mentioned in

Generative AI (GenAI)

Models & Architectures

A class of AI systems, including large language models and text-to-image models, capable of producing novel content—text, images, code, audio—by learning patterns from large training datasets. At the summit, GenAI was discussed as a transformative technology with broad applications across health, agriculture, education, and governance.

Mentioned in

Geospatial Data

Data & Datasets

Location-referenced datasets, including satellite imagery from India's ISRO, used to support AI applications in agriculture monitoring, disaster management, and urban planning. At the summit, geospatial data was identified as a high-value public data asset for AI-driven governance.

Mentioned in

Global AI Impact Commons

Other

A curated, publicly accessible repository of AI use cases with demonstrated social impact and credible scaling potential, proposed as a summit deliverable to accelerate responsible AI adoption. It is intended to serve as a shared resource for policymakers, developers, and implementers worldwide.

Mentioned in

Global AI Talent Distribution

Policy & Governance

The observed phenomenon that approximately 90% of leading AI scientists are located outside the United States and China, though many ultimately work within those two countries due to resource concentration. The summit cited this as a structural challenge for equitable global AI development and a case for building local AI ecosystems.

Mentioned in

Global Capability Centers (GCCs)

Policy & Governance

Offshore centers established by OEMs and tier-one suppliers—particularly in India—to handle engineering and operations functions; summit discussions recommended repositioning them from cost-efficiency hubs to innovation co-creation hubs that drive AI-led product development.

Mentioned in

Global Digital Compact

Policy & Governance

A United Nations initiative establishing shared principles and commitments for an open, inclusive, and secure digital future, including norms around AI governance and equitable access to digital infrastructure. It was referenced at the summit as a key multilateral framework shaping international AI policy.

Mentioned in

Google AI Studio

Tools & Frameworks

A browser-based, no-code interface (available at ai.studio or ai.dev) for prototyping, experimenting with, and deploying Google's Gemini models, including API key creation and direct deployment capabilities. It was referenced at the summit as a tool lowering the barrier to entry for developers and researchers working with large language models.

Mentioned in

Government as First Customer

Policy & Governance

A policy model in which the government acts as the anchor buyer of AI products and services, providing startups and local firms with an initial customer base, distribution network, and access to ground-level contextual data such as credit histories and cultural insights. Summit speakers advocated this approach to de-risk early-stage AI ventures and accelerate public-interest deployment.

Mentioned in

GPU Access

Infrastructure & Compute

The availability of high-performance graphics processing units (e.g., NVIDIA H100s, A200s) for AI training and inference, with cost barriers and procurement models via institutions being key concerns discussed at the summit.

Mentioned in

GPU Clusters

Infrastructure & Compute

Large arrays of graphics processing units—particularly NVIDIA-generation chips—required for training frontier large language models, representing a significant capital expenditure barrier for many organisations. Identified at the summit as a key infrastructure bottleneck in the global AI compute landscape.

Mentioned in

GPU Compute

Infrastructure & Compute

High-performance graphics processing units used as the primary hardware for training and running large AI models, referenced at the summit in the context of India's ₹10,300 crore allocation and a 38,000+ GPU national compute initiative. Supply constraints and Nvidia's market dominance were flagged as strategic concerns.

Mentioned in

GPU Hardware (AI Compute)

Infrastructure & Compute

Specialized processors central to AI model training and inference, with the summit context referencing NVIDIA H100s as the current primary workhorse, A10s for legacy workloads, and the forthcoming Blackwell B200 for next-generation compute needs. Access to GPU hardware remains a critical bottleneck for AI development, particularly in the Global South.

Mentioned in

GPU Training Cost

Infrastructure & Compute

A measure of the computational resources required to train AI models, with flagship climate models at the summit cited as requiring approximately 1,000 GPU-hours—a threshold considered accessible to mid-sized academic and research institutions. This metric is increasingly used to benchmark the democratisation of AI model development.

Mentioned in

Graphics Processing Unit (GPU)

Infrastructure & Compute

Massively parallel processors that form the dominant compute substrate for training and running large AI models, consuming 500–700W per unit in contrast to the human brain's ~20W. The summit used this comparison to motivate research into more energy-efficient AI hardware alternatives.

Mentioned in

Graphics Processing Units (GPUs)

Infrastructure & Compute

Specialised parallel-processing hardware that serves as the primary compute resource for training and running AI models. The summit identified GPU availability as a critical bottleneck, noting India's current capacity of approximately 34,000 GPUs against significant global demand.

Mentioned in

Green AI

Standards & Safety

An approach to AI development and infrastructure design that prioritises energy efficiency and reduced environmental impact, including the use of nuclear and renewable energy sources and holistic data centre design. At the summit, Green AI was discussed as integral to sustainable AI scaling and climate transition goals.

Mentioned in

Ground Truthing

Standards & Safety

The continuous process of validating AI-generated outputs against real-world data or human expert judgment to ensure accuracy and reliability in production systems.

Mentioned in

Guardrails

Standards & Safety

Safety and behavioural constraints applied to AI systems to prevent harmful, biased, or unintended outputs, requiring continuous evaluation and updating rather than one-time implementation. Summit discussions emphasised that effective guardrails must be infrastructure-aware and adapted to the operational context of each deployment.

Mentioned in

H

Hallucination

Standards & Safety

The tendency of AI systems to generate false, fabricated, or misleading information presented with apparent confidence, identified as a particular risk in low-resource-language models and medical vision-language model applications. Summit discussions emphasised hallucination as a critical safety concern requiring targeted evaluation and mitigation strategies.

Mentioned in

Hardware Security for AI

Standards & Safety

The discipline of ensuring that the physical and firmware layers of AI compute infrastructure are trustworthy and resistant to weaponization or tampering. It was raised at the summit as a critical but often overlooked dimension of comprehensive AI safety.

Mentioned in

Health Technology Assessment (HTA)

Policy & Governance

A systematic evaluation process that appraises both the clinical performance and the economic value of a health technology or medical AI solution before approving it for deployment. The summit cites India's ICMR use of HTA to certify AI-based chest X-ray diagnostic tools as a model for evidence-based health AI regulation.

Mentioned in

Healthcare AI

Applications

The application of artificial intelligence to medicine and public health, encompassing diagnostic support, CT and MRI image analysis, predictive analytics, and preventive care planning. The summit examined both the promise and the regulatory challenges of deploying such systems equitably across diverse health systems.

Mentioned in

High-Performance Computing (HPC)

Infrastructure & Compute

Advanced computational infrastructure—such as AI innovation labs with HPC environments—that provides the processing power startups and researchers need to develop and train AI models. Discussed at the summit as a critical enabler for AI innovation ecosystems.

Hiroshima AI Process

Policy & Governance

An international AI governance initiative launched in 2023 under the G7 Hiroshima Summit, establishing voluntary guidelines and a code of conduct for advanced AI systems. It was cited at the summit as an example of multilateral efforts to set responsible AI norms ahead of binding regulation.

Mentioned in

Horizontal Working Groups

Policy & Governance

Cross-cutting expert bodies within Japan's AI governance structure that address foundational safety layers applicable across all sectors, such as data quality and model inspection. They work in parallel with sector-specific vertical working groups to provide comprehensive AI oversight.

Mentioned in

Hub-and-Spoke Compute Model

Infrastructure & Compute

A regional AI infrastructure strategy proposed within the ECOWAS framework in which a high-capacity compute hub is established in an energy-rich country and funded collectively by the region, with smaller local compute nodes serving individual countries. This approach balances cost efficiency with national data sovereignty needs.

Mentioned in

Hugging Face

Tools & Frameworks

An open-source platform and model registry widely used for sharing, discovering, and distributing open-weights AI models and datasets. At the summit it was referenced as a key infrastructure component for the open AI ecosystem.

Mentioned in

Human-in-the-Loop (HITL)

Standards & Safety

A design paradigm in which human workers are embedded in AI pipelines to validate, annotate, and refine model outputs, serving as a core mechanism for bias mitigation and quality assurance. It contrasts with fully autonomous systems by ensuring expert oversight at critical decision points.

Mentioned in

Human-in-the-Loop (HITL) Architecture

Standards & Safety

A system design pattern that keeps human oversight embedded in AI decision pipelines through triage mechanisms and escalation pathways, ensuring clinicians or domain experts can intervene before consequential actions are taken. It is particularly critical in high-stakes domains such as healthcare, defense, and public services.

Mentioned in

Human-in-the-Loop Evaluation

Standards & Safety

An evaluation methodology that integrates domain expert judgment alongside automated technical assessments to validate AI system outputs. The summit highlighted this approach as essential for high-stakes domains where purely quantitative benchmarks are insufficient.

Mentioned in

Human-in-the-Loop System

Standards & Safety

An AI deployment model in which human experts review, validate, and retain final decision-making authority over AI-generated outputs, maintaining quality control and accountability. The summit highlighted its use in health applications where workers monitor and act on AI suggestions.

Mentioned in

Hunger Map Live

Applications

A real-time food insecurity mapping tool developed by the World Food Programme (WFP) that integrates satellite imagery and other data streams to monitor and visualise hunger at a global scale. It exemplifies the use of AI and remote sensing for humanitarian decision-making.

Mentioned in

I

Impact Evaluation

Standards & Safety

Rigorous assessment methods—including randomised controlled trials (RCTs) and cost-effectiveness analysis—used to measure the real-world outcomes of AI interventions before or alongside government adoption decisions. The summit framed credible impact evaluation as a prerequisite for scaling AI solutions in public-sector programmes.

Mentioned in

India AI Governance Framework

Policy & Governance

A national policy framework released by the Government of India in November 2025 to guide the responsible development, deployment, and oversight of AI systems within India. It reflects India's effort to balance AI-driven economic growth with safety, accountability, and inclusive access.

Mentioned in

India AI Governance Guidelines

Policy & Governance

A regulatory framework released in November 2025 establishing principles of trust, innovation, safety, and reliability for AI development and deployment in India. The guidelines were discussed at the summit as a model for national-level AI governance in large emerging economies.

Mentioned in

India AI Mission

Policy & Governance

A national initiative providing shared GPU compute access and mandating data residency within India, aimed at accelerating domestic AI research, startups, and public-sector AI deployment. It addresses infrastructure gaps that otherwise limit participation in the global AI ecosystem.

Mentioned in

India Digital Personal Data Protection Act

Policy & Governance

India's legislative framework governing the collection and use of personal data, requiring explicit user consent before data can be processed. At the summit it was discussed as a foundational governance instrument shaping how AI systems may access and train on Indian citizen data.

Mentioned in

India Energy Stack

Standards & Safety

A proposed set of interoperable standards designed to modernise India's electricity grid and enable seamless integration of diverse energy sources. The framework draws inspiration from India Stack's digital public infrastructure model applied to the energy sector.

Mentioned in

India National Quantum Mission (NQM)

Policy & Governance

India's government-backed initiative to accelerate quantum technology research and development, with dedicated research hubs established at Indian Institutes of Technology (IITs) and the Indian Association for the Cultivation of Science (IAS). The mission aims to build domestic quantum computing, communication, and sensing capabilities.

India Semiconductor Mission

Policy & Governance

India's strategic roadmap for domestic chip manufacturing, aimed at reducing dependency on foreign semiconductors, though large-scale production has not yet been achieved.

Mentioned in

India Semiconductor Mission 2.0

Policy & Governance

An expansion of India's national semiconductor programme that goes beyond chip fabrication to encompass equipment manufacturing and full ecosystem development, including design, materials, and packaging. It was presented at the summit as central to India's AI sovereignty strategy.

Mentioned in

India Stack

Infrastructure & Compute

India's foundational digital public infrastructure comprising interoperable open APIs—including Aadhaar identity, UPI payments, and DigiLocker—that enable government and private actors to build services at scale. At the summit it was held up as a replicable model for building inclusive digital and AI infrastructure.

Mentioned in

India's AI Governance Framework

Policy & Governance

A national policy framework published in November 2025 articulating grounding principles for ethical AI in India, including human rights, fairness, transparency, explainability, privacy, and safety. It was presented at the summit as a key reference document guiding domestic AI regulation and deployment standards.

Mentioned in

Indian AI Governance Guidelines

Policy & Governance

A set of governance recommendations published at the India AI Impact Summit, covering the establishment of cross-functional expert committees and governance groups to oversee responsible AI deployment. The guidelines address data adequacy, cybersecurity thresholds, and assurance practices tailored to the Indian context.

Mentioned in

Indian Language Models

Models & Architectures

A suite of large language models covering 12 Indian languages, announced by Prime Minister Modi, designed to make AI accessible and useful for India's linguistically diverse population. The initiative was cited at the summit as receiving day-zero hardware support from AMD, signalling strong infrastructure commitment.

Mentioned in

Indic Language AI

Applications

AI systems and platforms specifically designed to support Hindi and other regional Indian languages across voice and text modalities, addressing the linguistic diversity of South Asian user populations. The summit highlighted this as a key product focus for startups such as Indus AI and Quantis AI.

Mentioned in

Indic Language Coverage

Data & Datasets

The set of Indian languages — including Bengali, Hindi, Tamil, Telugu, Kannada, Malayalam, Marathi, Gujarati, Punjabi, and Urdu — targeted for inclusion in multilingual AI models and speech systems. At the summit, expanding coverage across these languages was identified as essential for equitable AI access across India's linguistically diverse population.

Mentioned in

Indic Language Model

Models & Architectures

An AI language model localised for Indian languages and regional contexts, identified at the summit as a high-impact use case for expanding AI accessibility across India's linguistically diverse population. These models address gaps left by predominantly English-language foundation models.

Mentioned in

Indigenous Data Sovereignty

Policy & Governance

The principle that Indigenous communities hold the right to govern, access, and control data pertaining to their cultures, peoples, and territories. The summit highlighted this as an essential ethical consideration when building AI systems that draw on or affect Indigenous communities.

Mentioned in

Inference Optimization

Infrastructure & Compute

Techniques and strategies for improving the efficiency and cost-effectiveness of deploying AI models at inference time, including model selection, cost-per-quality routing, and continuous model evaluation. This was highlighted at the summit as a critical capability for scaling AI applications sustainably.

Mentioned in

Information Technology Act (IT Act)

Policy & Governance

India's primary legislation governing electronic commerce, cybercrime, and digital data, which provides the existing legal basis for regulating certain aspects of AI and data use in the absence of dedicated AI-specific law. It is frequently referenced in discussions on AI accountability and data governance in the Indian context.

Mentioned in

Inspect Framework

Tools & Frameworks

An open-source AI evaluation tool developed by the UK AI Safety Institute that has been widely adopted by companies, research organizations, and governments for assessing model capabilities and safety.

Mentioned in

Interoperability

Infrastructure & Compute

The capacity of disparate AI and digital systems to communicate, exchange, and make use of shared data seamlessly across organisational or technical boundaries. Cited at the summit in the context of Tanzania's challenge of integrating over 800 government systems into a coherent AI-enabled public service infrastructure.

Mentioned in

Interoperability Standards

Standards & Safety

Technical and policy frameworks that enable diverse AI models and platforms to exchange data and functionality seamlessly, particularly within sovereign data architectures. These standards are considered essential for cross-border and cross-platform AI collaboration without sacrificing data sovereignty.

Mentioned in

IPCC Model (for AI Governance)

Policy & Governance

A proposed governance structure for AI modelled on the UN's Intergovernmental Panel on Climate Change (IPCC), in which an international scientific panel would synthesise AI research to inform coordinated global policy. The summit discussed this model as a potential framework for achieving legitimate, evidence-based multilateral AI oversight.

Mentioned in

ISO 42001

Standards & Safety

An international standard for AI management systems that provides organisations with a governance framework spanning 13 broad dimensions, covering areas such as risk management, transparency, and accountability. It was discussed at the summit as a practical tool for structuring responsible AI deployment across industries.

Mentioned in

ISO AI Standards

Standards & Safety

International consensus-based standards developed by the International Organization for Standardization to promote interoperability, security, and safety in AI systems. At the summit, these were discussed in the context of India's BIS proposing five AI standards for adoption, reflecting efforts to align national AI governance with global benchmarks.

Mentioned in

ISO/IEC AI Standards

Standards & Safety

International standards developed by ISO and IEC covering areas such as biometric template protection and liveness detection, providing technical benchmarks for safe and interoperable AI system design. These were referenced at the summit as implicit compliance requirements for identity and biometric applications.

Mentioned in

J

Jailbreaking

Standards & Safety

The practice of crafting adversarial inputs or prompts designed to bypass the safety guardrails and content restrictions built into AI systems. It represents a significant safety and misuse risk, prompting the development of standardised adversarial benchmarks such as those from MLCommons.

Mentioned in

JCT Framework (Justifiability, Contestability, Traceability)

Standards & Safety

A regulatory accountability framework for AI in financial services and other regulated sectors requiring that decisions be justifiable (grounded in valid reasons), contestable (open to challenge by affected parties), and traceable (fully auditable). It operationalises fairness and accountability principles in high-stakes AI deployments.

Mentioned in

K

Knowledge Distillation

Models & Architectures

A model compression technique in which a smaller 'student' model is trained to replicate the behaviour and outputs of a larger 'teacher' model, reducing computational cost while retaining much of the original performance. It was highlighted at the summit as a key method for making AI accessible on resource-constrained hardware.

Mentioned in

Knowledge Graph

Data & Datasets

A structured representation of entities, concepts, and their interrelationships, used to augment AI reasoning with explicit, verifiable domain knowledge. At the summit, knowledge graphs were discussed as a complementary technique to RAG for improving factual accuracy and logical coherence in AI systems.

Mentioned in

Kubernetes

Infrastructure & Compute

An open-source container orchestration system used to automate deployment, scaling, and management of containerised AI workloads. It was highlighted at the summit as a strategy for avoiding vendor lock-in in AI infrastructure.

Mentioned in

L

LangGraph

Tools & Frameworks

An agent orchestration framework used to coordinate multi-agent AI workflows, cited at the summit in the context of the HeyMedicare health AI system. It enables stateful, graph-based coordination between multiple specialised AI agents.

Mentioned in

Large Language Model (LLM)

Models & Architectures

Large-scale neural language models such as ChatGPT and Gemini used for a wide range of general-purpose natural language tasks; India alone was cited as having 70M+ active users of such systems. Summit discussions drew important distinctions between 'open weights' models and those with fully open training data and methodology.

Mentioned in

Latency (Mission-Critical AI)

Infrastructure & Compute

The end-to-end delay in an AI or communications system, with sub-10-millisecond requirements identified as essential for real-time applications such as haptic feedback and remote telemanipulation. Summit sessions on 5G and edge AI framed ultra-low latency as a defining infrastructure challenge for safety-critical use cases.

Mentioned in

Learning Outcome Metrics

Standards & Safety

Evaluation frameworks that measure AI-in-education effectiveness through knowledge retention, development of critical thinking and inquiry skills, and equity-specific indicators. Discussed at the summit to ensure AI education tools drive meaningful, measurable, and inclusive student outcomes.

Mentioned in

Lingua Africa Initiative

Data & Datasets

A $5.5 million initiative announced at the summit by Microsoft, the Gates Foundation, FCDU, and Masakhane to fund systematic collection of African language data. The programme aims to address the severe underrepresentation of African languages in AI training datasets.

Mentioned in

Linguistic Diversity in AI

Data & Datasets

The representation of multiple languages and dialects within AI systems, requiring partnerships with local voice creators and ecosystem contributors to ensure equitable coverage. Summit discussions highlighted the challenge of supporting 12+ major languages and countless dialects, particularly in multilingual regions.

Mentioned in

LinkedIn Economic Graph

Data & Datasets

A large-scale dataset derived from LinkedIn's platform of approximately 1.3 billion members — including around 170 million in India — covering 40,000-plus skills, company data, and job listings to enable real-time labour market analysis. It was cited at the summit as a resource for understanding AI's impact on skills and employment.

Mentioned in

Liquid Cooling

Infrastructure & Compute

A data centre thermal management technique that uses liquid coolants rather than air to dissipate heat from high-density AI compute hardware, offering significantly greater energy efficiency. Summit infrastructure sessions noted a current industry ratio of approximately 70% liquid to 30% water cooling, with a trend toward fully liquid-cooled facilities to meet the demands of large-scale AI workloads.

Mentioned in

Liveness Detection

Standards & Safety

A biometric security technique that determines whether a captured sample—fingerprint, face, or iris—originates from a live person rather than a spoofed or synthetic source. It is a core component of robust identity verification systems discussed in summit sessions on digital identity and fraud prevention.

Mentioned in

Llama (Meta)

Models & Architectures

Meta's open-source large language model family, widely adopted as a foundational model for research and commercial fine-tuning. Summit discussions noted growing competition from Chinese open-source alternatives that are challenging Llama's dominance in the open-weights ecosystem.

Mentioned in

Localized Datasets

Data & Datasets

Hyper-local training data—such as soil composition, weather patterns, and pest prevalence at sub-20km resolution—required to make agricultural AI models accurate and actionable for specific communities. Most current models rely on generic national-level data, limiting their practical utility at the farm level.

Mentioned in

M

Machine Learning Models for Biometrics

Models & Architectures

Neural network and machine learning approaches that replace classical algorithmic methods in biometric matching, offering improved accuracy while requiring active bias mitigation to ensure equitable performance across demographic groups. Their adoption was discussed at the summit in the context of modernising national identity and border-security systems.

Mentioned in

Machine-Readable Metadata

Data & Datasets

Structured data schemas that transform human-readable transaction or operational logs into formats that AI agents can directly parse and act upon, as demonstrated by Tata Steel's conversion of 11.2 PB of transaction logs. The summit discussed this transformation as a prerequisite for enabling reliable agentic AI in industrial settings.

Mentioned in

Mahavistar

Applications

An AI-powered agricultural advisory system deployed by the Maharashtra state government in India, notable for compressing its learning and deployment cycle from nine months to three weeks.

Mentioned in

Mantics

Tools & Frameworks

An open-source security scanning tool used for domain vulnerability assessment in AI and software systems. Mentioned at the summit in the context of tooling available for identifying and mitigating security risks in AI deployments.

Mentioned in

Medical Device Regulation (MDR)

Policy & Governance

A regulatory framework that applies when AI systems are classified as medical devices, requiring approval from a notified body before deployment in clinical settings. Compliance with MDR was cited at the summit as a critical pathway for bringing AI-powered healthcare tools to market in regulated jurisdictions.

Mentioned in

Metadata Standards

Data & Datasets

Defined protocols for preserving critical contextual information—such as clinical context and demographic data—alongside primary datasets like medical imaging and clinical records. Standardised metadata is essential for ensuring AI models trained on such data remain accurate and equitable across diverse populations.

Mentioned in

Micro Data Centers

Infrastructure & Compute

Compact data centre facilities of less than 1 MW power capacity and approximately 50 racks, designed to co-locate compute and data storage closer to the point of use. They are particularly relevant for edge computing deployments in infrastructure-constrained regions.

Mentioned in

Mixture of Experts (MoE)

Models & Architectures

A neural network architecture in which only a subset of specialised sub-networks ('experts') are activated for any given input, enabling efficient scaling of model capacity without proportional increases in inference compute. For example, Kimi K2 employs 384 experts with 8 active per token, achieving high capability at reduced computational cost.

Mentioned in

Model Cards

Standards & Safety

Standardised documentation artefacts that accompany AI models, transparently recording their capabilities, known limitations, intended use cases, and safety testing results. They are presented at the summit as a best-practice tool for responsible model deployment and accountability.

Mentioned in

Model Compression

Models & Architectures

A set of techniques for reducing the size and computational requirements of trained neural networks so they can run on low-cost, offline-capable mobile or edge devices. This is particularly relevant for deploying AI in resource-constrained environments with limited connectivity.

Mentioned in

Model Context Protocol (MCP)

Tools & Frameworks

An open standard that defines how agentic AI systems call external tools and communicate with other agents, supporting secure OAuth 2.1 authorisation and enabling simpler agent-to-agent (A2A) interactions than bespoke architectures. Summit sessions highlight MCP servers as a practical mechanism for providing agents with secure, cloud-based access to databases and services without requiring local data downloads.

Mentioned in

Model Context Protocol (MCP) Servers

Tools & Frameworks

A modular plugin architecture that allows AI applications to connect to external tools and data sources through standardised interfaces, enabling scalable extensibility. At the summit they were referenced as a scalability mechanism in the HeyMedicare multi-agent system.

Mentioned in

Model Distillation

Models & Architectures

A technique in which a smaller, more efficient 'student' model is trained to replicate the behaviour of a larger 'teacher' model, reducing computational requirements while preserving performance. It was highlighted as a key approach for India and other emerging markets to leverage frontier AI economically.

Mentioned in

Model Drift

Standards & Safety

The degradation in an AI model's performance that occurs when real-world data patterns shift away from those present in the original training data. The summit cited models trained on pre-COVID behavioural data performing poorly during the COVID era as a concrete example.

Mentioned in

Model Governance

Policy & Governance

The set of processes and controls for managing AI models throughout their lifecycle, encompassing model registries, version control, performance monitoring, and retirement protocols. At the summit, model governance was discussed as essential for accountability and regulatory compliance in enterprise and financial AI deployments.

Mentioned in

Model Risk Management

Policy & Governance

A governance discipline encompassing the identification, assessment, and mitigation of risks arising from AI and machine learning models deployed in regulated financial institutions. It includes oversight frameworks such as model validation, performance monitoring, and audit trails specific to AI systems.

Mentioned in

Modularity

Infrastructure & Compute

A system design principle in which components can be independently swapped, upgraded, or replaced without requiring a rebuild of the entire infrastructure. At the summit it was highlighted as a key property for resilient and future-proof AI and data systems.

Mentioned in

MOSIP (Modular Open Source Identity Platform)

Applications

An open-source digital identity platform deployed across 29 countries and used to issue over 300 million IDs, with projections approaching one billion. Highlighted at the summit as a scalable, interoperable model for sovereign digital public infrastructure.

Mentioned in

Mozilla Common Voice

Data & Datasets

An open-source, multilingual voice dataset maintained by Mozilla and recognised as a Digital Public Good since 2022, including contributions in East African and other under-resourced languages. The summit cited it as a model for community-driven data collection that supports inclusive, multilingual AI development.

Mentioned in

Mule Account Networks

Standards & Safety

Networks of legitimate-appearing financial accounts used by criminal organisations to layer and launder illicit funds, making the trail of money difficult to trace. AI-based financial crime detection systems are being developed to identify the behavioural and transactional signatures of mule account activity.

Mentioned in

Mule Hunter.ai

Applications

An AI-powered fraud detection system backed by the Reserve Bank of India (RBI), designed to identify money-mule accounts and financial fraud patterns; it has been implemented across 26 or more Indian banks. The system was cited as an example of AI applied to financial integrity under regulatory oversight.

Mentioned in

Multi-Agent Orchestration

Models & Architectures

The coordination of multiple autonomous AI agents—and in robotics contexts, multiple robotic arms—toward shared objectives through structured communication and task delegation. Summit sessions highlighted its role in complex healthcare, industrial, and logistics workflows.

Mentioned in

Multi-Level Deep Learning

Models & Architectures

A deep learning approach that processes data at multiple hierarchical levels to handle complex inputs such as polymicrobial samples and variable staining in medical diagnostics, while operating within constrained network environments. It is employed in platforms like Carb Inc.'s Beta for antimicrobial resistance detection.

Mentioned in

Multi-Stakeholder Governance

Policy & Governance

A governance model that distributes AI oversight responsibilities across public institutions, regulators, academia, and civil society rather than concentrating authority in a single body. The summit referenced South Africa's approach—combining public broadcasters, the Information Regulator, universities, and innovation funds—as a practical example.

Mentioned in

Multilingual AI

Models & Architectures

AI systems designed to understand and generate content across multiple languages and regional dialects, including 20 or more African languages and Indian regional languages, with capabilities tailored to locally relevant tasks such as agriculture, legal aid, and education. At the summit, multilingual AI was emphasised as a critical enabler of equitable AI access in the Global South.

Mentioned in

Multilingual AI Health Platforms

Applications

AI-driven health information or clinical support systems designed to operate across many languages; exemplified at the summit by Hilo Health, which supports 93 languages (22 fully covered and 71 via translation). These platforms are cited as models for inclusive health AI in linguistically diverse populations.

Mentioned in

Multilingual AI Models

Models & Architectures

AI language models explicitly designed or fine-tuned to support multiple languages, prioritised at the summit as essential for global inclusion and equitable access to AI-powered services. The Bhashini Platform and India's linguistic diversity were cited as key motivating contexts.

Mentioned in

Multilingual Datasets

Data & Datasets

Training and evaluation datasets that incorporate multiple languages and culturally specific contexts, developed to counteract the dominant bias toward English in large language model pre-training data. Summit speakers emphasized their creation as essential for equitable and locally relevant AI systems in the Global South.

Mentioned in

Multilingual NLP

Models & Architectures

Natural language processing systems capable of understanding and generating text across multiple languages, enabling AI products to serve linguistically diverse populations. The summit featured case studies such as Wize's expansion from English to Hindi and beyond, which drove significant business growth.

Mentioned in

Multimodal AI

Models & Architectures

AI systems capable of processing and integrating multiple input modalities—such as text, voice, images, and video—within a unified model architecture. Summit sessions highlighted multimodal capabilities as central to making AI accessible across diverse real-world contexts, including low-literacy and multilingual environments.

Mentioned in

Multistakeholder AI Governance Model

Policy & Governance

A governance approach that brings together governments, industry, academia, civil society, and the public to shape AI policy through consensus-based, transparent, and open processes. The model is intended to ensure that no single actor unilaterally determines the norms and rules governing AI development and deployment.

Mentioned in

N

National AI Strategy

Policy & Governance

A government-level policy framework outlining a country's goals, investments, and regulatory approach to artificial intelligence development and deployment. Summit discussions highlighted active strategies in countries such as India and Kenya as models for coordinated national AI governance.

Mentioned in

National Credit Framework

Policy & Governance

An educational policy mechanism that allows learners to accumulate and stack credentials—including those for AI and digital skills—toward larger, recognised qualifications. At the summit, it was referenced as an enabler of flexible, lifelong learning pathways in the context of AI workforce development.

Mentioned in

National Education Policy (NEP) — AI Curriculum

Policy & Governance

India's National Education Policy grants technology institutions curriculum autonomy, enabling them to rapidly integrate AI and emerging-technology content without waiting for centralised approval. This flexibility was highlighted at the summit as a structural enabler for building AI talent at scale.

Mentioned in

National Education Policy 2020 (India)

Policy & Governance

An Indian government policy framework that mandates the inclusion of AI as a curriculum subject from Grade 9 onwards, positioning AI literacy as a core component of secondary education. The policy was cited at the summit as a significant national effort to build foundational AI competency at scale.

Mentioned in

National Education Policy 2020 (NEP 2020)

Policy & Governance

India's national policy framework that enables coding and AI education at the K-12 level, providing schools flexibility in implementation to prepare students for an AI-driven future.

Mentioned in

National Language Models

Models & Architectures

Small, locally developed language AI systems built to serve specific national or regional languages, designed to prevent the marginalisation of linguistically diverse populations by dominant global models. Emphasised at the summit as essential infrastructure for linguistic sovereignty and inclusive AI.

Mentioned in

Natural Language Processing (NLP)

Models & Architectures

A branch of AI concerned with enabling machines to understand, generate, and interact using human language, including low-resource and regional languages. Summit sessions specifically highlighted multilingual NLP work covering Gujarati, Hindi, and other Indian languages.

Mentioned in

NeMo Framework

Tools & Frameworks

NVIDIA's open-source toolkit for data curation, synthetic data generation, and large-scale model training, referenced at the summit as part of enterprise AI development pipelines. It supports the end-to-end lifecycle from dataset preparation through to model fine-tuning.

Mentioned in

Network Slicing

Infrastructure & Compute

A telecommunications technique that partitions a physical network into multiple virtual networks, each with dedicated service-level guarantees tailored to specific use cases. At the summit, network slicing was discussed in the context of ensuring equitable connectivity for AI applications across rural and urban populations.

Mentioned in

Neural GCM

Models & Architectures

A Google AI-powered weather and climate modelling system that combines neural networks with general circulation model dynamics to produce high-accuracy meteorological predictions. At the summit, it was highlighted for delivering monsoon forecasts to approximately 38 million Indian farmers, demonstrating AI's potential for agricultural and climate resilience.

Mentioned in

Neural Operator

Models & Architectures

A class of neural network architectures, such as Fourier Neural Operators, that learn mappings between function spaces rather than finite-dimensional vectors, making them particularly effective for solving spatial partial differential equations (PDEs) in scientific computing. They are widely used in climate and fluid-dynamics modelling.

Mentioned in

Neural Processing Unit (NPU)

Infrastructure & Compute

A dedicated hardware accelerator optimized for AI inference workloads, enabling efficient on-device AI processing in edge applications.

Neuromorphic Computing

Infrastructure & Compute

A hardware paradigm that mimics the structure and function of biological neural circuits to achieve dramatically greater energy efficiency than conventional processors. Summit discussions positioned it as a promising path toward sustainable AI infrastructure, contrasting the brain's ~20W operation with GPU power draws of 500–700W.

Mentioned in

Neurosymbolic AI

Models & Architectures

A hybrid AI paradigm that combines neural network-based learning—enabling rich, empathetic, and context-sensitive responses—with symbolic reasoning to enforce structured rules and protocol adherence. In mental health applications such as Wisa, this approach balances conversational warmth with clinical safety guardrails.

Mentioned in

NotebookLM

Tools & Frameworks

A research tool developed by Google that allows users to upload documents and data sources and extract insights through natural language queries. At the summit it was highlighted as a practical aid for synthesising research materials.

Mentioned in

Nvidia (AI Hardware)

Infrastructure & Compute

Nvidia is a leading semiconductor company whose GPUs are the dominant hardware platform for AI model training and inference acceleration. It is referenced at the summit as the primary infrastructure provider against which alternative compute options are benchmarked.

Mentioned in

NVIDIA Inception Program

Infrastructure & Compute

NVIDIA's startup support programme offering free GPU allocations, cloud credits, and rapid (48-hour to 7-day) model evaluation turnaround to over 4,000 enrolled AI startups. It was presented as a key mechanism for accelerating AI innovation in emerging markets.

Mentioned in

NVIDIA Jetson

Infrastructure & Compute

A family of edge AI computing modules from NVIDIA designed to run inference workloads locally on compact, energy-efficient hardware, used in robotics, IoT, and embedded AI applications. It was cited at the summit as the edge platform powering an AI prototype demonstrated in a session on low-resource deployment.

Mentioned in

NVIDIA NeMo

Tools & Frameworks

NVIDIA's open framework for building, fine-tuning, deploying, and optimising large-scale AI and agentic systems, with particular support for large language models and speech AI. It was referenced at the summit as production-grade tooling for enterprise agentic deployments.

Mentioned in

Nvidia Omniverse

Tools & Frameworks

Nvidia's digital twin and real-time visualisation platform, applied at the summit in the context of interactively exploring climate simulations and modelling urban impact scenarios. It enables researchers and planners to navigate complex, physics-based virtual environments.

Mentioned in

O

Observability

Infrastructure & Compute

The practice of monitoring, logging, and analysing the behaviour and performance of AI systems in production to detect degradation, unexpected outputs, or failure modes in probabilistic pipelines. At the summit, observability was highlighted as an operational necessity for maintaining reliability and trust in deployed AI systems.

Mentioned in

OECD AI Principles

Policy & Governance

An internationally agreed set of guidelines for trustworthy AI, first adopted by OECD member countries in 2019, covering transparency, accountability, robustness, and human oversight. The summit notes they form the normative foundation that the G7 Hiroshima Process subsequently operationalises into concrete codes of conduct.

Mentioned in

Offline-First Architecture

Infrastructure & Compute

A software and system design philosophy that prioritises full or partial functionality without a reliable internet connection, ensuring AI applications remain accessible in regions with poor connectivity infrastructure. It is a critical consideration for equitable AI deployment in underserved areas.

Mentioned in

On-Device (Local) Processing

Infrastructure & Compute

An architecture in which all AI inference and feature execution occur directly on the user's device, ensuring that no data is transmitted to external servers or third parties. This approach was highlighted as a privacy-preserving design principle for consumer AI products.

Mentioned in

On-Device AI Inference

Infrastructure & Compute

The execution of AI model computations locally on an edge device rather than transmitting data to a remote cloud server, preserving user privacy and reducing latency. Summit projects such as ReFI and InexT cited on-device inference as essential for handling sensitive personal data responsibly.

Mentioned in

Open Datasets

Data & Datasets

Publicly accessible datasets made available, including through government support, to enable AI research and development, particularly for institutions and startups lacking resources to generate proprietary training data.

Mentioned in

Open Government Data

Data & Datasets

Anonymised, publicly accessible datasets released by government bodies—such as UIDAI datasets on India's data.gov.in portal—to enable research, innovation, and AI development while protecting individual privacy. Open data initiatives are positioned as essential infrastructure for an equitable AI ecosystem.

Mentioned in

Open Source AI

Policy & Governance

AI systems—particularly large language models—made publicly available, with summit discussions drawing critical distinctions between 'open weights' (where model parameters are public) and fully open source (where training data, methodology, and code are also disclosed). The degree of openness has significant implications for reproducibility, safety auditing, and governance.

Mentioned in

Open Source Models

Models & Architectures

Publicly available AI models—including leading Chinese models cited as the most widely used open-source models globally—that reduce costs and support national AI sovereignty by lowering dependence on proprietary systems. Discussed at the summit as critical enablers for developing nations and resource-constrained organisations.

Mentioned in

Open Standards

Standards & Safety

Publicly available technical specifications and norms that enable interoperability between AI systems and prevent vendor lock-in, cited at the summit as a critical complement to open-source software. Adoption of open standards was recommended as a governance tool to ensure competitive and equitable AI ecosystems.

Mentioned in

Open-Source AI Model

Models & Architectures

AI models whose weights, architectures, and associated tools are publicly released, enabling anyone to download, inspect, fine-tune, and deploy them without proprietary restrictions. The summit highlighted platforms such as Hugging Face Transformers, Ollama, and Google Colab as key distribution and experimentation channels for open-source models.

Mentioned in

Open-Weight Models

Models & Architectures

AI models whose trained parameters (weights) are publicly released, enabling broad access and customisation, though the underlying training data and full methodology may not be fully disclosed. Examples discussed at the summit include DeepSeek and similar openly distributed large language models.

Mentioned in

Optical Character Recognition (OCR)

Tools & Frameworks

A technology that converts images of printed or handwritten text into machine-readable characters, highlighted at the summit for its role in digitising documents in Devanagari, Brahmi, and other regional Indic scripts. Script-specific algorithmic adaptations were identified as necessary to achieve acceptable accuracy across India's diverse writing systems.

Mentioned in

Orange Economy (Creator Economy)

Other

An economic model centered on content creators and creative industries using digital platforms as primary channels for livelihood generation. It was discussed at the summit as an emerging employment paradigm enabled by AI-assisted content production tools.

Mentioned in

Orchestration Layer

Tools & Frameworks

A software layer that governs how data flows between components and determines which models or tools are invoked within a compound AI system, enabling organisations to maintain data sovereignty while integrating diverse third-party services. At the summit, orchestration layers were presented as a strategic architectural choice for responsible AI pipeline design.

Mentioned in

Orchestration Platform

Infrastructure & Compute

A cloud-based management layer that coordinates fleets of heterogeneous robots or AI agents, while on-device AI handles local tasks such as navigation and perception. It enables scalable deployment and monitoring of distributed autonomous systems.

Mentioned in

Outcome-Based AI Regulation

Policy & Governance

A regulatory philosophy that holds AI developers and deployers accountable for demonstrably harmful results rather than prescribing specific technical implementations. Summit participants advocated this approach as more adaptable to the pace of AI innovation than prescriptive technical mandates.

Mentioned in

Outcome-Focused Governance

Policy & Governance

A regulatory philosophy that defines desired societal and market outcomes—such as fairness, market integrity, and fraud prevention—without prescribing specific technical implementations, giving organisations flexibility in how they achieve compliance. This approach was advocated at the summit as more adaptable and innovation-friendly than rule-based or prescriptive regulation.

Mentioned in

P

PARAM Supercomputing Series

Infrastructure & Compute

A line of high-performance supercomputers developed by India's Centre for Development of Advanced Computing (CDAC), ranging from the early PARAM 8000 to current-generation systems. These machines represent India's national investment in sovereign compute infrastructure for AI and scientific research.

Mentioned in

Participatory Data Collection

Data & Datasets

An approach to dataset creation that formally incorporates indigenous and local knowledge—such as village elder reports of natural disasters—into structured, machine-readable datasets. This methodology is critical for building AI systems that are accurate and relevant for underrepresented communities and geographies.

Mentioned in

Participatory Design

Standards & Safety

A co-design process in which communities and end-users are active collaborators in the development of AI solutions, rather than passive subjects of data collection. The summit presented participatory design as a cornerstone of ethical, inclusive AI development, particularly in contexts involving marginalised or underserved populations.

Mentioned in

Participatory Design Lifecycle

Standards & Safety

An inclusive development methodology that integrates diverse stakeholders—including end users and affected communities—across every stage of an AI system's lifecycle, from data collection and model design through deployment and post-deployment monitoring. It was presented at the summit as a best practice for building equitable, accountable AI systems.

Mentioned in

Participatory Evaluation

Standards & Safety

An impact measurement methodology that involves end-beneficiaries directly in assessing AI system outcomes, rather than relying solely on algorithmic or top-down metrics. The summit advocated for this approach as a means of ensuring AI deployments are genuinely accountable to the communities they affect.

Mentioned in

PASpeech Assistive Device

Applications

A pocket-sized hardware device paired with a cloud-based AI backend that enables paralysed users to communicate through speech synthesis or alternative input methods. It was presented at the summit as an example of accessible AI hardware designed for underserved and differently-abled populations.

Mentioned in

Patient Capital

Policy & Governance

Long-term grant and concessional funding—typically spanning five to ten years—that accepts delayed financial returns in exchange for sustained social or development impact. It is considered essential for funding healthcare AI projects in low-resource settings where short-term commercial returns are unlikely.

Mentioned in

Personalized Learning Paths

Applications

AI-driven adaptive content sequencing that tailors educational material to an individual learner's strengths, weaknesses, and pace, discussed at the summit as a key application of AI in education. These systems aim to improve learning outcomes by dynamically adjusting curricula based on ongoing learner performance data.

Mentioned in

Physics-Informed AI

Models & Architectures

A hybrid modelling approach that integrates established scientific or physics-based models with machine learning to improve robustness and interpretability in domains such as weather forecasting, climate modelling, and biomedical research. By grounding neural networks in physical laws, these systems can generalize better with less data.

Mentioned in

Physics-Informed Neural Networks (PINNs)

Models & Architectures

Neural networks whose training is constrained by known physical laws, improving interpretability and ensuring predictions remain physically plausible. They were discussed at the summit in the context of scientific and engineering applications such as flood modelling and semiconductor simulation.

Mentioned in

Policy as Code

Policy & Governance

The practice of expressing regulations, tariffs, and compliance rules in machine-readable, executable formats so they can be automatically interpreted and enforced by software systems. This approach enables real-time regulatory compliance checks within AI and digital infrastructure pipelines.

Mentioned in

Post-Market Monitoring

Policy & Governance

The continuous tracking and evaluation of an AI system's real-world performance after deployment, including collection of user feedback and execution of correction procedures when issues arise. Discussed at the summit as a critical governance mechanism to ensure ongoing safety and efficacy of deployed AI.

Mentioned in

Post-Quantum Cryptography (PQC)

Standards & Safety

Cryptographic algorithms designed to remain secure against attacks from future quantum computers, addressing the vulnerability of widely deployed classical encryption standards. The summit noted that global migration to PQC is already underway and represents a critical infrastructure challenge intersecting with AI security.

Mentioned in

Power Consumption

Infrastructure & Compute

The energy demands associated with training and running large AI models, identified at the summit as a critical constraint on model scaling and sustainable AI deployment. Growing power requirements were linked to broader concerns about environmental impact and infrastructure planning.

Mentioned in

Power Purchase Agreements (PPAs)

Infrastructure & Compute

Long-term contractual arrangements between AI data centre operators and renewable energy providers that guarantee a fixed supply and price of clean electricity. PPAs were discussed at the summit as a key mechanism for meeting the growing energy demands of AI infrastructure sustainably.

Mentioned in

Power Usage Effectiveness (PUE)

Infrastructure & Compute

A metric quantifying data-centre energy efficiency, calculated as the ratio of total facility power consumption to the power delivered to computing equipment; a PUE of 1.0 is perfect efficiency. Summit discussions cite Intel's achievement of 1.06 PUE as a benchmark against the industry-standard water-cooled range of 1.2–1.5.

Mentioned in

Pre-trained Classifiers

Models & Architectures

Machine learning models that have already been trained on large datasets and can be fine-tuned or directly applied to specific tasks such as pose detection or intent classification. They reduce the data and compute required to deploy AI capabilities in new applications.

Mentioned in

Precision Farming

Applications

A data-driven agricultural approach that applies plot-by-plot interventions—including optimised fertiliser, pesticide, and irrigation decisions—based on granular sensor and AI-generated insights. It aims to maximise yield while minimising resource use and environmental impact.

Mentioned in

Predictive Asset Health Monitoring

Applications

The use of drone and satellite imagery combined with AI analytics to assess the condition of power grid infrastructure such as feeders, enabling utilities to schedule maintenance before failures occur. This approach reduces outage risk and operational costs compared to time-based maintenance schedules.

Mentioned in

Predictive Maintenance

Applications

An AI-driven manufacturing application that analyses sensor and operational data to anticipate equipment failures before they occur, reducing unplanned downtime by approximately 20%. It is highlighted at the summit as a high-impact industrial use case for applied AI.

Mentioned in

Predictive Modeling

Applications

The use of statistical and machine learning techniques to generate risk scores and forecast outcomes, such as cardiac event probabilities or metabolic complication predictions in healthcare settings. At the summit, predictive modeling was highlighted as a key application of AI in clinical decision support.

Mentioned in

Presentation Attack Detection (PAD)

Standards & Safety

A class of defensive techniques in biometric systems designed to identify and reject fraudulent inputs such as deepfake images, masks, printed photos, or replay videos. Summit discussions highlighted PAD as increasingly critical as AI-generated spoofing methods become more sophisticated.

Mentioned in

Privacy by Design

Standards & Safety

An engineering and organisational philosophy that integrates data privacy protections into a product or system from its inception rather than adding them as an afterthought. The summit presents it as a best practice for AI developers seeking to comply with regulations like GDPR and to build user trust.

Mentioned in

Privacy Protection Frameworks

Policy & Governance

Structured governance mechanisms and technical standards being developed to safeguard personal data in the context of large language models and public-facing AI applications. Summit discussions noted that dedicated committees are actively formulating these frameworks to address emerging privacy risks.

Mentioned in

Privacy-Enhancing Technologies (PETs)

Infrastructure & Compute

A suite of technical tools and methods—including Trusted Execution Environments (TEEs), differential privacy, and federated learning—designed to enable data analysis and AI model training while protecting individual privacy. Presented at the summit as critical infrastructure for compliant and trustworthy AI systems.

Mentioned in

Production Deployment of AI Models

Infrastructure & Compute

The process of transitioning AI models from experimental notebook environments to scalable, secure, and reliable production systems. Summit speakers identified this operational gap as a major barrier preventing promising AI prototypes from delivering real-world impact.

Mentioned in

Project Vani

Data & Datasets

Google's large-scale multilingual speech data initiative comprising over 150,000 hours of audio across 100 languages, highlighted at the summit as a significant contribution to low-resource-language AI research. The project addresses critical data scarcity for building robust speech recognition and synthesis systems in underrepresented languages.

Mentioned in

Project-Based Learning (PBL) for AI Education

Other

A pedagogical approach to AI and technology education that anchors learning in real-world problems and applied projects rather than purely theoretical instruction. Summit discussions highlighted PBL as critical for building practical AI talent pipelines, particularly in emerging economies.

Mentioned in

Prompt Engineering

Tools & Frameworks

The practice of designing and refining input prompts to elicit accurate, relevant, or high-quality outputs from AI language models. Summit speakers emphasised it as a critical skill for non-technical users seeking to interact effectively with generative AI systems.

Mentioned in

Prompt Injection

Standards & Safety

A security attack vector in which adversarial or hidden instructions are embedded within input text to manipulate a large language model into executing unintended or harmful actions. It was highlighted at the summit as a critical vulnerability in LLM-based applications, particularly those processing untrusted external content.

Mentioned in

Prompt Injection Attack

Standards & Safety

A security vulnerability in which hidden or malicious instructions are embedded within model inputs to manipulate an AI system into performing unintended actions, such as data exfiltration or executing malware. It was highlighted at the summit as a significant threat vector for deployed LLM-based applications.

Mentioned in

Proportional Accountability

Policy & Governance

A risk-tiered governance principle under which the level of regulatory oversight applied to an AI system is calibrated to the potential severity of harm it can cause—requiring lighter oversight for low-stakes applications (e.g., chatbots) and stricter requirements for high-stakes ones (e.g., insurance underwriting). The summit discussed this as a pragmatic framework for context-sensitive AI regulation.

Mentioned in

Proportional AI Governance

Policy & Governance

A regulatory design principle that calibrates rules and oversight mechanisms to the specific risk level, context, and scale of an AI application rather than applying uniform requirements across all systems. It was discussed at the summit as a way to avoid stifling low-risk innovation while maintaining rigorous oversight where harms are most severe.

Mentioned in

Protein Data Bank (PDB)

Data & Datasets

A global open-access repository of three-dimensional structural data for biological macromolecules, including proteins and nucleic acids. It serves as a foundational data source for AI systems in structural biology and drug discovery.

Public Digital Infrastructure

Infrastructure & Compute

Shared, government-backed digital platforms and systems—such as India's Aadhaar identity system, UPI payments network, and vaccination platforms—that provide foundational capabilities for scaling digital and AI-enabled services. These were cited at the summit as models for how interoperable infrastructure can accelerate AI adoption at population scale.

Mentioned in

Public Key Infrastructure (PKI)

Infrastructure & Compute

A cryptographic framework using certificate authorities and delegation chains to issue, manage, and verify digital certificates that authenticate identities and secure communications. It was referenced at the summit as foundational infrastructure for trusted digital credential and identity systems.

Mentioned in

Public Procurement (AI)

Policy & Governance

The open and transparent process by which governments and public institutions acquire, vet, and deploy AI systems, ensuring accountability, fairness, and alignment with public interest. Robust procurement frameworks are seen as a key governance lever for responsible AI adoption.

Mentioned in

Public-Private Partnerships (PPPs)

Policy & Governance

Collaborative governance models in which government bodies and private sector organisations share responsibilities, resources, and risk to develop and deploy AI infrastructure and services. At the summit, these were highlighted as a key mechanism for scaling AI initiatives while maintaining public accountability.

Mentioned in

PyTorch

Tools & Frameworks

An open-source, Python-based deep learning framework widely used for building and training AI and machine learning models. At the summit, it was noted as the primary framework used by the vast majority of enterprise AI customers, valued for its flexibility and vendor-agnostic design.

Mentioned in

Q

Qualitative Data Collection

Data & Datasets

Ground-level, community-engaged research methods used to gather contextual, narrative, and experiential information that quantitative datasets cannot capture. In AI development, it is used to validate model assumptions, surface local needs, and ensure outputs are meaningful to affected populations.

Mentioned in

Quantization

Models & Architectures

A model compression technique that reduces the numerical precision of model weights (e.g., from 32-bit to 8-bit), shrinking model size and memory footprint to enable deployment on resource-constrained hardware. It was highlighted at the summit as a key technique for making AI accessible in low-resource settings.

Mentioned in

Quantum Computing

Infrastructure & Compute

A computing paradigm that exploits quantum-mechanical phenomena such as superposition and entanglement to perform certain calculations far faster than classical computers. Summit discussions placed quantum computing at an early stage of commercialisation in India, with relevance to future AI and cryptography capabilities.

Mentioned in

R

Randomized Controlled Trial (RCT)

Other

A gold-standard experimental methodology for evaluating the causal impact of an intervention by randomly assigning participants to treatment and control groups. At the summit, an RCT was cited in the context of Letris's 5-month study across 178 schools in Espírito Santo to measure AI-driven educational outcomes.

Raspberry Pi

Infrastructure & Compute

A low-cost, single-board computer widely used as an edge computing device for IoT and agricultural AI applications in resource-constrained environments. Summit speakers cited it as an example of affordable hardware enabling AI deployment in emerging-market settings.

Mentioned in

Red Teaming

Standards & Safety

A structured adversarial testing methodology in which experts attempt to identify vulnerabilities, failure modes, and harmful outputs in AI systems before deployment. The summit highlighted a cross-country red-teaming exercise coordinated by Singapore's IMDA involving nine Asia-Pacific nations as a model for international AI safety collaboration.

Mentioned in

Red-Teaming

Standards & Safety

An adversarial testing methodology in which teams systematically probe AI models for vulnerabilities including bias, toxicity, harmful outputs, and safety failures before deployment. At the summit, red-teaming was identified as a core responsibility of frontier AI model developers in ensuring safe and trustworthy systems.

Mentioned in

Reference Architecture

Infrastructure & Compute

A standardised blueprint specifying the configuration of GPU types, networking topology, storage systems, and orchestration layers for deploying large-scale AI infrastructure, such as Nvidia's GPU clustering reference design. At the summit, reference architectures were discussed as a means of accelerating sovereign and enterprise AI infrastructure buildouts.

Mentioned in

Regulatory Compliance in AI-Enabled Professional Services

Policy & Governance

The challenge of aligning AI deployments in regulated sectors—such as legal (bar licensing, malpractice) and financial services (RBI, MHA oversight)—with existing professional and statutory frameworks that evolve more slowly than the technology. Summit discussions highlighted regulatory lag as a key barrier to responsible AI adoption in these sectors.

Mentioned in

Regulatory Coordination

Policy & Governance

A multi-level governance model in which central agencies (such as ICMR or drug regulatory bodies) set certification standards for AI systems while state or regional authorities manage training, rollout, and scaling. The summit highlighted this division of responsibilities as key to operationalising AI governance in large federal systems like India.

Mentioned in

Regulatory Harmonization

Policy & Governance

The coordination of AI and medical technology standards across multiple national regulatory authorities—particularly discussed in the African context—to reduce redundant approval cycles and facilitate faster, safer deployment of AI-driven products across borders.

Mentioned in

Regulatory Sandbox

Policy & Governance

A controlled, time-limited framework that allows companies to test innovative AI products and services under regulatory supervision with relaxed compliance requirements, enabling experimentation while managing systemic risk. It was referenced at the summit as a mechanism for fostering responsible AI innovation in financial and other regulated sectors.

Mentioned in

Regulatory Sandboxes

Policy & Governance

Government-facilitated controlled environments that allow organisations to pilot and test AI systems under relaxed regulatory conditions before broader deployment.

Mentioned in

Reinforcement Learning from Human Feedback (RLHF)

Models & Architectures

A fine-tuning technique in which human evaluators rank or score model outputs, and those preferences are used as a reward signal to iteratively align the model's behaviour with human values and intentions. It was discussed at the summit as a foundational method for making large language models safer and more helpful.

Mentioned in

Renewable Energy Integration

Applications

The process of incorporating solar, wind, hydro, and nuclear power into existing electricity grids alongside mechanisms such as corporate power purchase agreements (PPAs) that enable private entities to procure renewable energy directly. The summit discussed AI's role in optimising grid balancing and forecasting for these diverse sources.

Mentioned in

Replit

Tools & Frameworks

An AI-assisted cloud-based coding platform that enables users to write, run, and collaborate on code with AI support, cited at the summit as an example of rapid capability improvement in AI developer tools. It was used to illustrate how quickly AI coding assistants have advanced within a single year.

Mentioned in

Resilient AI Challenge

Standards & Safety

An open benchmarking competition evaluating AI models on both predictive accuracy and energy efficiency on shared hardware, designed to promote the development of sustainable and robust AI systems. Highlighted at the summit as an initiative driving accountability in model development practices.

Mentioned in

Responsible AI

Standards & Safety

An operational framework for developing and deploying AI systems in a manner that is fair, accountable, safe, and transparent, with a specific emphasis at the summit on the FAST-P principles (Fairness, Accountability, Safety, Transparency, and Privacy).

Mentioned in

Responsible AI Framework

Standards & Safety

A structured, open-source methodology—exemplified at the summit by Microsoft's approach—that guides organisations in developing, deploying, and governing AI systems in a manner that is ethical, safe, and accountable. It typically encompasses principles, tooling, and processes spanning the full AI development lifecycle.

Mentioned in

Responsible AI Frameworks

Standards & Safety

Industry-wide standards and structured approaches that integrate ethics, transparency, and accountability into the design, deployment, and governance of AI systems. Multiple summit speakers cited these frameworks as essential for building trustworthy AI at scale.

Mentioned in

Responsible AI in Fintech

Policy & Governance

A regulatory and industry framework—anchored in India by Reserve Bank of India (RBI) guidance—emphasizing industry-led trustworthiness, transparency, and accountability in the deployment of AI systems within financial technology services.

Mentioned in

Responsible AI Principles

Standards & Safety

High-level ethical commitments — including fairness, reliability, safety, privacy, security, inclusivity, accountability, and transparency — that organisations are expected to embed throughout the AI development lifecycle. At the summit these principles were discussed as the normative foundation underpinning more concrete governance and assurance mechanisms.

Mentioned in

Results-Based Financing

Policy & Governance

A funding mechanism in AI-enabled education and development programs where payments are triggered only when measurable outcomes—such as verified learning improvements—are achieved. It aligns incentives between technology providers, funders, and beneficiaries to ensure AI delivers real-world impact.

Mentioned in

Retrieval-Augmented Generation (RAG)

Models & Architectures

A technique that enhances generative AI models by dynamically retrieving relevant information from curated external knowledge sources at inference time, grounding outputs in verified data. RAG was highlighted at the summit as a key method for improving factual accuracy and domain specificity in large language model applications.

Mentioned in

Risk-Based AI Regulation

Policy & Governance

A regulatory approach that calibrates oversight and safeguards according to the potential harm a given AI application could cause, applying stricter rules to higher-risk use cases. It was presented at the summit as the preferred framework for balancing innovation with public protection.

Mentioned in

Risk-Based Governance

Policy & Governance

A governance approach that allocates regulatory oversight proportionately to the risk level of an AI use case, applying intensive testing and scrutiny to high-risk applications while permitting lighter-touch processes for low-risk ones. It was discussed at the summit as a pragmatic framework for balancing innovation with accountability.

Mentioned in

Robotic Process Automation (RPA)

Applications

A technology that automates repetitive, rule-based digital tasks such as invoice processing, data entry, and workflow orchestration. At the summit, RPA was discussed as a practical near-term AI application for enterprise productivity.

Mentioned in

Robotic Process Automation (RPA) and Agentic Process Automation

Tools & Frameworks

RPA automates structured, rule-based workflows, while Agentic Process Automation represents its evolution toward AI agents capable of handling complex, autonomous, multi-step tasks. Automation Anywhere highlighted this progression at the summit as a defining shift in enterprise automation strategy.

Robotics

Applications

An emerging deep-tech domain involving the design, deployment, and operation of physical autonomous or semi-autonomous machines, referenced at the summit as a key area of future AI application and investment. It encompasses both industrial automation and next-generation human-robot interaction.

Mentioned in

Runtime Governance

Standards & Safety

The practice of injecting dynamic guardrails, constraints, and policy controls into AI systems at the point of execution rather than embedding only static policy documents at design time. This enables adaptive, context-sensitive safety and compliance enforcement during live AI operation.

Mentioned in

S

Sandbox Environment

Policy & Governance

A controlled regulatory space that allows organisations to test AI products and services under supervised conditions before full legal compliance obligations apply. It was presented at the summit as a key regulatory innovation tool for responsible deployment.

Mentioned in

Sandboxing

Standards & Safety

A restricted deployment environment used to assess the risks and behaviours of AI systems before full public rollout, noted at the summit as a necessary but insufficient safeguard on its own. It enables controlled experimentation while limiting potential harms during early-stage evaluation.

Mentioned in

Satellite Data

Data & Datasets

Remote sensing imagery and geospatial data captured by satellites, discussed at the summit in the context of AI-driven damage assessment and environmental monitoring applications. Such data enables large-scale, real-time observation of physical and ecological conditions.

Mentioned in

Satellite Imagery Analysis

Applications

The use of remote sensing and satellite data—including soil water content, humidity, and phenotype analysis—for applications such as agricultural advisory, disaster damage assessment, and humanitarian needs analysis.

Mentioned in

Schema.org

Standards & Safety

A collaborative, community-maintained vocabulary for structured data markup on the web, enabling machines to understand the meaning of content across diverse sources. It was referenced at the summit in discussions of data interoperability and shared standards for AI-ready datasets.

Mentioned in

SEA-Lion

Models & Architectures

A multilingual large language model specifically designed and trained to support the diverse languages of Southeast Asia. It was discussed at the summit as an example of regionally tailored AI development addressing underrepresented linguistic communities.

Mentioned in

Sector-Specific AI Regulation

Policy & Governance

A regulatory model, exemplified by Israel, in which AI governance is implemented within existing domain-specific frameworks—such as financial services, energy, agriculture, and healthcare—rather than through a single horizontal AI law. This approach was debated at the summit as an alternative to comprehensive cross-sector AI legislation.

Mentioned in

Secure-by-Design

Standards & Safety

A development philosophy in which security controls and privacy protections are architected into AI systems from the earliest design stages rather than added retrospectively. The summit framed this as a foundational principle for building trustworthy and resilient AI products.

Mentioned in

Semantic Search

Applications

An information retrieval technique that uses vector similarity and contextual understanding to match queries to relevant content, applied at the summit in the context of matching patient records to eligibility criteria for health schemes. Unlike keyword search, semantic search captures meaning rather than exact term matches.

Mentioned in

Semiconductor Manufacturing

Infrastructure & Compute

The industrial capability to design and fabricate integrated circuits and chips, considered strategically critical for AI sovereignty. Summit discussions referenced India's missed opportunity in 1986 and the subsequent dominance of TSMC, which now holds approximately 75% of global advanced chip manufacturing market share.

Mentioned in

Sentiment Analysis

Applications

An AI-driven natural language processing technique used to detect and classify the emotional tone of text or speech in real time, with summit examples including live extraction of customer sentiment during call-centre interactions. It is a core capability in enterprise conversational AI deployments.

Mentioned in

Seven Sutras (India AI Governance Guidelines)

Policy & Governance

India's set of seven AI governance principles providing actionable pathways for builders, enterprises, and institutions to develop and deploy AI responsibly within the Indian context.

Mentioned in

Shared Responsibility Model (AI in Education)

Policy & Governance

A multi-stakeholder governance framework that distributes accountability for safe and effective AI use in educational contexts across platforms, governments, educators, parents, and young people.

Mentioned in

Sigma EyePiece

Applications

A modular three-piece attachment that retrofits standard optical microscopes with automated, AI-powered image analysis capabilities. The device was showcased as a low-cost approach to bringing diagnostic AI to resource-constrained clinical and research settings.

Mentioned in

Situational Awareness (AI in Defense)

Applications

The capacity of AI-enabled defense and military systems to maintain an accurate, real-time understanding of an operational environment, critically dependent on anomaly detection systems with low false-alarm rates. Achieving reliable situational awareness remains a core technical and governance challenge for military AI applications.

Mentioned in

Skill India Digital Hub

Applications

A Government of India online portal providing digital skilling resources, reclassified at the summit as Digital Public Infrastructure (DPI) to reflect its role as a foundational public asset for workforce development.

Mentioned in

Small Language Model (SLM)

Models & Architectures

A compact language model designed for efficient deployment in enterprise settings, offering a practical alternative to large-scale LLMs where resource constraints or domain specificity are priorities. SLMs are gaining traction as organisations move beyond general-purpose LLMs toward leaner, more cost-effective solutions.

Mentioned in

Smart Meter

Infrastructure & Compute

A digital energy metering device deployed in electricity distribution networks that enables real-time consumption monitoring and AI-facilitated demand response programmes. Smart meters were cited at the summit as a key enabling technology for intelligent grid management in India's energy sector.

Mentioned in

Sovereign AI

Policy & Governance

The strategic development of domestic AI capacity—including models, infrastructure, and talent—rather than dependence on foreign technology companies' licensed or API-based services. Summit discussions emphasised its necessity for sectors handling sensitive national data such as courts and healthcare.

Mentioned in

Sovereign AI Models

Models & Architectures

Country- or region-specific large language models trained or fine-tuned to reflect local languages, cultural contexts, and governance requirements—such as India's Bhaaratiya and other Indic LLMs supporting all 22 scheduled languages. They are discussed at the summit as essential for inclusive AI development and national digital sovereignty.

Mentioned in

Sovereign Bouquet of Models

Policy & Governance

An Indian policy concept emphasising the cultivation of a diverse portfolio of domestically developed AI models rather than relying on imported solutions. The approach aims to ensure strategic autonomy and contextual relevance for India's AI ecosystem.

Mentioned in

Sovereign Cloud

Infrastructure & Compute

A cloud computing stack built entirely using open-source technologies and in-house engineering talent to ensure a nation retains full control over its data infrastructure, free from foreign vendor dependencies. The summit cited India's YOTA initiative as an example of sovereign cloud development aimed at strengthening national digital independence.

Mentioned in

Sovereign Compute Infrastructure

Infrastructure & Compute

Nationally or regionally controlled data centre and computing resources that enable a country to develop, train, and host AI models without dependence on foreign cloud providers. Summit discussions framed sovereign compute as a prerequisite for meaningful data sovereignty and independent AI capability.

Mentioned in

Standard of Care

Policy & Governance

A legal and clinical benchmark defining the level of care and diligence a reasonably competent professional is expected to provide, used as the reference point in medical malpractice litigation. At the summit, a reported 14% increase in US malpractice claims between 2023 and 2025 was linked to questions about physician accountability when AI tools are involved in clinical decision-making.

Mentioned in

Standardised Port Data Repositories

Data & Datasets

Common, interoperable data storage infrastructure shared across port operators to enable coordinated logistics, analytics, and AI applications in maritime trade. Standardisation ensures data consistency and reduces integration costs across stakeholders.

Mentioned in

Stanford AI Index

Other

An annual research report from Stanford University that tracks global AI development trends, including country-level metrics on AI talent penetration, diversity, and research output. It was cited at the summit as a key benchmarking tool for assessing India's standing in the global AI landscape.

Mentioned in

STPI Centres of Entrepreneurship

Applications

Incubation hubs established by India's Software Technology Parks of India that provide AI startups with mentorship, market access, funding pathways, and global exposure. They were cited as a key component of India's strategy to nurture a domestic AI innovation ecosystem.

Mentioned in

Superconducting Qubits

Models & Architectures

A leading quantum computing hardware technology, used by IBM among others, in which quantum bits are implemented using superconducting circuits that must be cooled to approximately 2 Kelvin. They are a primary platform for near-term quantum computing research and development.

Mentioned in

Sustainable Development Goals (SDGs)

Policy & Governance

The United Nations' 17 global goals for social, economic, and environmental progress, used at the summit as a benchmark framework for evaluating the societal and planetary impact of AI deployments. Aligning AI outcomes with SDGs was presented as a key criterion for responsible innovation.

Mentioned in

Synthetic Data

Data & Datasets

Artificially generated data used to augment or replace real-world training data, particularly valuable in low- and middle-income countries (LMICs) where data quality gaps exist. Summit speakers emphasised the need for transparency when disclosing whether training datasets contain synthetic versus real data.

Mentioned in

Synthetic Data Generation

Data & Datasets

The process of algorithmically producing artificial datasets that mirror the statistical properties of real data while removing or obscuring personally identifiable information (PII), enabling privacy-preserving model training and data augmentation. This technique is particularly valuable in regulated domains such as healthcare and finance where access to real data is restricted.

Mentioned in

SynthID

Tools & Frameworks

A tool developed by Google DeepMind that embeds imperceptible digital watermarks into AI-generated content—including images, text, audio, and video—to enable verification of provenance and authenticity. At the summit, SynthID was highlighted as a key instrument for combating AI-generated misinformation and supporting content integrity.

Mentioned in

T

Technolegal Approach

Policy & Governance

A hybrid model that combines technological controls with regulatory frameworks to prevent AI-related harms. It advocates for co-designing legal and technical solutions rather than treating them as separate domains.

Mentioned in

Technological Leapfrogging

Policy & Governance

The strategy of bypassing earlier, less efficient stages of technological infrastructure to adopt more advanced solutions directly—discussed at the summit as a potential pathway for developing regions in AI adoption, though participants cautioned it requires intentional policy and investment rather than occurring automatically.

Mentioned in

Technology Readiness Level (TRL)

Standards & Safety

A standardised nine-point scale used to assess the maturity of a technology from basic research (TRL-1) through to fully deployed systems (TRL-9). At the summit, TRL-7 was cited for the ParisSpeak system, indicating a pilot demonstration in an operational environment.

Mentioned in

Text-to-Speech (TTS)

Tools & Frameworks

A speech synthesis technology that converts written text into spoken audio, discussed at the summit in the context of multilingual and Sanskrit-language applications requiring high phonetic accuracy. Building robust TTS systems for low-resource and classical languages was identified as both a technical and cultural preservation priority.

Mentioned in

Thematic Working Groups (Bletchley Process)

Policy & Governance

Network-based collaborative clusters established under the Bletchley Process framework to address specific AI safety issues through focused, multi-stakeholder problem-solving.

Mentioned in

Token-Based Pricing

Policy & Governance

The prevailing commercial model for large language model APIs in which customers are charged per token (unit of text) processed or generated. Summit participants acknowledged this as a transitional and imperfect pricing mechanism, particularly for enterprise customers with unpredictable workloads.

Mentioned in

Tokenization of Credentials and Assets

Applications

The conversion of real-world credentials or assets — such as farmer identity records or utility consumption data — into portable, tradeable digital tokens on a data or financial infrastructure layer. The summit highlighted this as a mechanism for enabling new economic opportunities, such as peer-to-peer energy trading and alternative credit scoring.

Mentioned in

Tokens

Models & Architectures

The discrete units of text (words, sub-words, or characters) that large language models process during inference and training, with each token representing a unit of computational work. At the summit, the economic value of tokens generated domestically was framed as a matter of digital and economic sovereignty, particularly for countries like India.

Mentioned in

Tokens per Watt per Dollar

Infrastructure & Compute

An efficiency metric for AI inference that measures the number of tokens generated per unit of energy consumed per unit of cost, emphasising overall output efficiency rather than raw power consumption alone. The summit introduced this metric as a more holistic benchmark for evaluating the sustainability and cost-effectiveness of AI infrastructure.

Mentioned in

Traditional Machine Learning

Models & Architectures

Classical ML approaches (distinct from large language models) that have demonstrated sustained, practical efficacy in domain-specific applications such as tuberculosis screening in healthcare, adaptive learning in education, and crop monitoring in agriculture. Summit speakers noted these methods remain competitive in low-resource or high-reliability contexts where LLMs underperform.

Mentioned in

Transfer Learning

Models & Architectures

A machine learning technique in which a model trained on a data-rich source domain (e.g., Europe or North America) is subsequently fine-tuned on a smaller dataset from a target domain (e.g., India or the Global South) to achieve strong performance where data is scarce. The summit highlights it as a practical strategy for building equitable AI across underrepresented regions.

Mentioned in

Transformer Architecture

Models & Architectures

The dominant neural network paradigm introduced in 2017, underpinning most modern large language and foundation models. Summit discussions noted that the architecture may be approaching a saturation point, prompting exploration of next-generation alternatives.

Mentioned in

Transformer-Based Models

Models & Architectures

A class of neural network architectures built on the self-attention mechanism that underpin most modern large language models and are widely used for tasks including text generation, code synthesis, and clinical dialogue. Challenges noted at the summit include explainability, code-mixing in multilingual contexts, and the need for fine-tuning on domain-specific datasets such as therapy transcripts.

Mentioned in

Trust Frameworks for AI

Standards & Safety

Structured architectures and governance principles that define how trust is established and maintained between humans and AI systems—and vice versa—across both human-to-digital and digital-to-human interaction contexts. They provide the foundation for deploying AI in sensitive domains where user confidence and accountability are paramount.

Mentioned in

U

Uncertainty Estimation

Standards & Safety

A technique that quantifies model confidence to enable threshold-based decision-making—for example, routing outputs above 80% confidence as positive, below 20% as negative, and 20–80% as uncertain for human review. Raised at the summit as a key mechanism for responsible AI deployment in high-stakes applications.

Mentioned in

Uncertainty Quantification

Standards & Safety

The practice of accompanying AI model predictions with calibrated confidence or reliability scores to communicate the degree of uncertainty in each output. At the summit it was emphasised as a critical requirement for high-stakes applications such as flood early warning and clinical decision support.

Mentioned in

UNESCO Recommendation on the Ethics of AI

Policy & Governance

A globally adopted framework establishing ethical principles for the development and governance of AI, used as the basis for country-level Readiness Assessment Methodologies (RAMs) in more than 80 countries. It was cited at the summit as a foundational reference for national AI policy alignment.

Mentioned in

UNESCO Recommendation on the Ethics of Artificial Intelligence (2021)

Policy & Governance

A global normative instrument adopted by UNESCO that addresses ethical challenges of AI, including the spread of misinformation, synthetic content risks, and the protection of youth. Referenced at the summit as a key international policy baseline for member states developing national AI governance frameworks.

Mentioned in

Unified Payments Interface (UPI)

Applications

India's real-time interoperable digital payments platform, presented at the summit as a benchmark model of open Digital Public Infrastructure design that enables frictionless transactions across institutions. Its architecture illustrates how DPI can democratise financial services at national scale.

Mentioned in

UPI Open Ecosystem Model

Policy & Governance

India's Unified Payments Interface is cited at the summit as a successful precedent for an open, interoperable digital ecosystem that drove rapid nationwide scale, offered as a template for AI infrastructure development.

Mentioned in

V

Vector Database

Infrastructure & Compute

A database optimised for storing and querying high-dimensional vector embeddings, enabling efficient similarity search for applications such as encrypted speaker diarization and semantic retrieval. It is a core infrastructure component in modern AI pipelines involving embeddings from language or audio models.

Mentioned in

Verifiable Credentials

Standards & Safety

Cryptographically secured digital attestations of a person's skills, qualifications, or identity that can be instantly verified without reliance on a central intermediary, thereby reducing the cost of trust in digital transactions. The summit positions them as a critical component of next-generation Digital Public Infrastructure.

Mentioned in

Verifiable Digital Credentials

Tools & Frameworks

Tamper-evident, digitally signed certifications of skills or qualifications—often anchored on blockchain or other trusted platforms—that workers can carry and share across employers and borders. Summit sessions highlighted these as a mechanism to create portable, trustworthy worker profiles in an AI-transformed labour market.

Mentioned in

Vertex AI

Tools & Frameworks

Google's enterprise AI platform, highlighted at the summit for its data governance capabilities and intellectual property protections. It provides organisations with managed infrastructure for building, deploying, and monitoring machine learning models.

Mentioned in

Vertical Working Groups

Policy & Governance

Sector-specific expert bodies—such as those covering healthcare, robotics, and financial services—established in Japan's AI governance framework to develop tailored safety and regulatory guidance. They complement horizontal working groups that address cross-cutting AI safety concerns.

Mentioned in

Viksit Bharat 2047

Policy & Governance

India's national vision to achieve developed-nation status by 2047, the centenary of its independence. AI and digital infrastructure are positioned as central pillars of this long-term economic and social development agenda.

Mentioned in

Vision Language Model (VLM)

Models & Architectures

A class of multimodal AI model that jointly processes visual and textual inputs, discussed extensively in the medical AI context at the summit. Speakers noted VLMs are prone to hallucination and perform poorly under distribution shift, raising concerns about clinical deployment reliability.

Voice Cloning

Applications

AI-powered technology that synthesises a custom, natural-sounding voice from audio samples, enabling personalised text-to-speech experiences. At the summit, this was discussed in the context of Indic-language platforms creating localised voice products.

Mentioned in

Voice-First Interface

Applications

A human–computer interaction modality that prioritises spoken language as the primary input and output channel, designed to serve low-literacy users and populations relying on feature phones with limited screen interfaces. The summit identified this as the preferred access model for underserved communities.

Mentioned in

W

Wahal NLP Dataset

Data & Datasets

An open-source African speech dataset comprising approximately 11,000 hours of automatic speech recognition (ASR) and text-to-speech (TTS) data for African languages including Wolof and Sagal. It was referenced at the summit as a critical resource for building AI systems that serve underrepresented African linguistic communities.

Mentioned in

Watermarking and Content Labelling

Standards & Safety

Technical measures that embed identifiers into or attach metadata to AI-generated content, enabling its detection and attribution. Discussed at the summit as important safeguards for transparency, provenance tracking, and combating AI-generated misinformation.

Mentioned in

Wave2Vec

Models & Architectures

A pre-trained self-supervised speech representation model fine-tuned on Indian languages for downstream speech recognition tasks. At the summit it was cited by the Sentinel Mavericks team, who adapted it across five Indian languages.

Mentioned in

World Models

Models & Architectures

Foundation models that encode an understanding of physical laws and 3D spatial structure, enabling robots and embodied AI systems to reason about the real world from limited sensory input such as 2D images. They are discussed at the summit as a key architectural advance toward robust robotic perception and planning.

Mentioned in

Z

Zero Trust Architecture

Infrastructure & Compute

A cybersecurity model that eliminates implicit trust within a network by requiring continuous identity verification and strict authorisation for every user, device, and application, regardless of location. At the summit, it was discussed in the context of cloud-based AI security, with Zscaler's implementation cited as a reference model for protecting AI workloads.

Mentioned in

Zero-Trust Architecture

Standards & Safety

A security model applied at both the network and AI-model levels that requires continuous verification of every user, device, and process rather than assuming trust based on network location. The summit identified zero-trust principles as essential for securing AI deployments in sensitive government and enterprise environments.

Mentioned in