All sessions

Safe and Trusted AI Standards in the Age of Generative AI

Contents

Executive Summary

This panel discussion at India's AI Impact Summit 2026 examines the critical intersection of AI standardization, governance, and responsible innovation. With generative AI and agentic systems rapidly transforming economies and societies, panelists from standards bodies, industry, academia, and government emphasize that standards are the operational backbone for translating policy intent into implementable technical processes. The overarching consensus is that while foundational AI standards exist, the field faces the dual challenge of rapid technology evolution outpacing standardization efforts while managing the proliferation of AI across diverse sectors without stifling innovation.

Key Takeaways

  1. Standards Are Not Optional Infrastructure—They Are the Mechanism for Responsible AI Deployment

    • Organizations must adopt ISO/IEC 42001 and sector-specific standards not as external compliance burdens, but as operationally integrated governance frameworks. This is already happening (Axis Bank, major cloud providers), and laggards risk competitive disadvantage.
  2. India Must Lead in Standards Creation, Not Merely Adoption

    • India's proposal of benchmarking standards, multilingual considerations, and context-aware frameworks position it as a standards innovator. Failing to create Indian-specific standards risks exporting governance models unsuited to Indian conditions and risks.
  3. Continuous Assessment and Living Standards Are Non-Negotiable

    • The pace of AI evolution renders static compliance labels obsolete. Standards bodies, governments, and organizations must shift to continuous monitoring. India's governance guidelines are explicitly framed as "living documents" requiring regular updates.
  4. Accountability Frameworks Must Be Defined Before Widespread Agentic AI Deployment

    • With agentic systems executing irreversible transactions autonomously, the current lack of defined responsibility chains is a critical governance gap. This is an active work item in SC42 and must be resolved before deployment scales.
  5. Sector-Specific Conformity Schemes (ISO/IEC 42007 Model) Are Essential for High-Risk Applications

    • Healthcare, finance, and defense require more than organizational governance. ISO/IEC 42007 (under development) will enable industry consortia to define additional testing/certification requirements. Organizations in high-impact sectors should track this standard's development.

AI Impact Summit 2026, New Delhi


Key Topics Covered

  • Trust in AI Systems: Multi-dimensional nature of trust; domain-specific and stakeholder-specific variations
  • International Standardization Architecture: ISO/IEC JTC1 SC42 committee structure, India's role as founding member
  • AI Management Systems: ISO/IEC 42001 certification standard and emerging conformity assessment frameworks
  • Risks in Generative AI & Agentic AI: Hallucination, deepfakes, IP infringement, autonomous execution risks, accountability gaps
  • India's AI Governance Framework: November 2025 guidelines; pro-innovation approach balanced with user harm prevention
  • Sector-Specific Standards: Need for vertical standards beyond horizontal SC42 frameworks
  • Continuous Assessment & Certification: Moving from static compliance to continuous monitoring
  • Regulatory vs. Standards-Based Approaches: Different models (EU AI Act vs. UK standards-led approach)
  • Pre-Standardization Forums: Role of OECD, ML Commons, and other consortia in informing standards development
  • India's Unique Context: Multilingual systems, diverse datasets, need for context-specific standards
  • Labeling & Transparency: Proposal for visible markers indicating AI system compliance levels
  • Emerging Standards: ISO/IEC 42007 (sector-specific conformity assessment), benchmarking standards (proposed by India), agentic AI standards in development

Key Points & Insights

  1. Trust is Context-Dependent, Not Universal

    • Dr. Shrihar Chamalungu (IIT Tirupati) emphasizes that trust has 12+ properties (reliability, safety, etc.) but varies significantly by domain, stakeholder type, and system autonomy. Current standards remain at principled/policy level; sector-specific guidance and measurable definitions are urgently needed.
  2. Standards are the Operationalization Layer

    • Abhishek (Ministry of Electronics & IT) articulates that governance frameworks articulate intent, but standards translate that intent into "implementable technical and procedural processes." Without this translation, governance remains aspirational.
  3. AI Standardization Began 2015-2018, But Technology Now Moves in Quarters

    • Rohit (INSITS, JTC1 SC42 Chair) notes that SC42 began formal work in 2018, yet AI model capabilities change quarterly. Standards face a structural pace-of-change problem; a multi-decade commitment is required.
  4. Accountability Chains in Multi-Agent Systems Remain Undefined

    • A concrete unresolved question: If software is generated by 20+ AI agents and fails, who bears responsibility—the individual agent, the orchestrating engineer, or the system provider? Current standards do not answer this.
  5. Generative AI & Agentic AI Introduced Novel Risk Categories

    • Gaitri (TCS) distinguishes that GenAI risks (hallucination, deepfakes, IP infringement) differ qualitatively from traditional ML risks. Agentic AI compounds this by moving from generation to execution of irreversible transactions—a paradigm shift requiring new standards.
  6. ISO/IEC 42001 Adoption is Leading Globally, But Insufficient Alone

    • Tim (British Standards Institution) confirms 42001 is gaining traction and adoption by major providers (Axis Bank became first certified bank globally). However, Tim notes it addresses process/governance, not product-level safety for high-risk applications—hence the need for 42007 (sector-specific testing).
  7. Static Compliance Assessment is Obsolete

    • Dr. Shrihar argues that continuous assessment must replace one-time compliance. An AI system certified today may drift or change fundamentally within months, rendering static "compliance" labels misleading.
  8. India Must Create, Not Just Adopt, Standards

    • Dr. Shrihar proposes India move beyond adapting ISO/IEC standards to creating indigenous standards reflecting Indian context: multilingualism, diverse datasets, and billions of users in unique scenarios. This is both a sovereignty and safety imperative.
  9. Pre-Standardization Forums (OECD, ML Commons) are Critical

    • Rohit illustrates that OECD's AI Incident Monitoring Framework was ratified by OECD countries before becoming a new work item in SC42. ML Commons' benchmarks (AILUMINATE, jailbreak tests) provide the concrete testing tools that standards reference but don't create.
  10. Regulation & Standards Are Complementary, Not Substitutional

    • Different countries use different models: UK relies on standards + assurance; EU integrates standards into AI Act as mandatory conformity assessment. India's approach combines existing IT Act/BNS with governance guidelines + standards, avoiding new comprehensive AI legislation (for now).

Notable Quotes or Statements

"Trust has is a multi-dimensional and multifaceted thing. So it's very hard to sort of define that...trust essentially varies based on the domain based on the stakeholders based on the autonomy of the system."
Dr. Shrihar Chamalundu, IIT Tirupati

"Standards are the operational backbone of safe and trusted AI. Policies articulate the intent, but standards define how to translate that intent into implementable technical and procedural processes."
Abhishek, Ministry of Electronics & IT

"Technology is moving very fast and as standardization bodies we have to find ways to keep pace with it. That is the biggest challenge because...it's not a question of a year it's a question of a quarter."
Rohit, INSITS / JTC1 SC42 Chair

"The risks are like the capabilities are also exploding. Responsibilities is everybody's pie—model providers, deployers, distributors, implementers. It's everybody's responsibility to have responsible AI practices."
Gaitri, TCS

"How do I know that [an AI solution like ChatGPT] is trustworthy? How do I know that I can rely on this? At this point of time [we] do not have established mechanisms."
Dr. Shrihar Chamalundu

"We cannot compare each and every sector at the same stage or at the same level...sectoral regulators [RBI, SEBI, CCI] have come up with their own guidelines."
Abhishek, Ministry of Electronics & IT


Speakers & Organizations Mentioned

Panelists:

  • Dr. Shrihar Chamalundu – IIT Tirupati; Chair of AI-Assisted Software Development Group in JTC1 SC42
  • Rohit – INSITS (National Standards Mirror Committee of US); Chair of JTC1 SC42 (ISO/IEC Joint Technical Committee on AI Standardization)
  • Tim – British Standards Institution (BSI); AI Market Development Lead
  • Gaitri – Tata Consultancy Services (TCS); Responsible AI Framework development
  • Abhishek – Ministry of Electronics & IT (MeitY), India; AI India Mission
  • Reina G – Moderator (affiliation not explicitly stated, but appears to be from standards/policy body)

Government & Policy Bodies:

  • Government of India – Released AI Governance Framework (November 5, 2025)
  • Ministry of Electronics & IT (MeitY) – India AI Mission, toolkit development (13 projects initiated)
  • Bureau of Indian Standards (BIS) – India's national standards body, liaison with ISO/IEC
  • Principal Scientific Advisor's Office – Released governance guidelines

International Standards Bodies:

  • ISO/IEC JTC1 SC42 – Joint Technical Committee on Artificial Intelligence (established 2018)
  • ISO (International Organization for Standardization)
  • IEC (International Electrotechnical Commission)

Consortia & Forums:

  • OECD (Organization for Economic Co-operation and Development) – AI guidelines, incident monitoring framework
  • ML Commons – Benchmarking consortia (AILUMINATE, jailbreak benchmarks)

Companies/Entities:

  • Axis Bank – First bank globally to achieve ISO/IEC 42001 certification
  • Major cloud/AI providers – Referenced as 42001 certified (ChatGPT, etc., implied)

Regulatory Bodies Referenced:

  • RBI (Reserve Bank of India) – Financial sector AI guidelines
  • SEBI (Securities and Exchange Board of India) – Capital markets AI guidelines
  • CCI (Competition Commission of India) – Competitive implications of AI

Technical Concepts & Resources

Key Standards Mentioned

  • ISO/IEC 42001Management System for Artificial Intelligence; first certifiable AI management system standard (released ~1.5 years prior to talk, i.e., mid-2024). Covers organizational governance, risk management, lifecycle approach.

  • ISO/IEC 42005System Impact Assessment Framework for AI systems (mentioned as adopted by organizations)

  • ISO/IEC 42007Sector-specific conformity assessment and certification frameworks (under development; builds on ISO's conformity assessment infrastructure). Allows industry consortia to define additional testing/certification for high-risk applications.

  • ISO/IEC 42106Benchmarking of AI systems (mentioned as proposed by India; in publication stage)

  • ISO/IEC 27001Information Security Management System (referenced as established, 10-20 years use globally; data protection and cybersecurity anchor for AI standards)

  • ISO/IEC Standards for AI Vocabulary – Foundational work by SC42; ensures common language across stakeholders

  • OECD AI Incident Monitoring Framework – Ratified by OECD countries; brought into SC42 as new work item proposal (2024 Delhi plenary hosted by BIS)

Frameworks & Guidelines

  • India AI Governance Framework (Government of India, November 5, 2025)

    • Pro-innovation approach balanced with user harm prevention
    • Living document; regularly updated
    • Defines vision, approach, short-term and long-term goals
    • Emphasizes accountability, explainability, data access, compute access
    • Sector-specific regulators (RBI, SEBI, CCI) developing supplementary guidelines
    • Does NOT prescribe new AI-specific legislation; relies on existing IT Act, BNS (Bharatiya Nyaya Sanhita), and related laws
  • TCS Responsible AI Framework

    • SAFE Tenants: Secure & Reliable; Accountable; Fair & Ethical; Transparent; Identity & Privacy Protection

Technical Risks & Concepts

  • Hallucination – Confident fabrication of facts (distinct from traditional model inaccuracy)
  • Deepfakes – Synthetic media generated by GenAI
  • IP Infringement – Copyright/licensing violations in training/output
  • Agentic AI Risks – Autonomous execution of irreversible transactions (API calls, system access); distinct from generative risks
  • Model Drift – Degradation of model performance over time
  • Red Teaming – Systematic testing with adversarial prompts to identify biased, toxic, or unacceptable behavior
  • Jailbreaking – Techniques to bypass safety constraints; ML Commons published benchmark for this
  • Benchmarks:
    • AILUMINATE – ML Commons red-teaming benchmark
    • Jailbreak Benchmark – ML Commons adversarial testing benchmark

Governance & Regulatory Concepts

  • Conformity Assessment – Process of verifying compliance (referenced as 80-year-old infrastructure in aerospace/aviation; being adapted for AI via ISO/IEC 42007)
  • Notified Bodies – Designated organizations authorized to assess conformity (EU AI Act model)
  • Data Protection Compliance – DPDP Act (Data Protection Policy), privacy policy validation
  • Accountability Chain – Who is responsible across the AI value chain: developer → deployer → manager → auditor

Tools & Initiatives (Under Development)

India AI Mission Toolkits (13 projects initiated):

  • Ethical AI frameworks
  • Watermarking tools
  • Labeling tools
  • Defect detection tools
  • Machine learning governance tools

Regulatory Acts Referenced (India)

  • IT Act – Information Technology Act
  • BNS (Bharatiya Nyaya Sanhita) – New Criminal Code of India; covers harm from any source (AI or non-AI)
  • DPDP Act – Data Protection Policy Act

Additional Context

Timeline of AI Standardization

  • ~2015-2016: Early discussions on standardization needs for AI
  • 2017: ISO and IEC decision to establish dedicated AI standardization committee
  • Early 2018: JTC1 SC42 formally begins work on AI standards
  • 2024 (April): OECD AI Incident Monitoring Framework brought into SC42 via Delhi plenary (hosted by BIS)
  • ~Mid-2024 (~1.5 years prior to talk): ISO/IEC 42001 certification standard released
  • November 5, 2025: India releases AI Governance Framework
  • 2026 (April): This summit discussion occurs

Implicit Policy Tensions & Open Questions

  1. Static vs. Continuous Compliance: How can certification remain meaningful when AI systems change quarterly?
  2. India-Specific vs. Global Standards: Should India create parallel standards or configure global standards to local context?
  3. Prescriptive vs. Suggestive Governance: Should AI guidelines be enforceable law or advisory? Answer: Sector-dependent, per panelists.
  4. Transparency/Labeling: Should AI products carry visible compliance labels? India proposed benchmarking standards; outcome TBD.
  5. Agentic AI Accountability: Who is liable when multi-agent systems fail autonomously?
  6. Pre-Competitive Collaboration: How can standards bodies balance competitive interests (e.g., model providers) with public safety?

Gaps & Future Work

  • Sector-specific standards for healthcare, finance, agriculture, defense still in early stages
  • Agentic AI standards (orchestration, interoperability, safety) in active development (SC42 focus)
  • Continuous assessment mechanisms not yet standardized
  • India-specific multilingual AI standards proposed but not yet finalized
  • Labeling/transparency standards under deliberation; India's benchmarking standard approaching publication
  • Quantum computing implications for AI security mentioned as a future consideration but not elaborated
  • Incident response & remediation standards emerging (via OECD input)

End of Summary