Embedding Trust in AI Innovation: Governance and Quality Infrastructure

Executive Summary

This AI summit panel discussion addresses the critical challenge of building trustworthy AI systems within quality infrastructure and conformity assessment. Speakers from industry, accreditation bodies, government, and standards organizations present real-world applications of AI in inspection and sensory evaluation, while identifying the regulatory, accreditation, and standardization barriers that must be overcome to scale AI deployment safely and reliably across supply chains.

Key Takeaways

  1. Scalable AI deployment requires parallel development of regulatory frameworks, certification standards, and accreditation processes — technology alone is insufficient without governance infrastructure.

  2. Establish trust in AI results through standardized validation frameworks, not vendor dependence — industry needs independent assessment methods equivalent to traditional calibration and certification models.

  3. AI enhances human expertise but does not replace accountability — maintain human oversight and scientific responsibility while leveraging AI for speed, accuracy, and consistency.

  4. India's opportunity is to build AI-native quality infrastructure from the ground up rather than retrofitting legacy systems, with emphasis on accessibility, affordability, and consistency across all PIN codes and regions.

  5. Harmonization across borders is critical — joint guidelines (e.g., on the AI bill of materials, led by the US, India, and France), synchronized standards development (ISO, IEC), and shared test beds (like NIST's Dioptra) reduce fragmentation and enable global trust.

Key Topics Covered

  • AI in Conformity Assessment: Digital twin models for warehouse inspection, achieving 80% reduction in inspection time and 99% data accuracy
  • Regulatory Frameworks: Gaps in current regulations designed for manual inspection; need for holistic system-level regulatory approaches for AI-assisted inspection
  • Accreditation & Certification Challenges: Difficulty establishing trust and validation frameworks for AI-generated results; fragmentation across accreditation bodies
  • AI Applications Beyond Inspection: Electronic nose/tongue sensory evaluation systems, copilot integration for productivity, knowledge AI for document analysis
  • Digital Quality Infrastructure (DQI): India's strategic initiative to leapfrog legacy systems using AI and digital-first design
  • Cyber Security & AI Risk: Poisoning attacks, model extraction, evasion attempts; need for explainability and AI bill of materials
  • Standards Development: ISO/IEC 42001, ISO/IEC 23894 (risk management), ISO/IEC 42006 (certification bodies), the ISO/IEC 42119 series (testing and validation), and emerging standards for verification and Indian-language-centric AI
  • Quality Infrastructure Strategy: India's approach to making quality accessible, affordable, consistent, and democratic while achieving global competitiveness

Key Points & Insights

  1. Real-world AI Success in Inspection: One implementation reduced inspection time from 4-5 days to hours with 99% data accuracy (vs. 80-90% for manual methods) and created safer working conditions, demonstrating immediate operational value.

  2. Regulatory Gap is a Critical Blocker: Current regulations are designed for manual inspection; AI-assisted inspection lacks a "holistic or system view" from regulators, making reports legally challengeable and hindering scalability across jurisdictions.

  3. Trust Framework Needed: Industry needs a standardized way to validate AI results independent of AI vendors. The question of "How do I trust results from AI?" lacks a clear answer comparable to traditional calibration certification frameworks.

  4. Accreditation Body Role Expansion: Accreditation bodies (which verify the verifiers) must evolve to assess how conformity assessment bodies (CABs) integrate AI while maintaining competence, impartiality, and technical know-how.

  5. Synthetic Data & Privacy Trade-offs: Because data protection regulations (such as India's DPDP Act) restrict the use of real data, organizations must rely on synthetic data, creating a need for standardized synthetic-data validation practices and common platforms.

  6. AI Doesn't Replace Human Judgment: Even in advanced applications (e.g., electronic sensory evaluation), AI enhances but does not replace human expertise. Scientific accountability remains with humans.

  7. Cyber Security Specific to AI Systems: Beyond classical vulnerabilities, AI systems face poisoning attacks, model extraction, and evasion attempts. Explainability and auditability are essential for root cause analysis and incident response.

  8. India's Leapfrog Strategy: Rather than improving incrementally or copying legacy approaches from Germany or Japan, India aims to move directly to AI-native, digital-ready quality infrastructure, mirroring UPI's leapfrog in payments.

  9. Language & Bias Concerns: Most AI standards are English-centric; India needs testing frameworks for Indic language AI models and methods to avoid bias in multilingual contexts (22 official languages).

  10. Interconnected Governance Ecosystem: Standards, accreditation, regulation, and policy must be coordinated globally; different regions (EU, UK, US) are taking different approaches (prescriptive regulation vs. standards-based assurance), but all recognize the need for linked governance.


Notable Quotes or Statements

  • On regulatory gaps: "Even [if] we produce these reports, since there is a missing regulatory framework, the legality of that report is challengeable."

  • On trust frameworks: "How do I trust the results coming out of AI? I need a way to basically establish a language, a framework, where I can say that the results from AI [are] validated."

  • On AI vs. human judgment: "Does it replace humans? No. It only helps to enhance the understanding and judgment... but the scientific accountability remains with humans."

  • On India's strategy: "We need a system which is going to leapfrog what happened in the data paradigm, where we move directly to 3G/4G... like we did with UPI, why not in quality infrastructure?"

  • On transparency: "It should not be a black box... the AI bill of material[s]... help[s] in your supply chain and maintaining the visibility [of] the ingredients... in your AI system."

  • On democratizing quality: "Quality infrastructure is going to accelerate our Viksit Bharat journey... Quality is not seen as a barrier. It is available in a very democratic way."


Speakers & Organizations Mentioned

  • Quality Council of India (QCI) — Leading DQI (Digital Quality Infrastructure) initiative
  • Bureau of Indian Standards (BIS) — Developing AI-related standards for Indian context
  • British Standards Institution (BSI) — International standards and certification expertise
  • Ministry of Electronics and Information Technology (India) — Cyber security and AI assurance policy
  • NIST (USA) — Developing risk management frameworks and test beds (Dioptra)
  • ISO/IEC — International standards bodies (ISO/IEC 42001, ISO/IEC 23894, ISO/IEC 42006, ISO/IEC 42119 series)
  • EU Standardization Bodies (CEN/CENELEC) — Developing standards aligned with EU AI Act
  • Microsoft — Referenced as AI engine provider; copilot integration examples
  • Unnamed food/consumer products testing company — Electronic nose/sensory evaluation case study
  • Inspection/warehouse services company — Digital twin inspection case study

Technical Concepts & Resources

Standards Referenced

  • ISO/IEC 42001: Management systems for AI; foundational block for governance
  • ISO/IEC 23894: Guidance on AI risk management
  • ISO/IEC 42006: Requirements for bodies certifying AI management systems, supporting consistent certification quality globally
  • ISO/IEC 42119 series (multi-part): AI testing and validation standards
    • Part 2: AI testing framework (the AI analogue of ISO/IEC/IEEE 29119 for software)
    • Part 3: Verification and validation of AI systems (India-led)
    • Planned parts: red teaming, prompt testing, hallucination metrics
  • ISO/IEC/IEEE 29119: Software testing standards (the traditional-software analogue)
  • NIST AI Risk Management Framework (AI RMF): US government approach to AI risk governance
  • NIST Dioptra: Test bed for evaluating AI systems against cyber vulnerabilities
  • EU AI Act: First comprehensive AI regulation; linked to CEN/CENELEC standards
  • NIS Regulations (UK): Network and Information Systems risk-management requirements referenced in UK government regulation

Technical Methodologies & Approaches

  • Digital Twin Models: 3D reconstruction for volumetric calculation and inspection
  • Electronic Nose & Electronic Tongue: Chromatographic data analysis using AI to identify volatile compounds and sensory profiles
  • AI Bill of Materials (AI BoM): Supply chain transparency for AI ingredients (joint guidelines by US, India, France)
  • Smart Scan & Document Processing: Keyless accreditation systems using OCR and intelligent data extraction
  • Voice-Enabled Agents: Natural language interfaces for audit guidance and best practice retrieval
  • Smart APIs: Seamless integration between institutional and national systems
  • Synthetic Data Validation: Methods for validating synthetic datasets when privacy regulations restrict real data use
  • Red Teaming & Prompt Testing: Security validation approaches for generative AI systems
  • Remote Audits: Digital assessment methods for AI systems (not requiring on-site factory visits)
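The AI bill of materials in the list above lends itself to a simple structured record. The sketch below is purely illustrative: the field names, model identifiers, and data sources are assumptions for demonstration, not a schema published by the panel or by the US/India/France joint guidelines.

```python
import json

# Hypothetical AI bill-of-materials record. All field names and values
# are illustrative assumptions, not a published AI BoM schema.
ai_bom = {
    "model": "defect-detector-v2",             # assumed system identifier
    "base_model": "resnet50",                  # upstream model it builds on
    "training_data": [
        {"source": "warehouse-scans-2024", "license": "proprietary"},
        {"source": "synthetic-augmentation", "license": "internal"},
    ],
    "software_dependencies": ["opencv", "onnxruntime"],
    "evaluations": ["red teaming", "bias testing"],
}

# Serializing the record makes it shareable across a supply chain,
# giving downstream parties visibility into the AI system's ingredients.
print(json.dumps(ai_bom, indent=2))
```

In practice, such a record would travel with the model artifact so that conformity assessment bodies can audit provenance without treating the system as a black box.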

Key Metrics & Data Points

  • 80% reduction in inspection time (4-5 days → hours)
  • 99% data accuracy (vs. 80-90% for manual methods)
  • 275,000 odorant compounds in electronic nose database
  • 4 AI agents currently developed as part of India's DQI
  • 22 official languages in India requiring multilingual AI validation

Cybersecurity Dimensions for AI

  1. Poisoning Attacks: Training data manipulation
  2. Model Extraction: Unauthorized access to model weights/configuration
  3. Evasion Attempts: Adversarial inputs to bypass model decisions
  4. Explainability/Auditability: Transparency for incident investigation and root cause analysis
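To make the poisoning category concrete, here is a minimal toy sketch (not from the panel, and deliberately simplistic): a nearest-centroid classifier over one-dimensional sensor readings, where relabeling a single training sample shifts a class centroid enough to flip a borderline decision.

```python
def centroid(points):
    """Mean of a list of 1-D values."""
    return sum(points) / len(points)

def predict(train, x):
    """Assign x to the class whose centroid is nearest."""
    by_class = {}
    for value, label in train:
        by_class.setdefault(label, []).append(value)
    return min(by_class, key=lambda lbl: abs(centroid(by_class[lbl]) - x))

# Clean training data: "pass" readings cluster low, "fail" readings high.
clean = [(1.0, "pass"), (2.0, "pass"), (6.0, "fail"), (7.0, "fail")]
print(predict(clean, 3.9))  # "pass" (centroid 1.5 is nearer than 6.5)

# Poisoning: an attacker relabels one "pass" sample as "fail", dragging
# the centroids so the same borderline input is now rejected.
poisoned = [(1.0, "pass"), (2.0, "fail"), (6.0, "fail"), (7.0, "fail")]
print(predict(poisoned, 3.9))  # "fail" (pass centroid 1.0 vs fail 5.0)
```

Real attacks target far larger models, but the mechanism is the same, which is why training-data provenance (e.g., an AI bill of materials) and auditability matter for incident response and root cause analysis.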

Note: This transcript contains some repetitive sections and technical artifacts from the audio capture, which have been normalized in this summary. The core governance and technical messages have been preserved with precision.