AI and Governance: Finding the Right Balance for Innovation

Executive Summary

This panel discussion examines AI assurance as a critical bridge between ethical AI principles and practical governance frameworks, positioning it as a market-driven business imperative rather than a regulatory burden. Speakers from Microsoft, Holistic AI, BSI, and the Alan Turing Institute discuss how organizations across financial services, healthcare, and other high-consequence sectors are operationalizing AI assurance through standards like ISO 42001, and explore strategic collaboration opportunities between the UK and India to harmonize AI governance approaches globally.

Key Takeaways

  1. Assurance is a Competitive and Commercial Advantage: Certification against ISO 42001 and other standards is increasingly a procurement requirement and customer expectation. Organizations without evidence of governance will lose contracts and market access, particularly in regulated sectors.

  2. Governance is the Accelerator, Not the Brake: When properly designed as a risk-based, proportionate process (not a blanket checklist), AI governance enables faster, more confident deployment by identifying and mitigating risks early and building stakeholder trust.

  3. Start with Inventory, Risk Appetite, and Policy Framework: Organizations new to AI assurance should: (a) identify all AI use cases in operation, (b) define organizational risk appetite and acceptable consequences, (c) align procurement and supply chain policies to a minimum assurance baseline, and (d) apply graduated scrutiny accordingly.

  4. Real-World Monitoring and Model Lifecycle Management Are Critical but Underdeveloped: The field has made progress on pre-deployment testing, but post-deployment monitoring, model drift detection, and retirement protocols remain immature. This is a high-priority gap, especially for high-stakes applications.

  5. UK-India Collaboration on Standards Harmonization and Shared Sandboxes Could Become a Global Model: Joint work on supply chain responsibility mapping, real-world testing infrastructure, and crosswalks between international and national standards could benefit both economies and serve as a template for other nations navigating AI governance.
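The inventory-and-triage workflow in takeaway 3 can be sketched in Python. This is a hypothetical illustration, not a method described by the panel: the tier names, score thresholds, and example use cases are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical scrutiny tiers; real risk appetites are organization-specific.
TIERS = {
    "high": "intensive testing, documentation, and continuous monitoring",
    "medium": "standard testing and periodic reassessment",
    "low": "lighter-touch oversight",
}

@dataclass
class AIUseCase:
    name: str
    impact: int      # 1 (negligible) .. 5 (severe consequence)
    likelihood: int  # 1 (rare) .. 5 (frequent)

def triage(use_case: AIUseCase) -> str:
    """Map a use case to a scrutiny tier via a simple impact x likelihood score."""
    score = use_case.impact * use_case.likelihood
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Step (a): inventory all AI use cases in operation (illustrative entries).
inventory = [
    AIUseCase("email autocomplete", impact=1, likelihood=4),
    AIUseCase("credit decisioning", impact=5, likelihood=4),
    AIUseCase("internal search", impact=2, likelihood=3),
]

# Step (d): apply graduated scrutiny according to risk.
for uc in inventory:
    tier = triage(uc)
    print(f"{uc.name}: {tier} risk -> {TIERS[tier]}")
```

In practice the thresholds would come from the risk-appetite exercise in step (b), and the baseline would be written into procurement policy per step (c).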

Key Topics Covered

  • AI Assurance in Practice: Moving from high-level ethical principles to quantifiable, auditable governance
  • Standards & Certification Landscape: ISO 42001, ISO 27001, NIST AI Risk Management Framework (v2), EU AI Act alignment
  • Sector Maturity Variations: Financial services and healthcare leading; emerging sectors lagging
  • Agentic AI Risks: Data exfiltration, trust gaps, and security challenges with autonomous AI agents
  • Business Case for Assurance: Procurement demands, supply chain trust, customer confidence, regulatory compliance
  • Skills & Organizational Capacity: Bridging gaps between AI builders and governance teams
  • Model Lifecycle Management: Monitoring, drift detection, retirement protocols
  • UK-India Collaboration Opportunities: Harmonized standards, shared skills, real-world testing at scale
  • Policy Frameworks: Procurement-based requirements, risk-based prioritization, sandboxes, and regulatory innovation
  • Emerging Challenges: Quantum computing integration, rapid technology evolution vs. slow regulatory cycles

Key Points & Insights

  1. Assurance as a Market Driver, Not Just Regulation: The primary demand for AI assurance is coming from supply chains and procurement processes, not regulatory mandates. Large enterprises and regulated financial institutions require certification and evidence of governance as a condition of partnership and customer trust.

  2. Operationalization is the Critical Gap: The majority of organizations have high-level ethical principles but struggle to translate them into measurable, repeatable, auditable practices. Successful assurance requires mapping abstract principles to specific testing requirements, documentation standards, and monitoring protocols.

  3. Sector Maturity Correlates with Regulatory History and Consequence: Financial services and healthcare lead in AI assurance adoption due to existing compliance infrastructure (post-2008 regulations), high reputational/financial stakes, and liability exposure. Industries without similar regulatory precedent lack both institutional knowledge and perceived urgency.

  4. ISO 42001 Certification is Rapidly Scaling: India is second globally in accredited certifications (after the US); the UK is fourth. Adoption is driven by market pull (customers demanding it) rather than regulatory push, with Axis Bank becoming the first bank globally to achieve certification.

  5. Risk-Based Governance is Essential for Scalability: Organizations cannot apply uniform scrutiny to all AI systems. Successful governance frameworks identify high-risk, high-impact use cases for intensive testing and low-risk cases for lighter-touch oversight, enabling innovation while maintaining control.

  6. Assurance by Design, Not Post-Hoc: The most effective implementations embed assurance considerations from the outset—data collection, testing protocols, documentation—rather than treating it as a compliance checkbox at the end of development.

  7. Agentic AI Introduces New Attack Surfaces: Autonomous and semi-autonomous agents that communicate with other agents and external services create novel security and privacy risks (data exfiltration, trust gaps) that traditional ML assurance frameworks may not fully address. This requires updated governance protocols.

  8. Model Deterioration and Continuous Monitoring Are Non-Negotiable: Assurance is not a one-time certification event. Systems require ongoing monitoring, periodic reassessment (frequency depends on use case risk), and defined retirement protocols to account for model drift, data distribution shifts, and evolving threat landscapes.

  9. India's Ecosystem Uniquely Positions It for Supply Chain Clarity and Real-World Testing: India's depth in tech services, linguistic/cultural diversity, and rapid AI deployment create opportunities to define who does what across complex supply chains (model developers vs. deployers vs. integrators) and to pioneer efficient, large-scale real-world post-deployment monitoring.

  10. Harmonization Between UK and India Standards Could Reduce Complexity: With diverging regulatory directions globally (EU AI Act, US approaches, etc.), aligned standards and crosswalks between international frameworks and Indian law could accelerate adoption, facilitate trade, and prevent companies from navigating conflicting requirements.
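The data-exfiltration risk raised in point 7 can be made concrete with a minimal egress check for outbound agent messages. This is a sketch under stated assumptions, not a protocol from the panel: the allowlisted destinations and secret patterns are invented for illustration.

```python
import re

# Hypothetical egress policy for agent tool calls; destinations and patterns
# are illustrative, not drawn from the panel discussion.
ALLOWED_DESTINATIONS = {"calendar.internal", "crm.internal"}

# Naive patterns for material that should never leave the trust boundary.
SECRET_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                    # card-number-like digit runs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # inline API keys
]

def check_outbound(destination: str, payload: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a single outbound agent message."""
    if destination not in ALLOWED_DESTINATIONS:
        return False, f"destination {destination!r} not on allowlist"
    for pattern in SECRET_PATTERNS:
        if pattern.search(payload):
            return False, "payload matches a secret pattern"
    return True, "ok"

print(check_outbound("crm.internal", "schedule a demo"))       # allowed
print(check_outbound("pastebin.example", "meeting notes"))     # blocked: destination
print(check_outbound("crm.internal", "api_key=abc123secret"))  # blocked: payload
```

A real agentic deployment would need far more than pattern matching (identity, provenance, human-in-the-loop approval for sensitive actions), but the point stands: traditional ML assurance frameworks have no equivalent of a per-message egress check.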


Notable Quotes or Statements

  • Carsten Maple (Alan Turing Institute): "Democratization of AI means it's in the hands of everybody. That brings problems as well as some great things... We need to start thinking about how we govern and assure AI." (On why AI assurance matters as adoption scales)

  • Raj Patel (Holistic AI): "The companies that we see best moving fastest with AI are not the ones with the most sophisticated use case. It's the ones that have the most sophisticated or most tangible AI governance workflow." (On governance as an enabler of innovation)

  • Natasha Kempton (Microsoft): "Unless you have the receipts in the form of certification against an international standard, it's like saying you're a car manufacturing company with safety engineers, but no one actually proves that the car has brakes and airbags." (On why assurance certification matters to customers)

  • Sue Daley (TechUK): "We've been talking at TechUK about AI assurance for... 10 years... and now it actually feels like this is actually happening, right? It's actually having real world impact." (On the shift from theoretical to practical)

  • Raj Patel (Holistic AI): "AI governance is the brakes in that [car]. A car can only go quickly because it has brakes." (On governance as a prerequisite for speed and safety)

  • Tim McGarr (BSI): "The pace of change in AI is frankly bonkers... you need something like [sandboxes] to move regulation at the pace required." (On regulatory innovation and agility)


Speakers & Organizations Mentioned

  • Moderator: Sue Daley (TechUK)
  • Panelist, Keynote: Carsten Maple (Alan Turing Institute; University of Warwick National Hub for Edge AI)
  • Panelist: Natasha Kempton (Microsoft; VP, Chief Responsible AI Officer)
  • Panelist: Raj Patel (Holistic AI; VP, AI Transformation)
  • Panelist: Tim McGarr (BSI – British Standards Institution)
  • Event Co-organizer: Tess Buckley (TechUK)
  • Event Co-organizer: Lawrence (TechUK)

Key Institutions Referenced:

  • TechUK (UK tech trade body)
  • Alan Turing Institute (National Institute for Data Science and AI, UK)
  • BSI (British Standards Institution)
  • UCL (University College London)
  • Microsoft
  • Holistic AI (AI governance SaaS, spun out of UCL)
  • ML Commons
  • Axis Bank (first bank globally with ISO 42001 certification)
  • JP Morgan, Unilever, GSK, GE Healthcare, Salesforce, Infosys

Regulatory/Policy Bodies Referenced:

  • UK Government (AI Assurance Roadmap; AI Growth Labs)
  • EU (EU AI Act; regulatory approach)
  • Singapore
  • Indian Government (AI Governance Guidelines; AI Impact Summit)
  • FDA (sandbox origins)

Technical Concepts & Resources

Standards & Frameworks

  • ISO 42001: Management system standard for AI governance; analogous to ISO 27001 (security); the most rapidly adopted AI standard globally. Certifies an organization's approach, not specific products.
  • ISO 27001: Information security management standard; provides a precedent for 42001's structure and adoption patterns.
  • NIST AI Risk Management Framework (v2): Guidance on assessing AI systems for safety, security/resilience, validity, appropriateness of assessment, and auditability.
  • EU AI Act: Regulatory framework driving notified body requirements and sandbox structures; influences global harmonization.
  • ISO 4219: Referenced as an emerging standard (specific scope not detailed in the transcript).
  • Prior-generation ISO software standards: Long-established standards predating AI-specific guidance.
  • MLCommons AILuminate: Global benchmark effort providing practical de facto standards and testing protocols.

Concepts & Methodologies

  • AI Assurance: Process of collecting justified evidence that AI systems are trustworthy, safe, secure, fair, and compliant with applicable standards and regulations.
  • Agentic AI: Autonomous or semi-autonomous agents capable of planning, using tools, and communicating with other agents (e.g., holiday planning agents, customer service bots).
  • Responsible AI Principles: High-level ethical commitments (fairness, reliability, safety, privacy, security, inclusivity, accountability, transparency) that must be operationalized into measurable controls.
  • Model Context Protocol (MCP): Standard for connecting AI models to external tools and data sources; simpler than full agent-to-agent (A2A) architectures.
  • Agent-to-Agent (A2A) Communication: More powerful inter-agent communication protocols emerging as agentic AI scales.
  • Data Exfiltration Risk: Privacy and security concern specific to autonomous agents that communicate across systems.
  • Model Drift / Model Deterioration: Decline in model performance over time due to data distribution shifts or changing real-world conditions; requires ongoing monitoring.
  • Crosswalks: Mapping documents showing how compliance with international standards aligns with specific national regulations/laws.
  • Risk-Based Governance: Proportionate allocation of governance effort based on use case risk level (high-risk → intensive testing; low-risk → lighter-touch).
  • Assurance by Design: Embedding assurance considerations (data governance, testing protocols, documentation) from initial development rather than post-hoc.
  • Procurement-Based Standards: Using procurement policies and supply chain requirements to mandate minimum assurance baselines.
  • Regulatory Sandboxes: Controlled environments where organizations can test new AI applications with regulatory oversight before full market release.
  • Notified Body: EU AI Act designation for third-party organizations authorized to certify compliance (analogous to medical device certification).
  • Continuous Monitoring & Reassessment: Ongoing surveillance of deployed models with periodic re-validation cycles (frequency depends on risk profile; e.g., email autocomplete ~12 months; high-stakes systems ~3 months or continuous).
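The model-drift and continuous-monitoring concepts above can be illustrated with a minimal Population Stability Index (PSI) check. This is a common drift heuristic, not a method specified by the panel; the bucketed distributions and the 0.25 alert threshold are illustrative assumptions.

```python
import math

def psi(expected: list[float], observed: list[float]) -> float:
    """Population Stability Index between two bucketed distributions.

    Inputs are per-bucket proportions that each sum to 1. A small floor
    avoids log(0) for empty buckets.
    """
    eps = 1e-6
    total = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)
        total += (o - e) * math.log(o / e)
    return total

# Reference (training-time) vs current production score distribution,
# already bucketed into five bins (illustrative numbers).
baseline = [0.20, 0.25, 0.30, 0.15, 0.10]
current  = [0.10, 0.15, 0.25, 0.25, 0.25]

score = psi(baseline, current)
# Common rule of thumb: PSI > 0.25 suggests significant distribution shift
# and should trigger reassessment ahead of the scheduled cycle.
if score > 0.25:
    print(f"PSI={score:.3f}: significant drift, reassess now")
else:
    print(f"PSI={score:.3f}: within tolerance")
```

A check like this would run between the periodic re-validation cycles noted above, so that a high-stakes system on a 3-month cadence can still be pulled in early when its inputs shift.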

AI Models & Technologies Referenced

  • OpenAI's o1, Claude (Anthropic), Grok (xAI): Referenced as examples of rapid capability scaling and rebranding ("biggest rebranding exercise in one week").
  • M365 Copilot (Microsoft): Flagship generative AI product used to illustrate real-world assurance practices and customer procurement demands.
  • GPT: Mentioned in context of supply chain scenarios (e.g., outputs from GPT integrated by Infosys).

Key Research & Publications

  • AI Governance Paper (2021): Academic publication in Nature Machine Intelligence establishing governance principles; cited as foundational framework for current assurance efforts.

Guidance & Resources

  • BSI Tools: Free guides, self-assessment tools, and training materials for 42001 and EU AI Act compliance.
  • TechUK Reports: Research on AI assurance across specific markets and industries (particularly 2024 report on AI assurance).
  • UK Pavilion: Exhibition space (Hall 14) with additional resources on AI governance.
  • Indian AI Governance Guidelines: Published at India AI Impact Summit; includes recommendations for governance groups and cross-functional expert committees.

Emerging Research Areas

  • Responsible Quantum Computing: Work underway with the UK's National Quantum Computing Centre to apply AI assurance principles to quantum systems.
  • Real-World Testing at Scale: Post-deployment monitoring and continuous reassessment; identified as underdeveloped and critical for high-consequence applications.
  • Supply Chain Responsibility Mapping: Defining role boundaries between model developers, integrators, deployers, and monitors in complex multi-stakeholder systems.

This summary reflects the transcript as provided and does not include claims beyond those explicitly stated by speakers.