
Building Sovereign and Responsible AI Beyond Proof of Concepts


Executive Summary

This talk addresses why 70% of AI pilots fail to reach production, arguing that successful AI deployment requires moving beyond technical functionality to address trust, sovereignty, sustainability, responsibility, and measurable value. The speakers present "AI in 4D"—a framework analyzing AI systems through four critical lenses—and demonstrate why ignoring any dimension creates systemic risks and failure modes.

Key Takeaways

  1. AI in 4D is Non-Negotiable for Production: Treat Sovereignty, Green/Sustainable, Responsible, and Valuable as four distinct but interdependent requirements. Failure in any one dimension derails deployment; success requires all four working together.

  2. Shift from Pilot Mentality to Production Readiness: Stop asking "does it work technically?" and start asking "Who controls it? Who benefits? What are the environmental costs? What could go wrong?" Governance, sustainability, and ethics assessments must precede, not follow, technical prototyping.

  3. Trade-off Decisions Must Be Explicit and Documented: Organizations cannot optimize all dimensions equally. Create a framework listing potential harms, map them to principles, and deliberately prioritize what matters most to your context. Document why you made those choices.

  4. Government Policy is a Public Good, Not a Barrier: Private companies cannot unilaterally solve trust, bias, or sovereignty issues. Governments should establish baseline standards (risk-based regulation like EU AI Act), enable data sharing infrastructure (smart data, open APIs), and fund sovereign AI models (as Serbia and France are doing).

  5. Measure or Lose Credibility: Define KPIs for sustainability (carbon per inference), fairness (demographic parity metrics), user adoption, and business value before deployment. Vague commitments to "responsibility" are unactionable; quantified targets are.
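
As a concrete illustration of takeaway 5, the sketch below shows one way a vague fairness commitment becomes a computable, pre-deployment KPI. It is not from the talk: the group labels, sample data, and any thresholds you would gate on are illustrative assumptions.

```python
# Minimal sketch (not from the talk): turning "be fair" into computable KPIs.
# Group labels and sample data are illustrative assumptions.
from collections import defaultdict

def rate(pairs, predicate):
    """Fraction of (y_true, y_pred) pairs satisfying predicate; 0.0 if empty."""
    if not pairs:
        return 0.0
    return sum(predicate(y, p) for y, p in pairs) / len(pairs)

def fairness_kpis(records):
    """records: iterable of (group, y_true, y_pred) with binary labels and predictions."""
    by_group = defaultdict(list)
    for group, y_true, y_pred in records:
        by_group[group].append((y_true, y_pred))

    # Demographic parity: positive-prediction rate per group.
    positive_rate = {g: rate(v, lambda y, p: p == 1) for g, v in by_group.items()}
    # False positive rate per group, computed over true negatives only.
    fpr = {g: rate([(y, p) for y, p in v if y == 0], lambda y, p: p == 1)
           for g, v in by_group.items()}

    return {
        "demographic_parity_gap": max(positive_rate.values()) - min(positive_rate.values()),
        "false_positive_rate_gap": max(fpr.values()) - min(fpr.values()),
    }

# Hypothetical usage: gate deployment on a documented threshold per gap.
kpis = fairness_kpis([("A", 1, 1), ("A", 0, 0), ("B", 1, 0), ("B", 0, 1)])
print(kpis)  # e.g. {'demographic_parity_gap': 0.0, 'false_positive_rate_gap': 1.0}
```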

Key Topics Covered

  • The Proof-of-Concept Problem: Only 30% of global AI projects reach production; pilots focus narrowly on functionality while ignoring broader systemic factors
  • Trust as a Foundation: AI incidents are growing exponentially (600 documented incidents in December 2025 alone per OECD AI Observatory), undermining public confidence
  • The 4D Framework: Four interdependent dimensions—Sovereignty, Green (Sustainability), Responsible AI, and Valuable AI—required for scalable deployment
  • Sovereignty in AI: Control over data, models, security infrastructure, and governance; not limited to national borders but extends to organizational and individual levels
  • Sustainable AI ("Green AI"): Environmental and economic costs must be addressed together; systems that cannot scale sustainably cannot scale at all
  • Responsible AI: Ethics, governance, bias detection, fairness, human-centered design, and security as prerequisites for trust
  • Valuable AI: Systems must deliver measurable real-world benefits aligned with stakeholder needs; financial metrics alone are insufficient
  • Trade-off Awareness: Organizations must explicitly identify and justify prioritization choices among competing principles
  • Regulatory Landscape: Varying approaches globally (EU AI Act with risk-based tiers, UK regulation on third-party suppliers, India's emerging data protection framework)
  • Policy & Governance Gaps: Private sector struggle with uncertainty; government frameworks needed to establish baseline safety and ethical standards

Key Points & Insights

  1. The 70% Failure Rate Stems from Narrow Scope: Organizations test "does it function?" but ignore governance, alignment with societal values, sustainability costs, and actual user adoption; the speakers identify six critical failure modes beyond technical performance.

  2. Trust Erosion Through Real Harms: OECD AI Observatory documents exponential growth in AI-related harms (voice cloning scams, AI-generated books with visible prompts, biased facial recognition at borders). These incidents are not theoretical risks but documented evidence of deployment failures.

  3. Sovereignty is About Control, Not Just Geography: Not merely hosting data domestically; encompasses who accesses data, who can update models, auditability, and independence from foreign government leverage. A justice system unable to audit or control model updates cannot reliably serve citizens.

  4. Economic and Environmental Costs Are Intertwined: Systems requiring unsustainable power/water consumption become financially impossible to deploy. A public health AI whose compute demands exceeded the available power supply failed not on technical grounds but on resource viability.

  5. Value Must Be Defined Relative to Context: UAE's goal of 12 million people performing work equivalent to 120 million is contextually appropriate; replicating this in India (with high unemployment) would create social harm rather than value. Value is not universal.

  6. Responsible AI Enables Rather Than Constrains: Human-centered design, bias detection, and fairness frameworks create usability and trust. Traffic optimization that diverted congestion to lower-income neighborhoods was technically successful but socially harmful and ultimately failed due to community backlash.

  7. No Single Dimension Substitutes for Others: Audience survey revealed most believe responsible AI and value are most critical, but sovereignty and sustainability cannot be deprioritized. All four interact; trade-offs must be explicit and justified.

  8. Regulatory Fragmentation Creates Private Sector Uncertainty: The EU AI Act provides risk-based guidance; the UK focuses on critical infrastructure suppliers; India's 2025 Digital Personal Data Protection (DPDP) framework takes effect October 2026 with an 18–24 month implementation window. The absence of clear frameworks in some jurisdictions leaves companies vulnerable.

  9. Platform Approaches Beat Custom Solutions: Vendor lock-in (PepsiCo demanding exclusivity from an agentic vending machine provider) mirrors historical IP hoarding that prevented scaling. UPI (India's Unified Payments Interface) succeeded by building shared infrastructure; enterprise AI often remains siloed.

  10. Measurement Translates Principles to Action: Organizations must convert abstract commitments ("we will be ethical") into measurable KPIs for sustainability, user outcomes, fairness metrics, and security. Without quantification, progress is unverifiable and funding is difficult to justify.
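
To make the sustainability side of point 10 concrete, the sketch below shows the standard way a "carbon per inference" KPI is derived: measured energy per request multiplied by grid carbon intensity, with a data-centre overhead factor. It is not from the talk, and every number in it is a placeholder assumption.

```python
# Minimal sketch (not from the talk): "grams of CO2e per inference" as
# energy per request x overhead (PUE) x grid carbon intensity.
# All numbers are placeholder assumptions, not figures from the session.

def co2e_per_inference(energy_kwh_per_request: float,
                       grid_intensity_g_per_kwh: float,
                       pue: float = 1.2) -> float:
    """Estimate grams of CO2e emitted per inference request.

    energy_kwh_per_request: measured IT energy per request, in kWh
    grid_intensity_g_per_kwh: grid carbon intensity, in gCO2e/kWh
    pue: power usage effectiveness of the data centre (overhead multiplier)
    """
    return energy_kwh_per_request * pue * grid_intensity_g_per_kwh

# Hypothetical example: 0.0004 kWh per request on a 250 gCO2e/kWh grid.
print(f"{co2e_per_inference(0.0004, 250):.3f} gCO2e per inference")  # 0.120
```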

Notable Quotes or Statements

  • "Only 30% of all AI projects actually go into production." — Highlights the scale of the proof-of-concept problem and motivates the entire framework discussion.

  • "If a AI system can't scale sustainably, then it won't scale at all." — (Omid) Encapsulates the integration of environmental and economic viability; sustainability is not a nice-to-have.

  • "600 different incidents in the world [in December 2025 alone]... 600 different times that people were harmed." — Grounds the abstract risk discussion in documented human impact via OECD AI Observatory data.

  • "Trust is lost in terms of sovereignty, the likelihood is that the system will fail." — Emphasizes that sovereignty failures are not just governance issues but existential deployment risks.

  • "You have to think really carefully about what the value is of the system itself. Because without thinking about that you end up building a system that you cannot measure the value of and then ultimately what it would do is that it would just become a dead weight." — (Omid) Captures the circular logic of undefined value leading to unmeasurable outcomes and project death.

  • "Why would you build AI to replace people's jobs in India when there's already a lot of people [unemployed]?" — (Omid) Illustrates how value must be contextually grounded, not universally prescriptive.

  • "If you don't have an understanding of [where data goes and who accesses it], the likelihood of you trusting that system is very low and therefore it would be susceptible to failure." — (Omid) Links data transparency directly to system resilience.

Speakers & Organizations Mentioned

  • Theresa Wise (KosS?) — Co-presenter; focuses on responsible AI frameworks and deployment to government sectors in UK, Canada, US
  • Omid — Co-presenter; emphasizes sovereignty, sustainability, and human-centered design; appears to work in government AI contexts
  • OECD AI Observatory — Source of harm/incident monitoring data
  • EU — EU AI Act (risk-based regulation framework)
  • UK Government — Emerging regulation on third-party suppliers critical to infrastructure
  • Government of India — Digital Personal Data Protection (DPDP) law (effective October 2026)
  • Government of Serbia — Building sovereign LLMs domestically
  • Government of France — Mistral AI (sovereign LLM initiative)
  • Government of UAE — AI ambition framing (12M → 120M equivalent productivity)
  • Prime Minister Modi — Referenced statement on human-centered AI design
  • Microsoft — Building massive data centers with power consumption equivalent to Los Angeles
  • Companies Mentioned: PepsiCo, Coca-Cola, Amazon, Microsoft, Google, Infosys, Accenture, Kynos, UPI ecosystem
  • Historical Reference: Silicon Valley Bank (SVB) — discussed venture capital concentration in AI

Technical Concepts & Resources

  • AI in 4D Framework: Four-dimensional analysis lens comprising:

    • Sovereignty (control, security, model provenance, auditability)
    • Green/Sustainable AI (carbon cost, water usage, power consumption, cost-efficiency linkage)
    • Responsible AI (ethics, bias, fairness, human-centered design, governance, security)
    • Valuable AI (measurable real-world benefits, stakeholder alignment, long-term societal impact)
  • OECD AI Incidents Monitor (AIM): Public database on the OECD.AI Policy Observatory tracking AI incidents, harms, and hazards globally; cited 600 incidents in December 2025

  • EU AI Act: Risk-based regulatory framework with four tiers:

    • Low-risk (back-office automation, minimal requirements)
    • Medium-risk (limited-risk tier; transparency obligations such as disclosing AI interaction)
    • High-risk (critical infrastructure, direct people impact, extensive transparency requirements)
    • Prohibited use cases (explicitly forbidden applications)
  • AI Models & Systems Referenced:

    • ChatGPT and equivalent generative AI tools
    • Large Language Models (sovereign LLMs: Mistral [France], Serbian models in development)
    • Agentic AI / AI agents (autonomous systems for vending machines, traffic management, benefits eligibility, complaint triage, radiology analysis)
    • Facial recognition systems (bias issues documented at borders)
  • Regulatory & Policy Frameworks:

    • India's Digital Personal Data Protection (DPDP) law (effective October 2026; 18–24 month implementation phase)
    • UK regulation on third-party suppliers (emerging)
    • EU AI Act (in force)
  • Responsible AI Components:

    • Bias detection and fairness metrics
    • Explainability / interpretability
    • Audit trails and model transparency
    • Human-centered design principles
    • Security and data access controls
    • Governance structures (accountability, escalation processes, risk management)
  • Metrics & Measurement:

    • KPIs for sustainability (carbon per inference, water usage)
    • Fairness metrics (demographic parity, false positive/negative rates by group)
    • User adoption and accessibility measures
    • Business value metrics (time saved, error reduction), contextualized to stakeholder needs
  • Data Governance:

    • Smart data / open APIs (referenced as government infrastructure to enable multi-organization data sharing)
    • Data sovereignty (control, access, provenance)
    • Open banking as a precedent model (extension beyond financial services)
  • Frameworks & Resources Offered:

    • White paper by speakers (available via link below transcript, shared on LinkedIn) with 8–10 actionable items per dimension
    • Responsible AI framework (checklist approach for ethics, trust, security, governance)
    • AI policy template (defining organizational AI use, prioritization, constraints)
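
The speakers' white paper and policy template are referenced above but not reproduced here. As a hypothetical sketch of how such a template could be made machine-readable, the example below gives each of the four dimensions an accountable owner, a KPI, and a quantified target, and blocks deployment if any dimension misses its target. Every field name, owner, and threshold is an assumption for illustration.

```python
# Minimal sketch (not the speakers' template): one way to make an "AI in 4D"
# policy machine-readable, so every dimension has an owner, a KPI, and a target.
# All field names, owners, and thresholds below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DimensionCheck:
    dimension: str        # Sovereignty, Green, Responsible, or Valuable
    owner: str            # accountable role
    kpi: str              # how the dimension is measured
    target: float         # quantified threshold agreed before deployment
    measured: float | None = None

    def passes(self) -> bool:
        return self.measured is not None and self.measured <= self.target

@dataclass
class AIUseCasePolicy:
    use_case: str
    checks: list[DimensionCheck] = field(default_factory=list)

    def ready_for_production(self) -> bool:
        # The 4D framing: failure in any one dimension blocks deployment.
        return bool(self.checks) and all(c.passes() for c in self.checks)

policy = AIUseCasePolicy("complaint triage assistant", [
    DimensionCheck("Sovereignty", "CISO", "share of data processed outside approved jurisdiction", 0.0, 0.0),
    DimensionCheck("Green", "Platform lead", "gCO2e per inference", 0.5, 0.3),
    DimensionCheck("Responsible", "Ethics board", "demographic parity gap", 0.05, 0.02),
    DimensionCheck("Valuable", "Service owner", "median handling time (hours)", 24.0, 18.0),
])
print(policy.ready_for_production())  # True only if every dimension meets its target
```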

Note: The transcript includes technical difficulties with the QR code display and minor audio/presentation issues; these do not affect content integrity.