Shaping Trustworthy AI for Tomorrow
Executive Summary
This multi-speaker summit session examines trustworthy AI from geopolitical, governmental, industrial, research, and policy perspectives. The central thesis is that trustworthy AI cannot be achieved in isolation—it requires international cooperation across standards, supply chains, governance frameworks, and values. Speakers from Norway, Brazil, Japan, and India present complementary approaches emphasizing that technical robustness alone is insufficient; institutional legitimacy, geopolitical trust, and human-centered values must underpin AI development.
Key Takeaways
- Trustworthy AI requires cooperation across three layers (technical, institutional, geopolitical) and cannot be achieved through technical means or isolated national regulation alone. International frameworks, shared standards, and mutual trust between states are prerequisites.
- Fragmentation creates vulnerability: Geopolitical competition for technological dominance, weaponized supply chains, and governance conflicts undermine trust. Small nations (like Norway) cannot be self-sufficient and must pursue digital sovereignty through strategic cooperation and alternatives, not isolation.
- Protective governance for vulnerable populations is non-negotiable: Child safety, data protection, prevention of algorithmic bias, and safeguards against election interference require proactive design constraints and regulatory oversight, not just innovation or user education.
- Values (inclusivity, equality, human rights, sovereignty) must be operationalized from constitution to implementation: Frameworks that are merely aspirational fail. Success requires translating values into specific technical requirements, institutional accountability mechanisms, and behavioral standards.
- Scale and practical deployment matter as much as research: Trustworthiness is not just what happens in the lab; it is what happens when systems are deployed at scale. Partnerships between the public sector, private industry, academia, and civil society are essential to real-world trustworthiness.
Key Topics Covered
- Geopolitical dimensions of AI trust: Supply chain concentration, technological fragmentation, and weaponization of interdependence
- Three-layer trust model: Technical trust (safety/reliability), institutional trust (legitimate governance), and geopolitical trust (international cooperation)
- Public sector AI governance: Norway's six-element framework for trustworthy AI in government
- Maritime industry applications: Practical AI implementation for safety, efficiency, and decarbonization
- Research directions: Scaling trustworthy AI, green AI, neuromorphic computing, and data reduction strategies
- Policy frameworks: EU AI Act implementation, national AI strategies (Japan, Brazil, India), and age restrictions for children online
- Democratic safeguards: Protecting against AI-enabled election interference and information manipulation
- Values-based governance: Inclusivity, equity, human rights, and sovereignty in AI development
- Digital sovereignty and resilience: Building alternatives and control mechanisms for small nations in a geopolitically fragmented world
Key Points & Insights
- Trustworthy AI is inherently geopolitical: "There is no such thing as trustworthy AI in one country alone." Technology structures power, and fragmented geopolitical blocs create segmented trust. Standards divergence, governance conflicts, and supply chain concentration create systemic vulnerabilities that cannot be solved through technical means alone.
- Three-layer trust architecture is necessary: Technical trust (precision, explainability, fairness, safety, privacy, sustainability) is foundational but insufficient. Institutional trust (legitimate, accountable governance) and geopolitical trust (states cooperating rather than weaponizing interdependence) are essential; without the third layer, the first two collapse.
- Global AI supply chains have concentrated control: Advanced chips, foundation models, and critical minerals are produced in very few locations. This creates strategic dependencies and "trust vulnerabilities": if access can be restricted overnight due to geopolitical tension, trust in AI systems becomes fragile. Resilience requires cooperation across jurisdictions and political systems.
- AI affects democracy across borders but regulation remains national: Digital platforms shape public discourse, elections, and information flows globally. Elections don't need to be hacked to be destabilized; tempo amplification and algorithmic visibility suffice. Because impact is global but regulation is national, isolated regulation is insufficient.
- Norway's six-element framework for public sector AI: (1) legitimate, trusted institutions; (2) shared public values (equality, fairness, accountability); (3) robust governance (legislation plus the EU AI Act plus soft law); (4) high-quality public data; (5) a well-informed, digitalized population; (6) targeted, multidisciplinary research. The elements are interdependent; failure in one undermines the whole structure.
- Child safety online is a critical governance priority: Children lack protection in the digital world equivalent to what they have in physical spaces. Large tech companies gather extensive data and use dark patterns, algorithmic dependencies, and manipulative design. Norway proposes an age limit of 15 for social media and regulatory controls on platform design, while balancing the need for digital competence education.
- AI as "heavy industry" requires environmental and resource governance: Data centers consume massive amounts of water and energy, and mining for critical minerals has geopolitical and environmental consequences. Proposed solutions include moving from large to small models, neuromorphic computing, optical computing, data reduction, and potentially taxing digital resources to fund environmental restoration.
- Japan's approach balances innovation with risk mitigation through trust: Japan's declining birth rate and aging population make AI adoption crucial, but public anxiety about technical risks (misjudgment, bias), social risks (discrimination, privacy infringement), and national security risks (cyber attacks) must be addressed. The AI Basic Plan treats trusted AI as essential both to reducing public concern and to enabling active utilization.
- India's framework emphasizes optimism, inclusion, and constitutional values: Rather than regulatory containment, India takes an expansive view of governance that drives adoption and capacity building. Constitutional principles (fairness, equality) are embedded in AI governance frameworks, and digital public infrastructure (Aadhaar, UPI) provides a foundation for inclusive AI deployment at scale.
- Brazil's approach prioritizes sovereignty, responsible behavior, and digital infrastructure: AI is not purely a technology problem; it requires responsible, ethical, transparent behavior. Digital public infrastructure and data governance safeguard democratic values and human rights while enabling personalized, proactive public services.
Notable Quotes or Statements
Nils (Geopolitical opening):
"There is no such thing as trustworthy AI in one country alone... Technology is no longer just innovation policy. It is foreign policy. It is security policy and increasingly it is trust policy."
Nils (On fragmentation):
"If AI systems are developed within fragmented geopolitical blocs, trust becomes segmented. If standards diverge, interoperability declines. If governance frameworks conflict, legitimacy erodes."
Heather Broomfield (Norwegian Digitalization Agency, on the tightrope):
"We're walking a tightrope here of trying to get a balance for how do we harness this really powerful technology while at the same time ensuring that it respects trust, remains trustworthy, because losing trust in the Norwegian public administration is quite frankly just simply not an option."
Minister Karianne Tung (Norway's Minister of Digitalization):
"I really believe that children and young people have the right to be as protected in the digital world as in the physical world. And as of now, they are not."
Minister Karianne Tung (On digital sovereignty for small nations):
"Digital sovereignty is not about one thing. Digital sovereignty is about a lot of things. It's many layers. It's a whole value chain... Norway can't be self-sufficient because it is about a lot of things. But it's important for us to have the ability to act, to have alternatives."
Professor Martin Dalen (Trust Center):
"If you have trust in something, that thing should be trustworthy. And if you develop something that is trustworthy, I or anyone else should have trust in that thing."
Professor Martin Dalen (On AI as heavy industry):
"AI is heavy industry. It's basically hunting for critical minerals. It's very thirsty because these data centers are using a lot of water, and they're also using a lot of energy."
Amlan Mohanty (India, on values-based approach):
"These values are embedded in our constitution. These principles of fairness and equality are constitutional rights that Indian citizens have and there's no reason to believe this doesn't apply to AI."
Speakers & Organizations Mentioned
Government & Policy:
- Nils (Opening speaker, geopolitical framing)
- Heather Broomfield – Director General, Norwegian Digitalization Agency
- Karianne Tung – Minister of Digitalization and Public Governance, Norway
- Yukio Termura – Japanese government representative
- Amlan Mohanty – Technology lawyer and AI adviser to the Indian government
- Sindhu Gangadharan – Indian industry representative (SAP Labs India)
- Luanna Roncaratti – Brazilian government representative (Ministry of Management and Innovation in Public Services)
- Prime Minister Modi (referenced) – India
- President Macron (referenced) – France
Industry & Private Sector:
- Hildegunn McLaren – Senior Vice President, Technology Office, Kongsberg Maritime
- Kongsberg Maritime – Global maritime technology provider (8,000+ employees, 35 countries, equipment on ~30,000 vessels)
- SAP (represented by Sindhu Gangadharan)
Research & Institutions:
- Professor Martin Dalen – University of Oslo; leads Norwegian Center for Trustworthy AI (Trust)
- Trust (Norwegian Center for Trustworthy AI) – Research center, organized the session
- University of Oslo – Host institution for Trust center
International Frameworks & Initiatives:
- UN (United Nations)
- OECD
- GPAI (Global Partnership on AI)
- Hiroshima Process (first global framework on responsible AI)
- EU AI Act (referenced for transposition into Norwegian and other national laws)
- BRICS (Brazil-Russia-India-China-South Africa grouping, mentioned)
Technical Concepts & Resources
AI Governance & Trust Frameworks:
- Three-layer trust model: Technical trust + institutional trust + geopolitical trust
- Norwegian six-element framework: Legitimate institutions, shared values, robust governance, high-quality data, digitalized population, targeted research
- EU AI Act – Primary regulatory reference for transposition into European and aligned jurisdictions
- AI Basic Plan (Japan) – National strategy balancing innovation and risk mitigation
- India's AI Guidelines – Constitution-based, values-driven governance framework
- Brazil's National AI Plan – Emphasis on sovereignty, data governance, and human rights
- Hiroshima Process – First global consensus framework on responsible AI
Technical & Computational Approaches:
- Large vs. small models – Movement toward edge intelligence and smaller models to reduce compute
- Neuromorphic computing – Hardware inspired by biological neural systems
- Optical computing – Using photons instead of electrons for computation
- Data reduction & refinement – Addressing excessive data collection and storage
- Foundation models – Large language/multimodal models trained by small number of companies
- Remote operations & autonomous systems – Maritime industry applications
- Dynamic positioning – Vessel automation technology
- Green AI – Energy-efficient AI development for climate transition
Infrastructure & Resources:
- High-performance computing (HPC) – National capacity building (Norway developing access)
- Data centers – Resource-intensive, water-heavy, energy-intensive infrastructure
- Critical minerals & semiconductors – Geopolitically concentrated supply chains (advanced chips, rare earths)
- Cloud infrastructure – Concentrated in few providers
- Digital public infrastructure – Aadhaar, UPI (India); national data platforms (Norway, Brazil)
Regulatory & Governance Tools:
- Standards and interoperability – Cross-border technical alignment
- Transparency mechanisms – Explainability and disclosure requirements
- Dual-use AI guidelines – Military/security context regulation
- Soft law mechanisms – Guidelines, standards, voluntary frameworks
- Age limits on social media – Child protection (Norway proposing a minimum age of 15)
- Dark pattern regulation – Constraints on manipulative design
- KPIs & measurement frameworks – Environmental and governance performance indicators
- Taxation of digital resources – Proposed funding mechanism for environmental remediation
Research & Measurement:
- Accuracy & precision measurement – Quantifying AI system performance
- Fairness & bias detection – Inclusive, non-discriminatory outcomes
- Privacy & security testing – Robustness against adversarial attacks
- Climate-sensitive disease modeling – AI for climate health research (Trust project in Rwanda)
- Hydrodynamic optimization – Maritime industry data analytics example
- Propeller performance monitoring – Underwater noise reduction & wear prediction
Case Study Applications:
- Maritime industry: Dynamic positioning, propulsion optimization, autonomous vessels, remote support
- Public sector: Welfare service delivery, transparency in government decision-making
- Healthcare: Pandemic response (COVID-19 oxygen supply chain tracking)
- Education & agriculture – Indian government focus areas for inclusive AI deployment
Note on Accuracy: This summary is based entirely on the provided transcript. All claims, attributions, and statements reflect what was explicitly stated in the talk. No external sources were consulted, and no claims were invented beyond what is present in the source material.
