How Trust and Safety Drive Innovation and Sustainable Growth
Executive Summary
This panel discussion explores the apparent paradox of a deregulatory moment in AI governance coinciding with widespread emphasis on trust and safety principles. The panelists argue that trust is not merely a regulatory concern but an economic driver of innovation and adoption, and that appropriate governance structures—whether through regulation, principles, or industry practices—are complementary to, not opposed to, technological advancement. The discussion reveals that effective AI governance requires coordinated, multi-stakeholder approaches tailored to specific harms rather than blanket prescriptive regulation.
Key Takeaways
- Trust and safety are not obstacles to innovation—they are prerequisites. Organizations that demonstrate trust (through transparency, risk mitigation, and accountability) will outcompete those that don't. This is true in any regulatory environment.
- AI governance should be layered and tailored, not monolithic. Combine sector-specific regulation (where clear harms exist), outcome-driven principles, codes of practice, transparency requirements, and independent oversight. One legal instrument cannot address all AI risks.
- Transparency and provenance tools are critical infrastructure. Systems that reveal how AI models were trained, what data they use, and how they make decisions enable both regulators and users to assess trustworthiness and enforce accountability. Software bills of materials and similar provenance mechanisms deserve investment.
- Agency (user control and recourse) should replace reliance on consent alone. Effective governance ensures users can understand what a system does, maintain ongoing control, and have meaningful ways to exit or correct harms—not just pre-authorized one-time consent.
- Independent, well-resourced regulators and civil society are essential to trust. Markets need watchdogs with the technical expertise, legal authority, and funding to investigate, enforce, and publicly communicate about AI harms. Neither technology alone nor market forces alone will protect the public interest.
Key Topics Covered
- Trust as an economic driver: Why consumer and enterprise trust is essential for AI adoption and business sustainability
- The regulation vs. innovation dichotomy: Why this framing is false; how thoughtful governance actually enables innovation
- Existing legal frameworks: How GDPR, data protection, and sector-specific laws apply to AI without dedicated AI legislation
- Identifying and prioritizing harms: Challenges in prospectively identifying which risks warrant regulatory intervention
- Regulatory coordination: Inter-agency and international cooperation (e.g., ICO-Ofcom coordination on Grok case)
- Trust and safety innovation: Four promising approaches: provenance tools, agency, privacy-enhancing technologies, and well-funded regulators
- Global regulatory divergence: Why a "Brussels effect" is not yet happening in AI, and how different jurisdictions are taking context-specific approaches
- Supply chain governance: Managing risk across the entire AI value chain, not just at deployment
- Burden-shifting in compliance: How solutions like cookie consent failed by placing burden on users rather than accountable actors
Key Points & Insights
- Trust enables adoption, not vice versa: People will not use technology they don't trust. The Benjamin Harrison electricity anecdote illustrates that familiarity and confidence must precede widespread use. This is true for AI as an economic matter, regardless of regulatory mandates.
- Regulation can be fuel for innovation: Contrary to the perceived regulation-vs.-innovation binary, thoughtful regulation (like product liability laws) actually enables markets by creating common standards and reducing the transaction cost for consumers to trust products. This applies to AI as well as traditional industries.
- Existing laws already apply to AI: The UK, for example, does not have a dedicated AI law, yet data protection (GDPR) and sector-specific regulations provide a "de facto regulatory regime." The issue is not absence of law but clarity and enforcement.
- The transparency and accountability gap: Existing laws (e.g., US equal employment law) may be violated by AI systems, but opacity makes violations hard to detect and prove. A transparency disclosure regime and impact assessments are needed to give meaning to laws already on the books—this is a complementary, "light touch" addition.
- Prospective vs. prescriptive regulation is challenging: Harms are still coalescing; international consensus on AI harm archetypes (e.g., via ISO, international AI safety reports) is incomplete. Prescriptive legislation may become obsolete quickly. Outcome-driven, principle-based frameworks (with agile implementation mechanisms like codes of practice) are more resilient.
- Context matters; one-size-fits-all regulation doesn't work: What constitutes harm or appropriate governance varies by cultural and societal context. Singapore's approach (regulating clear harms like deepfakes in elections, leaving other areas to sectoral regulation and market-driven assurance) reflects this reality.
- Emerging regulatory convergence without formal harmonization: Despite no "Brussels effect," smart solutions are spreading organically (e.g., Colorado and other US states adopting high-risk mitigation concepts from the EU AI Act; transparency laws in California and New York echoing EU AI Act provisions). Learning and adaptation are happening peer-to-peer.
- Regulatory fragmentation requires coordination: The Grok case demonstrated that addressing a single AI harm may require coordination between multiple regulators (ICO for data protection, Ofcom for online safety). Absence of a single AI regulator necessitates proactive inter-agency communication and shared expectations.
- Supply chain governance is underexamined: Managing risk across the entire AI supply chain (models, platforms, tools, services, applications) is complex but critical. Drawing on cybersecurity's decades of experience with supply chain risk is valuable.
- User burden-shifting is ineffective and unjust: The cookie consent example shows that shifting the compliance burden to individual users (via disclosure and choice) fails when users lack time, information, and meaningful alternatives. Accountability must rest with organizations and regulators, not users.
Notable Quotes or Statements
- "We won't use it if we don't trust it." — Moderator (opening analogy about President Harrison and the light switch), illustrating that adoption requires confidence.
- "The real story is one of adoption and that has been the overwhelming theme of the summit this year. And for people to adopt this technology, they need to trust it." — Alexandra Reeve Givens (Center for Democracy and Technology), emphasizing that trust is the economic driver.
- "Responsible thoughtful regulation can be fuel for innovation." — Alexandra Reeve Givens, challenging the regulation-vs.-innovation framing.
- "Without some type of disclosure regime that requires transparency in these high-risk scenarios... you actually don't get the remedy that people really need under existing law." — Alexandra Reeve Givens, explaining why transparency is necessary to make existing laws meaningful for AI systems.
- "Regulation is a little bit fragmented... we need to be working very very closely." — John Edwards (UK ICO), on the necessity of inter-regulatory coordination.
- "These organizations need to understand that they can be switched off, that they are not actually all powerful." — John Edwards, on the ICO's power to restrict services that violate UK norms (e.g., Grok on TikTok).
- "Every country has a unique context and it's the job of the government to figure out what's harmful." — Denise Wong (PDPC Singapore), defending context-specific governance.
- "The law cannot solve the problem. But actually maybe another technology can." — Denise Wong, advocating for privacy-enhancing technologies alongside regulation.
- "We didn't misdiagnose the harm, we misdiagnosed the remedy." — Alexandra Reeve Givens, on cookie consent, illustrating that solutions must address both the problem and the mechanism of accountability.
- "Solutions that acknowledge the harm are tailored but also take that burden off individual users. So you're empowering users but not burdening them." — Alexandra Reeve Givens, synthesizing effective governance design.
Speakers & Organizations Mentioned
| Speaker | Title/Role | Organization |
|---|---|---|
| Alexandra Reeve Givens | CEO | Center for Democracy and Technology (CDT) |
| Amanda Craig | General Manager, Responsible AI Policy | Microsoft |
| John Edwards | Information Commissioner | UK Information Commissioner's Office (ICO) |
| Denise Wong | Deputy Commissioner | Personal Data Protection Commission (PDPC), Singapore |
| Trevor | Moderator | International Association of Privacy Professionals (IAPP) |
Other organizations/bodies mentioned:
- European Union (EU AI Act negotiations and implementation)
- Global Privacy Assembly (GPA)
- Ofcom (UK communications regulator)
- ISO (International Organization for Standardization)
- Colorado state legislature (high-risk AI mitigation law)
- California state legislature (transparency law)
- New York state legislature (transparency law)
- Utah state legislature (regulatory sandboxes for AI)
Technical Concepts & Resources
- GDPR (General Data Protection Regulation): Foundational EU data protection law, in effect for over seven years; globally influential, adopted or adapted by 120+ countries.
- EU AI Act: Comprehensive EU AI regulation, in force since 2024 with phased application, including provisions for high-risk mitigation, fairness and data-governance requirements (Article 10), regulatory sandboxes, and transparency. Implementation is subject to recent omnibus rollback proposals.
- Data Protection Impact Assessment (DPIA): Regulatory tool requiring organizations to assess and document privacy and data risks before deployment.
- Privacy-Enhancing Technologies (PETs): Technical solutions (e.g., federated learning, differential privacy) that allow data to be used without fully disclosing personal information. Cited as rapidly advancing and moving into production use.
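To make one such technique concrete, the sketch below implements the core move of differential privacy: answering a counting query with calibrated Laplace noise so that no individual's presence can be confidently inferred from the result. This is a minimal illustration, not drawn from the panel; the function name, dataset, and epsilon values are invented.

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so noise drawn from
    Laplace(0, 1/epsilon) suffices. Smaller epsilon means more noise
    and stronger privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) by inverting the CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

With epsilon near 1, an analyst still learns a usefully accurate aggregate, while any single individual's contribution is masked by the noise—the trade-off regulators weigh when endorsing PETs.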
- Software Bill of Materials (SBOM): Cybersecurity practice, applicable to AI, of maintaining a machine-readable inventory of a system's components and dependencies to enable accountability and traceability. Suggested as a model for "agentic AI" systems.
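As an illustration of what an SBOM-style manifest for an AI system might contain, the sketch below builds a small inventory covering the model, its training data, and a supporting library. Field names loosely follow the CycloneDX SBOM style, but all component names and versions here are invented for illustration.

```python
import json

# Illustrative "AI bill of materials": an inventory of the model, its
# training data, and supporting libraries, so downstream deployers and
# regulators can trace what went into a system.
ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "resume-screener",
            "version": "2.1.0",
            "properties": {"trained-on": "applicants-2023"},
        },
        {
            "type": "data",
            "name": "applicants-2023",
            "version": "1.0",
            "properties": {"contains-personal-data": "true"},
        },
        {"type": "library", "name": "scikit-learn", "version": "1.4.2"},
    ],
}

manifest = json.dumps(ai_bom, indent=2)
```

The value of such a manifest is that the "contains-personal-data" flag on a dataset component propagates accountability: anyone deploying the model can see that personal data sits in its supply chain.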
- Provenance tools: Systems that reveal the origin and processing history of data and model outputs; critical for transparency and accountability.
- Federated learning: Technique for training AI models on distributed data without centralizing personal information.
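To make the federated idea concrete, the sketch below runs rounds of federated averaging (FedAvg) on a toy one-parameter regression: each client computes a gradient step on its own data, and only model weights, never raw records, reach the server. The function names and data are illustrative, not drawn from the panel.

```python
def local_step(weight, data, lr=0.1):
    """One gradient step of a least-squares fit y = w * x on a client's
    local data. Only the updated weight leaves the device, never the
    raw (x, y) pairs."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(global_w, client_datasets, lr=0.1):
    """One FedAvg round: every client trains locally, then the server
    averages the returned weights, weighted by each client's dataset
    size."""
    updates = [(local_step(global_w, d, lr), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total
```

Run for enough rounds, the shared weight converges toward the relationship in the pooled data even though no client's records were ever centralized—the property that makes the technique attractive as a PET.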
- Codes of practice: Regulatory mechanism (e.g., Singapore's social media codes) offering flexibility relative to prescriptive law; can be updated more quickly than legislation.
- Regulatory sandboxes: Supervised environments where SMEs and innovators can test products against regulatory requirements with a degree of enforcement leniency.
- High-risk/sensitive use scenarios: Framework (e.g., Microsoft's categorization) identifying applications with consequential life impact (employment, education, legal treatment), risk of psychological or physical harm, or human rights implications.
- Agentic AI: Autonomous AI systems that make decisions across multiple dynamic components; governing them requires attention to supply-chain risk.
- Deepfakes: AI-generated synthetic media (audio, video, images) mimicking real people; subject to targeted regulation in Singapore and investigation (the Grok case) in the UK.
- Fair Machine Learning / Algorithmic Fairness: Techniques to detect and mitigate bias (e.g., age-based discrimination in resume screening); connects to existing equal employment law.
- International AI Safety Report / ISO discussions: Emerging taxonomies of AI harms and risk archetypes; sources of convergence on harm identification.
Additional Context
Historical framing:
- 2023: Bletchley Park (UK) hosted the first "AI Safety Summit," focusing on frontier and existential risks.
- 2025: Paris hosted the "AI Action Summit."
- 2026 (present event): "AI Impact Summit," with a shift toward adoption, practical governance, and innovation.
Regulatory mood:
- Deregulatory sentiment is prevalent; proposals to roll back parts of the EU AI Act are underway.
- Paradoxically, trust and safety messaging is omnipresent on conference floor, suggesting public and industry appetite for frameworks that reduce uncertainty.
Key lesson from prior technology:
- Cookie consent and GDPR-era cookie banners are cited as a cautionary tale: the harm was correctly identified (privacy erosion via tracking), but the remedy (user consent) shifted burden to individuals without providing meaningful choice or market correction. This informs current AI governance design.
