The Sociotechnical Turn in AI Governance
Executive Summary
This AI Impact Summit session articulates a critical shift in how AI governance should be conceptualized—moving beyond purely technical solutions to integrate social, institutional, and human considerations. Speakers from the Netherlands, Sweden, and India argue that treating AI as a sociotechnical phenomenon (rather than a purely technical one) is essential for trustworthy, equitable, and contextually appropriate AI deployment. The session emphasizes that AI governance is fundamentally about governing power, responsibility, and collective choice, not just technical robustness.
Key Takeaways
- Sociotechnical governance is about governing power, not just technology. AI governance must address who decides what fairness, safety, and trust mean—questions that require societal deliberation, not technical specification alone.
- "Why?" before "how?" Before implementing AI solutions, organizations must ask whether AI is appropriate for a given problem. Fear of missing out is driving premature adoption; contextual appropriateness should drive deployment.
- Include local voices and informal workers in AI design. Technology developed without input from those who will use or be affected by it (especially in Global South contexts) will fail to account for real constraints. Women cab drivers, informal workers, and marginalized populations must shape the systems affecting them.
- Build institutional safeguards and transparency mechanisms. Public algorithm registries, impact assessments, human oversight points, and regulatory clarity (like the EU AI Act) reduce harms when technical systems interact with power and vulnerable populations.
- Invest in domestic AI capacity and multistakeholder ecosystems. Countries and regions cannot outsource AI governance to dominant foreign companies. Public AI factories, ELSA labs, and talent development programs that combine academic, industrial, and civic expertise build both capability and legitimacy.
Key Topics Covered
- Sociotechnical frameworks for AI governance — integrating technical and social expertise in AI design from inception
- Power concentration and geopolitical implications — how AI development concentrated in specific regions/companies excludes perspectives from the Global South
- Institutional failures and accountability — case study of Netherlands' automated benefit fraud system and lessons learned
- AI as embedded choice, not neutral technology — rejecting determinism and framing AI as shaped by deliberate human decisions
- The "why" question — prioritizing societal needs assessment before technical implementation
- Trustworthiness vs. blind trust — building calibrated, appropriate trust rather than assuming technology neutrality
- Public-private-civil society collaboration — multistakeholder approaches to AI governance (ELSA labs, regulatory sandboxes)
- Data limitations and constructed reality — critiquing the assumption that more data = better outcomes
- Labor, marginalization, and digital systems — examining how ride-hailing platforms fail women drivers and informal workers
- Alternative philosophical traditions — moving beyond Western/Cartesian individualism ("I think therefore I am") toward Ubuntu/collective approaches ("I am because we are")
- Ecosystem-based talent development — the Dutch AI innovation center model combining universities, industry, and public sector
- Strategic autonomy vs. dependency — investing in domestic AI capacity to avoid lock-in by dominant foreign companies
Key Points & Insights
- AI is not inevitable or neutral. Virginia Dignum emphasizes that AI "doesn't happen to us" like weather—it is deliberately designed through specific choices about data, incentives, and governance. Current approaches are not inevitable; alternatives exist.
- The "why" question precedes the "how." Engineers typically learn to solve problems (how), not to ask foundational questions about whether AI is needed (why). Sociotechnical governance begins by asking whether AI should be deployed in a given context before optimizing technical specifications.
- Data is constructed, historical, and biased. Data availability is not an objective window on reality; it reflects what was deemed worth measuring. Focusing exclusively on data-rich problems creates a "datafication" bias in which unmeasured problems become invisible.
- Institutional failure reveals sociotechnical complexity. The Netherlands' automated benefit fraud system worked technically but failed socially—it was opaque, discriminatory, and harmed vulnerable families. The failure was not algorithmic malfunction but a breakdown in how the system interacted with institutional power, legal safeguards, and citizen welfare.
- Power and accountability are central governance questions. Sociotechnical governance is not about controlling technology; it is about governing power distribution, responsibility allocation, and collective decision-making. Questions such as "Who decides what fairness means?" and "Who bears the costs?" cannot be delegated to technical specialists.
- Global South perspectives are systematically excluded. Technology is developed by nations and companies with no experience of broken infrastructure, unreliable power, or informal labor. Women cab drivers in India are often unfamiliar with new routes, yet ride-hailing apps provide no route preview—a basic usability gap that reflects design without local context.
- Human-AI collaboration requires human agents embedded in systems. Rather than automating decisions entirely, effective systems (especially for vulnerable populations) must preserve human judgment points. Users and workers trust humans more than algorithms and need human managers accessible when technology decisions affect their livelihoods.
- Philosophical traditions shape AI architecture. Western AI development encodes Cartesian individualism ("I think therefore I am"), producing systems optimized for individual utility. Ubuntu philosophy ("I am because we are") or other collective traditions would produce structurally different systems prioritizing interdependence and community.
- Trustworthiness requires calibrated, not maximal, trust. Appropriate trust means neither blind faith in technology nor wholesale rejection. It requires transparency (public algorithm registries), participatory design, and impact assessments that help users and stakeholders make informed decisions about relying on a system.
- Multistakeholder collaboration is not a luxury—it is necessary for legitimate governance. Public-private partnerships, regulatory sandboxes, and ELSA labs that bring researchers, civil society, government, and industry together prevent fragmented governance and ensure diverse perspectives inform both innovation and regulation.
Notable Quotes or Statements
"AI doesn't happen to us. It's not the weather, not an uncontrollable force. It's something that we people design, that we decide, and is shaped by our choice, by our data, by the incentives and the governance that we put—or failed to put—on it."
— Virginia Dignum
"If you think that technology is the solution to all your problems, you definitely don't understand technology. But even worse, you don't understand your problems."
— Virginia Dignum (citing unnamed source)
"Innovation without accountability is not progress. It is risk shifted onto those who can least afford it."
— Harry Pis, Ambassador for AI, Netherlands
"This was not a technology failure in the narrow sense. The algorithms functioned as designed. It was a social technical failure—a failure to understand how a technical system interacts with institutional power, administrative culture, legal safeguards, and the lived realities of vulnerable citizens."
— Harry Pis (describing Netherlands' benefit fraud case)
"AI is built in countries that come from a philosophical and academic tradition based on the Cartesian 'I think therefore I am.' We've encoded individualism into our systems. If we had started from Ubuntu—'I am because we are'—we would have built completely different systems."
— Virginia Dignum
"We are taking an extremely lazy approach to AI research. We take the idea that we need more data, more processing power, and that's what we have now. But real innovation requires thinking about alternatives: What does AI mean when we think about distribution, inclusion, diversity?"
— Virginia Dignum
"Humans still trust humans more than they trust technology. Women drivers prefer to interact with human managers rather than rely on app algorithms to tell them where to go or how their ratings are affected."
— Pavi Bunson (on ride-hailing platform design)
"The future of AI governance will not be shaped by any single country, institution, or research tradition. It will be built step by step, together, by people willing to speak honestly about what is not working."
— Harry Pis
Speakers & Organizations Mentioned
Key Speakers
- Virginia Dignum — Professor of Responsible AI, Umeå University (Sweden); affiliated with Pacific AI Lab, University of Amsterdam; computer scientist and sociotechnical advocate
- Harry Pis — Ambassador for AI, Government of the Netherlands; co-chair of working group on AI for economic growth and social good (AI Impact Summit); leads Netherlands AI task force
- Lisa (surname not provided in transcript) — Senior Policy Officer for AI, Government of the Netherlands
- Pavi Bunson (or similar spelling) — Impact Officer, Inclusive Cultures Lab, University of Amsterdam; researcher on digital labor and ride-hailing platforms
- Maarten de Rijke — Director, Innovation Center for Artificial Intelligence (ICAI), Netherlands
Organizations & Initiatives
- Umeå University (Sweden)
- University of Amsterdam / Pacific AI Lab
- Government of the Netherlands — Ministry of Foreign Affairs, AI Task Force
- Innovation Center for Artificial Intelligence (ICAI) — Netherlands-based multistakeholder collaboration hub
- ELSA Labs (Ethical, Legal, and Societal Aspects labs) — 12+ labs operating across the Netherlands, focusing on domains such as media & democracy, public safety, sustainable food systems, healthcare, police, defense, mobility/logistics, services, and infrastructure
- Utrecht University's Data School — developed an impact assessment for algorithms and human rights
- EU (European Union) — referenced for the AI Action Plan, the AI Act, and the EuroHPC initiative
- United Nations — referenced for governance processes and civil society roles
- India — hosting nation; referenced for AI governance guidelines
Case Studies & Systems Referenced
- Netherlands tax/benefit fraud automated risk profiling system — cautionary example of sociotechnical failure affecting vulnerable families
- Uber and Ola (ride-hailing platforms in India) — examples of technology excluding local context and worker needs
- Google Maps — cited as an example of user-trust failures in technology
- GPT-NL — Dutch large language model built on trusted data by public institutions
- Public Algorithm Registry (Netherlands) — documents 1,350+ algorithms from 320+ public bodies
Technical Concepts & Resources
Frameworks & Methodologies
- Sociotechnical systems approach — analyzes AI as embedded in institutional, social, and technical contexts simultaneously
- Impact assessment for algorithms and human rights — assessment tool developed with Utrecht University's Data School, updated for generative and agentic AI and EU AI Act compliance
- Public-private partnership (PPP) models — collaborative governance approach
- Regulatory sandboxes — controlled environments where companies can develop and test AI compliance with regulations like the EU AI Act
- ELSA Labs structure — multistakeholder collaboration bringing researchers, public institutions, and civil society to shared challenges
AI/Technical Terms Referenced
- Generative AI and agentic AI — mentioned as emerging concerns requiring updated assessment frameworks
- Deep learning — discussed in context of training on historical data and encoding past biases
- Algorithms and automated decision systems — general focus on opaque, discriminatory deployment
- Large language models (LLMs) — referenced via the GPT-NL example
- Chatbots — mentioned as systems requiring human oversight and human escalation paths
Policy & Regulatory References
- EU AI Act — regulatory framework shaping Netherlands' governance approach
- EU AI Action Plan — strategic direction for European AI development
- India AI Governance Guidelines — national guidance framework referenced as aligned with European trustworthiness-centered approach
- India AI Impact Summit — venue for this session; noted as central to shaping global thinking on emerging technologies
Philosophical & Theoretical References
- Cartesian dualism ("I think therefore I am") — Western philosophical tradition encoded in AI systems
- Ubuntu philosophy ("I am because we are") — alternative collective tradition mentioned as foundation for different AI architectures
- Datafication — treating all phenomena as data-reducible, creating bias toward measurable problems
Investment & Funding Programs (Netherlands)
- ICAI structure — 59 labs and units across the Netherlands with €300M+ in funding, 158 industry partners, and 653+ PhD/postdoc researchers
- NOLAI (National Education Lab AI) — national education lab for AI, supported by €80M from the National Growth Fund
- EuroHPC initiative — public AI factory in the Netherlands providing computing access to researchers, startups, and public institutions
- First AI Unit — launched late 2025, combining multiple labs focused on sustainable development goals
Note: The transcript contains some audio artifacts (repetitions, transcription errors) that have been interpreted contextually. Some speaker names are partially unclear from transcription; attributions reflect best interpretation from provided text.
