AI as critical infrastructure for continuity in public services
Executive Summary
This panel discussion examines how AI can serve as critical infrastructure for public services while maintaining trust, security, and inclusivity across borders. The speakers—representing government, international standards bodies, civil society, and private sector—argue that trusted AI requires not just advanced technology but also robust governance frameworks, interoperable standards, data sovereignty protections, and meaningful community participation. The overarching message is that trust in AI must be deliberately designed, governed, implemented, and continuously earned—it cannot be assumed.
Key Takeaways
- Trusted AI is a design problem, not a technology problem. The biggest implementation gaps are not in GPUs or models but in data governance, organizational alignment, standards awareness, and human readiness. Technology advancement is outpacing institutional and social capacity.
- Standards + capacity = enablement. Awareness of ITU's roughly 500 AI standards (approved and in the pipeline) is low; organizations need training to use these building blocks rather than starting from scratch. Standards without capacity-building initiatives will remain underutilized.
- Inclusivity breeds legitimacy. Multistakeholder participation before deployment, community input on linguistic and cultural context, and transparent accountability mechanisms create real public trust, not post-hoc fixes or top-down mandates.
- Data, not model innovation, is the constraint. Roughly 80% of AI pilots fail to scale because of data silos and governance gaps, not algorithmic limitations. Solving data accessibility and governance is more urgent than advancing model sophistication.
- The human factor is both the biggest barrier and the greatest opportunity. Change management, clear communication, workforce retraining, and addressing job-security fears are prerequisites for AI adoption, arguably more important than regulatory compliance or technical standards.
Key Topics Covered
- Cyber Security & Data Protection — safeguarding critical infrastructure (energy, water, health systems) and national data sovereignty
- National AI Initiatives — Poland's development of domestic language models (Bl and Biki) and public sector AI deployment
- Global AI Standards — ITU's standardization work (200+ approved standards, ~300 in the pipeline, ~500 total) for interoperability and data exchange
- Multistakeholder Governance — inclusive policy-making involving governments, civil society, technical experts, and private sector
- Community-Driven Ecosystems — building trust through local participation, linguistic diversity, and feedback mechanisms
- Regulatory Alignment — EU AI Act as a framework for international trade and business confidence
- Infrastructure & Resilience — foundational requirements: control, explainability, and uptime in critical systems
- Adoption Barriers — data silos, organizational misalignment, skills gaps, and human trust deficits
- Change Management — communicating AI benefits without creating workforce anxiety or burnout
Key Points & Insights
- Critical Infrastructure Requires Holistic Protection: Cyber security, data safety, and AI trustworthiness are inseparable; protecting energy, water, health, and local government systems demands coordinated investment at national and local levels.
- Interoperability Standards Lower Barriers: ITU's ~500 AI standards (covering data formats, APIs, protocols, model lifecycle definitions, and conformance testing) enable systems developed in different regions to communicate seamlessly, reducing investment costs and increasing efficiency.
- Standards Alone Are Not Enough: Even with standards in place, awareness of their existence and application is low. Organizations need capacity building to articulate problems, plan implementations, and translate initiatives into operational projects.
- Data Governance Is the Primary Bottleneck: In practice, ~80% of AI pilots in India fail to reach production; the root cause is siloed, unready data and absent data governance, not technology limitations. Data must be accessible, governed, and scaled for real-world deployment.
- Trust Has Three Foundational Layers:
- Control — Do you own and control your data and infrastructure? Is jurisdictional sovereignty protected?
- Explainability — Can you understand why AI made a decision across all layers (data, network, governance)?
- Resilience — Will AI systems remain operational 24/7 in critical services (healthcare, emergency response)?
- Inclusion Determines Legitimacy: Multistakeholder participation (government, civil society, technical experts, private sector) in AI governance before deployment, not after, creates legitimacy and trust. Transparent processes, public comment periods, and accessible documentation strengthen confidence.
- Linguistic & Contextual Diversity Must Be Central: AI solutions deployed in languages or contexts unfamiliar to 50–80% of a population break trust. Inclusivity requires participation from affected communities in both innovation and policy cycles, not just top-down implementation.
- Regulatory Clarity Enables Business Confidence: The EU AI Act (in force since 2024, with most obligations applying from 2026), while demanding compliance, provides a "playbook" that standardizes requirements globally. Clear rules, as opposed to regulatory uncertainty, help businesses plan and invest across borders; India and other nations should view this as an opportunity, not a burden.
- Human Barriers Outweigh Technical Ones: Organizational misalignment (legal, IT, and business units failing to coordinate), workforce anxiety ("Will AI replace me?"), and poor change communication slow adoption far more than technology constraints. Messaging must emphasize quality of work and task relief, not productivity metrics that trigger burnout.
- Local Trust Cannot Be Delegated Globally: While global standards and governance frameworks set the foundation, trust is ultimately built at the community level through local feedback loops, accessible documentation, and responsive oversight mechanisms. Policy decisions must resonate both "down" from global to local and "up" from communities to decision-makers.
Notable Quotes or Statements
- Minister Roshinski (Poland): "Cyber security is the crucial point. We cannot imagine how we can run the business if we have no energy, no water, and our data is not protected."
- Atsuko Akuda (ITU): "We have over 200 already approved AI standards and 200 more in the pipeline... there are many different standards available for everyone."
- Changatai (on governance): "Inclusivity breeds legitimacy and thereby trust... if all stakeholders give their point of view, you result in policies that have greater buy-in."
- Pamote (on implementation reality): "Almost 80% of those pilots don't make it to production... the key reason is data is siloed, data is not ready for AI scale."
- Moderator (closing): "Trusted AI is not built by technology alone. It requires public leadership, interoperable standards, resilient infrastructure, shared accountability, and human readiness... trust must be designed, governed, implemented, and continuously earned."
- Edita (on change management): "Don't tell users they will be more productive. Maybe the quality of work will be better... we must be very careful what wording we use regarding AI adoption."
Speakers & Organizations Mentioned
- Minister Roshinski — Polish government representative; discussed Poland's digital governance and national LLM initiatives
- Atsuko Akuda — International Telecommunication Union (ITU); standardization expert
- Changatai — Civil society/governance perspective; referenced Internet Governance Forum (IGF)
- Madas — Community-driven digital ecosystems; linguistic diversity and local value creation focus
- JJ — Polish Chamber of Commerce; India–EU–Poland trade perspective
- Mario/Marush — Millennium (IT services company); distributed development and AI compliance solutions
- Pamote — Infrastructure expert; data sovereignty and resilience focus
- Edita — Change management and user adoption specialist
- Organizations: ITU (International Telecommunication Union), Internet Governance Forum (IGF), EU (European Union), Poland, India
Technical Concepts & Resources
- ITU AI Standards: ~200 approved, ~300 in the pipeline (~500 total)
- Data formats, APIs, communication protocols
- AI model lifecycle definitions
- Conformance and testing specifications
- Harmonized terminology, vocabulary, and reference architecture
- National Language Models: Poland's Bl and Biki LLMs (developed with academia and the private sector)
- EU AI Act — regulatory framework in force since 2024, with most obligations applying from 2026; compliance requirements and sandbox solutions for businesses
- AI Compliance Suite — Millennium's tool for helping organizations navigate regulatory requirements and AI tool selection
- Data Governance Challenges:
- Data silos and lack of readiness for production scale
- No clear governance frameworks around data access and use
- Pilot-to-production gap (~80% failure rate)
- Critical Infrastructure Domains: Energy, water systems, health, local government services
- Multistakeholder Governance Model: Modeled on internet governance (public comment periods, open consultations, accessible documentation)
- Key Assessment Mechanisms: Impact assessments, independent oversight bodies, accessibility standards
Implementation Notes
This transcript reflects a high-level policy and strategy discussion rather than technical deep-dives. The emphasis is on governance, trust-building, and organizational readiness as prerequisites for AI adoption in public services. The discussion is notably grounded in cross-regional perspectives (EU, Poland, India) and highlights the gap between technological capability and institutional/social readiness to deploy AI responsibly at scale.
