How to Build Secure AI: Essential Development Guidelines
Executive Summary
The UK government, through the National Cyber Security Centre (NCSC) and the Department for Science, Innovation and Technology (DSIT), developed an international technical standard (EN 304223) addressing cybersecurity risks specific to AI systems. Created through multistakeholder consultation spanning governments, industry, academia, and the public sector, the standard establishes 13 principles across five lifecycle phases (design, development, deployment, maintenance, and end-of-life), providing baseline security requirements for organizations building and deploying AI systems globally.
Key Takeaways
- A New International Standard for AI Security Is Now Available: EN 304223 provides concrete, globally consulted baseline security requirements for AI systems across their entire lifecycle and is freely available through ETSI, making it a practical reference for organizations worldwide.
- Security Must Be Built Into AI from Design, Not Added After Deployment: The framework emphasizes "secure by design" principles requiring security considerations across all five lifecycle phases (design → development → deployment → maintenance → end-of-life), not as an afterthought.
- Baseline Requirements Enable Broader Adoption Than Aspirational Standards: By setting minimum mandatory requirements ("shall do") alongside optional enhancements ("should do"), the standard balances accessibility for resource-constrained organizations with room for security-conscious leaders to exceed baselines.
- AI-Specific Threats Require New Security Thinking: Traditional cybersecurity measures alone are insufficient; organizations must understand and mitigate AI-specific attacks like adversarial input manipulation, training data poisoning, and model inversion that have no equivalents in conventional software security.
- Standards Are Most Effective When Developed Through Transparent, Multistakeholder Processes: The iterative consultation involving governments, industry, academia, and civil society across all continents produced a standard with genuine credibility and practical applicability far beyond what a single government could achieve unilaterally.
Key Topics Covered
- AI-Specific Cybersecurity Threats: Adversarial inputs, data poisoning, model inversion, membership inference, and prompt injection attacks
- Technical Standards Development Process: How international standards bodies (ETSI, ISO, ITU, etc.) develop specifications through consensus-based, transparent processes
- 13 Security Principles Across AI Lifecycle: Organized framework spanning design, development, deployment, maintenance, and decommissioning phases
- Multistakeholder Approach: Collaboration with governments, industry, academia, and public sector across multiple continents
- Standards Bodies and Global Frameworks: Role of ETSI, ISO, ITU, 3GPP, W3C, IETF in cybersecurity standardization
- Implementation and Conformity Assessment: Supporting documentation including technical specifications, implementation guides, and conformity assessment mechanisms
- Regulatory Alignment: Relationship between voluntary standards and regulatory requirements (e.g., EU AI Act)
- Rapid Standardization Strategy: Balancing speed of development with quality to address fast-moving AI landscape
- Data Minimization and Ethical Considerations: Limitations of security standards in addressing broader data ethics and privacy concerns
- Emerging AI Behaviors and Standard Adaptation: Challenge of keeping standards current as AI capabilities rapidly evolve
Key Points & Insights
- AI Security Requires Unique Mitigations Beyond Traditional Cybersecurity: While traditional software security threats apply to AI systems, AI introduces distinct attack vectors (adversarial inputs, data poisoning, model inversion, membership inference, and prompt injection) requiring specialized security measures not addressed in legacy security frameworks.
- 13 Principles Establish Baseline, Not Maximum Standards: The framework deliberately sets achievable minimum baseline security requirements (using "shall" for mandatory provisions and "should" for voluntary ones) rather than ideal standards, recognizing that many organizations lack resources for comprehensive implementation while ensuring broad adoption.
- Standards Enable Interoperability and Compatibility Across the Ecosystem: Technical standards are fundamental to allowing different technologies, companies, and jurisdictions to work together securely; they provide the "underpinning" that enables innovation and compatibility (analogous to how different phone networks globally can communicate).
- Multistakeholder Collaboration Across All Continents Ensured Global Applicability: The standard was developed with input from over 20 cybersecurity agencies, hundreds of organizations, and stakeholders from every continent (except Antarctica) representing governments, industry, academia, and the public sector, providing credibility and global relevance beyond any single government's perspective.
- Free, Accessible Standards Are Essential for Equitable Security: The choice to publish through ETSI (which produces free standards) rather than ISO (which charges for standards) ensures that small companies, startups, and organizations in developing nations can access critical security guidance regardless of budget constraints.
- Lifecycle Approach Addresses Gap in End-of-Life AI System Management: Unlike existing literature focusing heavily on development and deployment, this standard uniquely addresses the final lifecycle phase (end-of-life disposal of data and models), though this area requires further research and practical guidance.
- Rapid Standardization Is Possible with Proper Process Design: The UK government achieved standard development within ~2-3 years (versus typical 4-5 year timelines) by using targeted processes through ETSI while maintaining quality through rigorous consultation and multistakeholder input, demonstrating that speed and quality are not mutually exclusive.
- Standards Must Balance Voluntary Adoption with Regulatory Alignment: Standards are most effective when voluntary, but this standard was upgraded from Technical Specification (TS 104223) to European Standard (EN 304223) to align with EU AI Act requirements, showing how standards can facilitate regulatory compliance while remaining globally applicable.
- Continuous Updating and Adaptive Framework Necessary for Emerging Technology: The speaker acknowledged that standards inherently lag technological development and proposed supplementary guidance (annexes) rather than full rewrites to maintain relevance as AI capabilities and threats evolve, without waiting years for formal standard updates.
- Security Standards Focus Narrowly on Data/System Protection, Not Broader Ethical Issues: The standard deliberately scopes itself to cybersecurity of AI systems rather than data minimization principles, ethics, or consent issues—acknowledging these are important but fall under separate standards and ethical frameworks being developed elsewhere.
Notable Quotes or Statements
- On Standards as Foundation for Interoperability: "Standards are useful for compatibility and interoperability reasons... Everything works purely because different companies that have different IP and different technologies all are working together using standards."
- On Baseline vs. Maximum Standards: "This is baseline. You could go further. However, from our perspective is that we want as many organizations at whatever level you have to meet these requirements... this will at the very least provide you those good baseline security requirements."
- On Speed vs. Quality Trade-off: "We knew that through this particular process we could develop a standard reasonably quickly but not at the cost of the quality of the standard because there is a risk if you develop a standard too quickly it's just poor quality."
- On Accessibility: "Standards should be free, that should be available to anyone. Whether you're a big company with billions of dollars or whether you're a small company or just a person in their basement, there should be available to everyone."
- On AI's Unique Challenge to Standards: "AI is one of those technologies that... just at least in these last few years new things are cropping up, new ways of actually how you use AI... there are probably areas where we would want to do more work."
- On Multistakeholder Approach: "This is our advice but also it is in part or more so other people's advice but just us as UK government providing that kind of that drive through the different organizations."
Speakers & Organizations Mentioned
Government & Government Agencies:
- UK National Cyber Security Centre (NCSC)
- UK Department for Science, Innovation, and Technology
- Cybersecurity and Infrastructure Security Agency (CISA, US Government)
- 20+ other national cybersecurity agencies (globally represented)
- State Bank of India (questioner)
- Singapore Government (referenced work on AI verification)
Standards Development Organizations:
- ETSI (European Telecommunications Standards Institute) — primary organization for this standard
- ISO (International Organization for Standardization)
- ITU (International Telecommunication Union) — UN specialized agency
- 3GPP (Third Generation Partnership Project) — mobile standards
- W3C (World Wide Web Consortium) — web standards
- IETF (Internet Engineering Task Force) — internet standards and protocols
- CEN/CENELEC (European standards bodies)
Other Organizations:
- Paradigm Initiative (questioner)
- Universities and research centers
- Think tanks
- Academic institutions globally
Technical Concepts & Resources
AI-Specific Security Threats Identified:
- Adversarial Inputs: Maliciously crafted data designed to trick AI models into producing incorrect/harmful outputs
- Data Poisoning: Insertion of corrupted or malicious data into training sets to compromise model integrity
- Model Inversion & Membership Inference: Attacks that extract sensitive information about training data from model outputs
- Prompt Injection: Exploiting input prompts to override system rules or produce unintended outputs
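The data poisoning threat above can be made concrete with a toy sketch. This is not drawn from the standard; the dataset, the nearest-centroid "model," and all numbers are invented for illustration. The idea it demonstrates is real, though: an attacker who can inject mislabeled points into a training set drags a class centroid toward the other class's region, flipping predictions at inference time.

```python
# Toy illustration of training data poisoning (all data invented).
# A nearest-centroid classifier is trained twice: once on clean data,
# once on data where an attacker injected points with false labels.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(data):
    """data: list of ((x, y), label); returns one centroid per label."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label whose centroid is nearest to the point."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

clean = [((0.0, 0.0), 0), ((0.2, 0.1), 0), ((0.1, 0.3), 0),
         ((1.0, 1.0), 1), ((0.9, 1.1), 1), ((1.2, 0.8), 1)]

# Poisoning: the attacker inserts far-away points falsely labeled 0,
# dragging the class-0 centroid out of its true region.
poisoned = clean + [((2.0, 2.0), 0)] * 3

test_points = [((0.05, 0.05), 0), ((1.05, 1.05), 1)]

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    model = train(data)
    acc = sum(predict(model, p) == label for p, label in test_points)
    print(f"{name}: {acc}/{len(test_points)} test points correct")
```

The structural point scales to real systems: attacker-controlled training data shifts decision boundaries. The standard's development-phase principles (securing the supply chain, documenting data, models, and prompts) target exactly this class of risk.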
Standard Documents:
- EN 304223 (European Standard) / TS 104223 (Technical Specification) — Primary standard with 13 principles and security requirements
- TR 104128 (Technical Report) — Implementation guide with practical examples for different AI system types (chatbots, fraud detection, LLM providers, open-access models)
- TS 104216 (Conformity Assessment) — In development; provides frameworks for assessing and proving compliance with security requirements (self-assessment or third-party verification options)
The 13 Security Principles (Organized by Lifecycle Phase):
| Phase | Principles |
|---|---|
| Design | 1. Raise awareness of AI security threats/risks; 2. Design for security + functionality; 3. Evaluate threats & manage risks; 4. Enable human responsibility |
| Development | 5. Identify, track, protect assets; 6. Secure infrastructure; 7. Secure supply chain; 8. Document data/models/prompts; 9. Conduct testing/evaluation |
| Deployment | 10. Communication & processes with end users/affected entities |
| Maintenance | 11. Maintain updates/patches/mitigations; 12. Monitor system behavior |
| End-of-Life | 13. Ensure proper data & model disposal |
Key AI Technology Areas Covered:
- Telecommunications security
- Quantum technologies & post-quantum cryptography (PQC)
- Semiconductors
- AI systems broadly
- Internet security and protocols
Related Regulatory Frameworks:
- EU AI Act — Referenced as regulatory driver for EN (vs. TS) designation
- Multistakeholder Governance Model — Emphasis on consensus-based, transparent, open processes across governments, industry, academia, and public
Standards Development Methodologies:
- Industry-led, multistakeholder consensus processes
- Global consultation periods (~4 months minimum)
- Iterative refinement based on feedback
- Mapping to existing international frameworks and standards for complementarity
- Conformity assessment mechanisms (self or third-party)
Emerging Concepts:
- Secure by Design: Security built into AI development from inception, not retrofitted
- Baseline vs. Aspirational Standards: Mandatory ("shall") vs. voluntary ("should") provisions
- Lifecycle Security: Comprehensive approach from design through decommissioning
Note on Transcript Quality: This transcript contains notable audio/transcription artifacts (repetition of phrases, unclear audio sections marked by breaks). The summary prioritizes the speaker's intended meaning based on context and clarifications provided during Q&A. The presentation was delivered at the Delhi AI Impact Summit by a UK government cybersecurity official whose name was not clearly identified in the transcript.
