The European Telecommunications Standards Institute publishes new standard setting baseline cybersecurity requirements for AI systems
The European Telecommunications Standards Institute has released a new European Standard on AI cybersecurity, aiming to provide a common baseline for protecting AI systems against emerging digital threats across their entire lifecycle.
The European Telecommunications Standards Institute (ETSI) has published a new standard, ETSI EN 304 223, establishing baseline cybersecurity requirements for artificial intelligence models and systems. Announced on 15 January 2026, the standard is described by ETSI as the first globally applicable European Standard dedicated specifically to AI cybersecurity.
ETSI EN 304 223 builds on earlier work contained in a Technical Specification, but differs in that it has undergone formal review and approval by national standards organisations. This approval process gives the document the formal status of a European Standard and wider international relevance, extending its potential use beyond Europe.
The standard is designed to address cybersecurity risks that are specific to AI systems. ETSI notes that while traditional software has long required security safeguards, AI introduces new challenges that existing approaches do not fully cover. These include risks such as data poisoning, indirect prompt injection, model manipulation, and vulnerabilities arising from complex data pipelines and operational practices.
To respond to these challenges, ETSI EN 304 223 sets out a structured framework of security requirements that apply across the full lifecycle of an AI system. It defines 13 principles and their associated requirements, grouped into five lifecycle phases: secure design, secure development, secure deployment, secure maintenance, and secure end of life. This lifecycle-based approach aligns with widely used AI development models, supporting consistency with other international standards and guidance.
The standard applies to AI systems intended for real-world deployment, including those based on deep neural networks and generative AI technologies. It is intended to be used across the AI supply chain, providing a common reference point for developers, vendors, system integrators, and operators responsible for deploying and maintaining AI systems.
ETSI highlights that the standard draws on input from a broad range of stakeholders, including international organisations, public authorities, and experts from the cybersecurity and AI communities. This collaborative approach is intended to ensure that the requirements are both technically sound and practically applicable across different sectors.
Further work is planned to complement the standard. ETSI announced that an upcoming Technical Report, ETSI TR 104 159, will apply the principles of EN 304 223 specifically to generative AI. This follow-up document is expected to focus on issues such as deepfakes, misinformation and disinformation, confidentiality risks, and copyright and intellectual property concerns, offering more detailed guidance where needed.
Why does it matter?
According to ETSI, the new standard is intended to support the development of AI systems that are resilient, trustworthy, and secure by design, particularly as AI becomes more deeply embedded in critical services and infrastructure.
