New ETSI standard sets global baseline for securing AI systems

ETSI has released a groundbreaking standard establishing the first global cybersecurity baseline for AI systems. ETSI TS 104 223 sets out clear, actionable requirements across the entire AI lifecycle, aiming to make AI not just powerful, but secure and trustworthy.

The European Telecommunications Standards Institute (ETSI) has released a landmark technical specification, ETSI TS 104 223 V1.1.1, establishing the first comprehensive set of baseline cybersecurity requirements for artificial intelligence (AI) models and systems. As AI becomes integral to critical infrastructure and daily life, the new specification addresses the urgent need for security measures tailored to the unique risks AI technologies present.

Unlike traditional IT systems, AI systems are exposed to distinct vulnerabilities such as data poisoning, adversarial attacks, model inversion, and indirect prompt injection. ETSI’s new standard focuses specifically on these threats, offering a structured framework that guides organisations through securing AI across its full operational lifecycle: secure design, secure development, secure deployment, secure maintenance, and secure end of life.

The specification imposes detailed obligations on key stakeholders – developers, system operators, data custodians, end-users, and affected entities – emphasising proactive threat modelling, secure software supply chain management, continuous behavioural monitoring, and responsible data and model disposal. Among the key requirements are the need for role-specific AI security training, the inclusion of human oversight capabilities in system design, and strict protection of sensitive training and operational data.

ETSI TS 104 223 is closely aligned with internationally recognised frameworks, including ISO/IEC 27001 for information security, the NIST AI Risk Management Framework, and guidance from the European Union’s AI Act (Regulation 2024/1689). It also references cybersecurity initiatives from bodies such as ENISA, the World Economic Forum, and the G7 Hiroshima Process Code of Conduct for advanced AI systems.

The specification stresses that cybersecurity for AI cannot be an afterthought. Organisations must integrate security considerations from the earliest design stages through deployment and ongoing operation. It also highlights the necessity for transparency, urging developers and operators to document data sources, model behaviours, and potential failure modes to support risk management and compliance efforts.

By codifying these requirements, ETSI aims to provide a practical and enforceable foundation for AI cybersecurity at a time when the reliability and trustworthiness of AI systems are under increasing public and regulatory scrutiny. ETSI TS 104 223 is expected to serve as a key reference for industries deploying AI technologies, helping ensure that AI systems are not only functional and efficient but also resilient and secure.