Second International AI Safety Report 2026 warns of expanding capabilities and uneven risk management
A new international report reviews advances in general-purpose AI and highlights growing risks, from cyber misuse to reliability failures, while urging evidence-based policymaking.
The second International AI Safety Report 2026 has been released, offering an updated assessment of what today’s most advanced general-purpose AI systems can do, what risks they pose, and how those risks might be managed. The report was prepared with guidance from more than 100 independent experts nominated by over 30 countries and international organisations, including the EU, OECD, and the UN.
The report focuses on general-purpose AI systems, meaning models that can perform a wide range of tasks rather than being limited to one function. It examines so-called emerging risks, those linked to the most capable systems. Some of these risks are already visible in real-world harms, while others remain uncertain but potentially severe.
It identifies three broad categories of risk: malicious use, malfunctions, and systemic risks.
Malicious use includes scams, fraud, blackmail, and the creation of AI-generated non-consensual intimate imagery. AI tools are also being used to identify software vulnerabilities and generate malicious code. In biological and chemical domains, advanced systems can provide detailed technical information that could lower barriers to misuse, though developers have introduced additional safeguards in response.
Malfunctions refer to technical failures. Current systems may fabricate information, produce incorrect code, or provide misleading advice. As AI systems become more autonomous, human oversight becomes more difficult. The report notes that while safeguards reduce failure rates, they remain insufficient for many high-stakes uses.
Systemic risks concern broader societal impacts. These include potential labour market disruption, shifts in demand for certain occupations, and risks to human autonomy. Early evidence shows no overall employment decline, but some reduction in demand for early-career roles in AI-exposed fields. The report also cites concerns about ‘automation bias’, where users may trust AI outputs without sufficient scrutiny, and notes the growing use of AI companion applications.
On capabilities, the report notes continued rapid progress, particularly in complex reasoning tasks, mathematics, science, and software engineering. However, performance remains uneven. Systems may solve advanced problems yet struggle with simpler tasks. Future progress toward 2030 remains uncertain and could slow, plateau, or accelerate.
The report highlights technical and institutional challenges in managing risks. Evaluations conducted before deployment do not always predict real-world behaviour, and developers often retain proprietary control over key information. While many companies have published voluntary safety frameworks, only a limited number of jurisdictions have begun formalising risk management requirements in law.
The report concludes that layered safeguards, combined with societal resilience measures such as improved detection tools and stronger institutional capacity, will be necessary. It does not propose specific policies, but aims to provide policymakers with a consolidated evidence base to inform regulatory and governance decisions in a rapidly evolving technological environment.
