OECD publishes new report on governing with artificial intelligence in government
The OECD has released a new report, Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions, which analyses 200 use cases across 11 government functions to assess how AI is reshaping public administration.

The Organisation for Economic Co-operation and Development (OECD) has published a comprehensive new report, Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions, approved and declassified by the Public Governance Committee on 5 September 2025. The study examines how artificial intelligence (AI) is being adopted across government operations, identifies benefits and risks, and proposes a framework for ensuring trustworthy use of the technology in public administration.
The report highlights that governments around the world are increasingly turning to AI to modernise internal processes, deliver public services, and enhance decision-making. Drawing on analysis of 200 use cases across 11 core government functions, it finds that AI applications are most visible in public service delivery, justice administration, and civic participation. More limited progress has been made in areas such as tax administration, civil service reform, and policy evaluation. Many initiatives remain at the pilot stage, reflecting both the opportunities and the challenges governments face in scaling AI deployment.
According to the OECD, the benefits of AI in government fall into four main categories. First, the automation of repetitive processes and the tailoring of services to individual needs can improve efficiency and responsiveness. Second, AI can strengthen decision-making and forecasting capabilities, helping governments allocate resources more effectively and respond to emerging issues. Third, the technology can support accountability and anomaly detection, for example in fraud prevention or regulatory oversight. Finally, AI has the potential to provide new opportunities for citizens and businesses by opening access to government-developed systems.
At the same time, the report stresses that risks are significant. Biased algorithms, insufficient transparency, and over-reliance on AI could undermine rights, erode trust, or entrench systemic errors. Public service workforce displacement is another concern, particularly if governments use AI to replace rather than augment civil servants’ work. Conversely, failing to adopt AI also carries risks, as it may widen the gap between public and private sector capacities and limit governments’ ability to meet growing citizen demands.
Implementation challenges remain a central theme of the OECD’s analysis. Skills shortages, outdated legacy systems, difficulties in data sharing, and financial constraints all hinder the scaling of successful initiatives. Although many governments have developed AI strategies, the report notes that concrete guidance for implementation is often lacking. Monitoring and evaluation mechanisms are also weak, limiting the ability to measure outcomes or detect risks effectively.
To address these issues, the OECD proposes a framework based on three pillars. Enablers such as governance structures, digital infrastructure, data management, funding, and workforce skills are needed to support adoption. Guardrails—including clear rules, accountability measures, transparency requirements, and oversight bodies—should ensure responsible use. Finally, engagement mechanisms with citizens, civil society, and businesses are essential for designing user-centred and responsive AI systems. The report encourages governments to prioritise applications that deliver high benefits with manageable risks, and to invest in the measurement of both efficiency gains and potential harms.
The report also provides detailed sectoral analyses, examining AI’s current status, challenges, and future potential in tax administration, public financial management, regulatory design, civil service reform, public procurement, anti-corruption, policy evaluation, civic participation, public service delivery, law enforcement, disaster risk management, and justice administration. In each area, the OECD points to both practical examples and lessons learned that can inform broader adoption.
By situating governments not only as regulators and investors but also as users and developers of AI, the OECD underscores the importance of building internal capacities for AI governance. While the report recognises that many efforts are still in their early stages, it emphasises that postponing adoption could leave governments dependent on external actors and unable to shape the trajectory of AI use in the public sector. The study forms part of the OECD's Horizontal Project on Thriving with AI, which seeks to provide evidence and policy guidance for economies and societies adapting to rapid technological transformation.