OECD introduces reporting system for G7 Hiroshima AI code of conduct implementation

The OECD has launched the first global framework to monitor how organisations apply the G7 Hiroshima AI Code of Conduct, promoting transparency and accountability in advanced AI development. This voluntary reporting system offers a standardised way to share AI risk management practices and encourages global consistency in AI governance.

The Organisation for Economic Co-operation and Development (OECD) has introduced the first global framework to monitor how organisations implement the G7 Hiroshima AI Process International Code of Conduct for organisations developing advanced AI systems. The framework aims to support transparency, accountability, and trust in the development and deployment of advanced AI worldwide.

The initiative builds on the G7 Hiroshima AI Process, launched during Japan’s G7 Presidency in 2023, which established the International Guiding Principles and the International Code of Conduct for organisations working with advanced AI. In 2024, under Italy’s G7 Presidency, the G7, with the OECD’s support, developed the Reporting Framework to let organisations voluntarily share how they are applying the Code of Conduct.

The framework offers a standardised method for organisations to report their AI risk management practices, including risk assessment, incident reporting, and transparency measures, making it possible to compare approaches across organisations. Participation is voluntary and focuses on organisations developing the most advanced AI systems, including foundation models and generative AI. Participants are asked to complete a structured questionnaire covering areas such as risk identification, information security, transparency, governance, content authentication, AI safety research, and the advancement of human and global interests.
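For readers who think in data structures, a minimal illustrative sketch of how a submission organised around these questionnaire areas might be modelled is shown below. The area names are taken from the description above; the `HiroshimaReport` class, its fields, and the `missing_areas` helper are hypothetical conveniences, not part of the OECD framework, which is a questionnaire rather than a software schema.

```python
from dataclasses import dataclass, field

# Reporting areas named in the article's description of the questionnaire.
REPORTING_AREAS = [
    "risk identification",
    "information security",
    "transparency",
    "governance",
    "content authentication",
    "AI safety research",
    "advancement of human and global interests",
]

@dataclass
class HiroshimaReport:
    """One organisation's voluntary annual submission (illustrative only)."""
    organisation: str
    reporting_year: int
    # Free-text answers keyed by reporting area.
    answers: dict[str, str] = field(default_factory=dict)

    def missing_areas(self) -> list[str]:
        """Return questionnaire areas this submission has not yet addressed."""
        return [area for area in REPORTING_AREAS if area not in self.answers]
```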

Reports submitted through the framework will be publicly available on a dedicated OECD website. This public disclosure aims to strengthen transparency and allow stakeholders to compare risk management practices across organisations. Organisations are encouraged to update their reports annually, with the first round of submissions expected by 15 April 2025. While participating organisations will be listed under the Hiroshima AI Process Brand on the OECD.AI webpage, the OECD emphasises that this listing does not amount to an endorsement or certification.

Development of the framework involved input from leading AI companies, academia, and civil society, and it was piloted with 20 organisations across 10 countries. Major firms such as Amazon, Anthropic, Fujitsu, Google, Microsoft, NEC, NTT, OpenAI, Salesforce, and SoftBank have already pledged to take part in the inaugural reporting cycle.

The OECD’s Reporting Framework represents the first global mechanism for collecting comparable information on corporate AI risk management. It is designed to strengthen international efforts to build safe, secure, and trustworthy AI systems. By aligning with the Hiroshima Code of Conduct and integrating with other risk management systems, the framework promotes consistency and interoperability across different international AI governance efforts.

Although the framework is not legally binding and differs from regulatory measures such as the EU AI Act or U.S. state legislation, it shares their focus on core areas such as risk management and transparency. It draws on the OECD AI Principles, which have already shaped many major AI policy developments worldwide.
