GPAI Code of Practice meeting addresses AI risk forecasting and harmful manipulation

A taskforce meeting on the General-Purpose AI Code of Practice examined how companies should assess risks, including forecasting future capabilities and identifying harmful manipulation scenarios.

The European Commission’s Signatory Taskforce working on the General-Purpose AI (GPAI) Code of Practice held its third meeting, focusing on safety and security measures for advanced AI systems.

The discussions centred on how companies developing large-scale AI models should assess and manage risks. One key topic was forecasting future risks: estimating when AI systems might reach capability levels that could pose systemic risks.

Participants considered methods for producing these forecasts, including combining estimates from different companies to create an overall industry view. These forecasts could be updated regularly and used to inform risk assessments.
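One way such estimates could be combined is a standard forecast-pooling rule. The sketch below is illustrative only, assuming each company reports a probability that a capability threshold is crossed within a given horizon; the article does not specify which aggregation method the taskforce discussed. It uses the geometric mean of odds, a common pooling rule.

```python
import math

def pool_forecasts(probabilities):
    """Aggregate individual probability forecasts into one pooled estimate
    using the geometric mean of odds (one common pooling rule).

    This is a hypothetical sketch; the taskforce's actual method, if any,
    is not described in the source.
    """
    # Convert each probability to odds, take their geometric mean,
    # then convert the pooled odds back to a probability.
    odds = [p / (1 - p) for p in probabilities]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

# Hypothetical example: three developers estimate the chance that a
# capability threshold is reached within two years.
pooled = pool_forecasts([0.10, 0.25, 0.40])
```

The pooled value always lies between the lowest and highest individual estimate, so regular re-polling of signatories would yield an industry-level figure that can be updated as new estimates arrive.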

Another topic was harmful manipulation, which refers to the potential for AI systems to influence users in ways that cause harm. The taskforce discussed how to define realistic scenarios in which such risks could arise, for example through chatbots, applications, or widely distributed AI-generated content.

These scenarios are intended to guide how AI systems are tested. By simulating real-world conditions, developers can better assess whether systems might contribute to harmful outcomes.

The meeting also included input from research organisations, which presented approaches to forecasting and risk evaluation. Further work is expected to refine these methods and clarify how they will be applied in practice.
