EU researchers propose reach as a new measure of systemic risk for general-purpose AI models

A new report by the European Commission’s Joint Research Centre (JRC) proposes that the number of people using a general-purpose AI (GPAI) model – its ‘reach’ – could serve as a key indicator of systemic risk under the EU AI Act. The study offers a technical framework for measuring and reporting model reach to help regulators identify AI systems whose widespread use may influence society on a large scale.

The European Commission’s Joint Research Centre (JRC) has published a scientific study that could reshape how the European Union assesses the risks of artificial intelligence. The report, General-Purpose AI Model Reach as a Criterion for Systemic Risk, argues that the extent to which an AI model is used – its ‘reach’ – should be taken into account when determining whether it poses systemic risks under the EU’s Artificial Intelligence Act.

The concept of “reach” is defined as the number of people who directly interact with a model, for example, through chatbots, programming interfaces, or image-generation tools. The report’s authors, led by Paul Röttger and Dirk Hovy, propose that large-scale use alone can make even non-advanced AI models a potential source of societal harm. The study was released in 2025 as part of a broader series of external scientific reports supporting the implementation of Chapter V of the AI Act, which regulates general-purpose AI systems.

According to the report, systemic risks may arise not only from cutting-edge capabilities – such as the ability to generate harmful code – but also from the widespread influence of commonly used AI tools. When millions of people interact daily with biased or misleading models, the cumulative effects could subtly alter public understanding, reinforce stereotypes, or spread misinformation.

To capture this type of risk, the researchers propose a practical reporting system in which AI providers would submit monthly data on the number of users accessing their models through interfaces and application programming interfaces (APIs). These metrics, already tracked by most companies, would then be aggregated by the European AI Office to determine when a model’s reach becomes large enough to warrant classification as a model with systemic risk.

The study outlines several possible thresholds. Smaller providers with only a few thousand users would not be required to report, while models reaching millions of people could trigger additional oversight. The goal, according to the authors, is to balance comprehensive coverage with minimal administrative burden.
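The tiered scheme described above can be sketched in a few lines of code. The following is a minimal illustration only: the thresholds (`10_000` and `10_000_000` monthly users) and the function name are hypothetical placeholders, since the report discusses possible thresholds but this article does not specify exact figures.

```python
# Illustrative sketch of a tiered, reach-based classification scheme.
# The threshold values below are HYPOTHETICAL placeholders, not figures
# from the JRC report or the AI Act.

REPORTING_THRESHOLD = 10_000          # hypothetical: below this, no reporting duty
SYSTEMIC_RISK_THRESHOLD = 10_000_000  # hypothetical: above this, extra oversight

def classify_reach(monthly_users: int) -> str:
    """Map a model's monthly user count to a regulatory tier."""
    if monthly_users < REPORTING_THRESHOLD:
        return "exempt from reporting"
    if monthly_users < SYSTEMIC_RISK_THRESHOLD:
        return "reporting required"
    return "candidate for systemic-risk classification"

# Example: aggregate per-channel user counts (e.g. chat interface and API),
# as a provider's monthly submission might, then classify the total.
monthly_users_by_channel = {"chat_interface": 30_000_000, "api": 12_000_000}
total = sum(monthly_users_by_channel.values())
print(classify_reach(total))  # candidate for systemic-risk classification
```

The point of the sketch is simply that such a scheme is cheap to operate: the inputs are user counts that, as the report notes, most providers already track.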

The JRC report also highlights how this approach fits within the legal framework of the EU AI Act. Recitals 110 and 111 of the Act already state that systemic risks increase with both model capabilities and model reach, and Annex XIII explicitly mentions reach as a possible basis for classification. By operationalising this concept, the JRC aims to provide regulators with measurable data to support future enforcement.

However, the study recognises several challenges. Expanding the definition of systemic risk to include model reach could increase the number of providers subject to regulation and require regulators to develop expertise in monitoring societal impacts such as bias and misinformation. The authors also stress that political considerations may influence how far the EU wishes to extend oversight beyond frontier models.

Despite these uncertainties, the report marks a significant step in the EU’s effort to build an evidence-based approach to AI governance. It complements other JRC studies on AI capabilities, compute thresholds, and safety benchmarks, all aimed at helping the EU translate the principles of the AI Act into practical regulatory tools.

By introducing the idea of ‘reach’ as a measurable criterion, the European Commission’s researchers are expanding the conversation about what makes AI risky – not just how powerful it is, but how widely it is used.