Council of Europe adopts first-ever international treaty on AI

On 17 May, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted in Strasbourg during the annual ministerial meeting of the Council of Europe’s Committee of Ministers.

The Council of Europe has adopted the first international treaty aimed at ensuring respect for human rights, the rule of law, and democracy in the use of AI systems. The treaty establishes a comprehensive legal framework that addresses the risks associated with AI systems while promoting responsible innovation.

A risk-based approach

Like the EU AI Act, the convention adopts a risk-based approach that applies to the entire lifecycle of AI systems, from design and development to use and decommissioning. This approach requires careful consideration of any potential negative consequences of using AI systems.

Council of Europe Secretary General Marija Pejčinović Burić stated, ‘The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that Artificial Intelligence upholds people’s rights. It is a response to the need for an international legal standard supported by states on different continents which share the same values to harness the benefits of Artificial Intelligence while mitigating the risks. With this new treaty, we aim to ensure a responsible use of AI that respects human rights, the rule of law and democracy.’

Scope and compliance

The treaty applies to AI systems used in both the public and private sectors. It offers parties two compliance options: they can either be directly bound by the convention’s provisions or take other measures to meet the treaty’s standards while adhering to their international obligations regarding human rights, democracy, and the rule of law. This flexibility accommodates the diverse legal systems worldwide.

Transparency and accountability

Key provisions of the treaty include transparency and oversight requirements tailored to specific contexts and risks. Parties must implement measures to identify, assess, prevent, and mitigate possible risks, including evaluating the need for moratoriums, bans, or other appropriate actions when AI uses may conflict with human rights standards.

The treaty mandates accountability and responsibility for any adverse impacts of AI systems, ensuring respect for equality, including gender equality, the prohibition of discrimination, and privacy rights. It also ensures the availability of legal remedies for victims of human rights violations related to AI and procedural safeguards, such as notifying individuals when they are interacting with AI systems.

Protecting democracy

To safeguard democratic institutions and processes, the treaty requires parties to adopt measures ensuring that AI systems are not used to undermine democratic principles, including the separation of powers, judicial independence, and access to justice.

National security

The treaty’s provisions do not extend to national security activities, but these activities must still respect international law and democratic institutions. The convention does not apply to national defence matters or research and development, except when AI system testing may interfere with human rights, democracy, or the rule of law.

Implementation and oversight

To ensure effective implementation, the convention establishes a follow-up mechanism in the form of a Conference of the Parties. Each party is also required to set up an independent oversight mechanism to oversee compliance with the treaty, raise awareness, stimulate public debate, and conduct multistakeholder consultations on AI usage.

Global participation

The convention, the result of two years of work by the intergovernmental Committee on Artificial Intelligence (CAI), involves the 46 Council of Europe member states, the European Union, and 11 non-member states: Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States, and Uruguay. Representatives of the private sector, civil society, and academia participated as observers. The convention is also open to non-European countries and will be legally binding on signatory states.

Criticisms

Despite its historic significance, some view the treaty as falling short of its intended impact. Concerns have been raised about its effectiveness, with critics suggesting that it primarily reaffirms existing practices rather than introducing substantive regulatory measures. The European Data Protection Supervisor (EDPS), the EU’s data watchdog, recently warned that human rights standards may have been compromised under pressure from foreign business interests and argued that the treaty does not go far enough in addressing the risks and challenges posed by AI. The text was significantly weakened from its original version during negotiations in the CoE’s ad hoc committee in charge of the convention, and the EDPS described the result as a ‘missed opportunity to lay down a strong and effective legal framework’ for protecting human rights in AI development.

What is next?

The framework convention will be opened for signature in Vilnius, Lithuania, on 5 September during a conference of Ministers of Justice.