
Council of Europe Convention on Artificial Intelligence

  • Writer: Edmarverson A. Santos
  • Jun 18
  • 14 min read

The Council of Europe Convention on Artificial Intelligence marks a pivotal legal milestone in the regulation of artificial intelligence within a human rights framework. Opened for signature in Vilnius on September 5, 2024, and registered as CETS No. 225, this binding international treaty, formally titled the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, is the first of its kind to establish clear obligations for the design, development, and use of AI systems. It seeks to ensure that AI does not undermine democracy, the rule of law, or fundamental freedoms.


The Convention arrives at a critical moment. As AI systems become increasingly integrated into decision-making across public administration, security, healthcare, and education, the risks to individual autonomy, data protection, non-discrimination, and civic participation have intensified. The need for an enforceable framework was urgent—one that could balance innovation with safeguards rooted in human dignity.


Unlike ethical guidelines or sectoral regulations, the Council of Europe Convention on Artificial Intelligence binds its parties to adopt concrete legislative and administrative measures. These measures must align AI activities with international human rights obligations throughout the entire AI lifecycle—from design to deployment and decommissioning.


The Convention's objectives are threefold:

  • First, it aims to prevent and mitigate negative impacts of AI on human rights, democratic integrity, and access to justice.

  • Second, it promotes responsible technological advancement through transparency, accountability, and oversight.

  • Third, it provides a foundation for multilateral cooperation, allowing member and non-member states alike to align their domestic policies with shared European values.


The Council of Europe has long been a standard-setting institution in fields such as data protection, anti-discrimination, and digital governance. This Convention builds on that tradition, integrating legal instruments such as the European Convention on Human Rights (1950), Convention 108 on Data Protection (1981, since modernized as Convention 108+), and the jurisprudence of the European Court of Human Rights. In this sense, it is both forward-looking and anchored in an established legal culture.


By framing AI governance through the triad of human rights, democracy, and the rule of law, the Convention sets itself apart from market-oriented or purely technical frameworks. It asserts a normative vision of technology as something that must serve public interest—not displace or dilute it.


The article that follows will explore the treaty’s scope, its legal mechanisms, key rights-based obligations, risk mitigation tools, and its global relevance.


Scope and Legal Definitions


The Council of Europe Convention on Artificial Intelligence establishes a precise and comprehensive framework by defining both the scope of its application and the key terminology necessary for legal clarity and international harmonization. This section plays a foundational role in ensuring the Convention is adaptable, enforceable, and interoperable across jurisdictions.


Defining “Artificial Intelligence System”

The Convention defines an “artificial intelligence system” as a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments. This definition, formally set out in Article 2, covers a wide spectrum of technologies, from basic algorithmic tools to highly adaptive autonomous systems. It also recognizes that AI systems vary in their degree of autonomy and in their capacity to adapt after deployment.


This flexible yet precise definition ensures that the Convention remains relevant across diverse technical applications, including future developments in machine learning and neural network models. It allows for both current use cases (e.g., facial recognition, predictive policing, automated hiring) and those still emerging.
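To make the breadth of this definition concrete, here is a deliberately trivial sketch of a machine-based system that infers an output (a recommendation) from inputs in the sense of Article 2. The rule, feature names, and threshold are hypothetical illustrations, not anything drawn from the Convention; the point is that even a system this simple falls within the definition once its outputs influence decisions about people.

```python
# Illustrative only: a minimal "machine-based system" that infers an output
# (a hiring recommendation) from inputs, in the sense of the Article 2
# definition. The rule and threshold below are hypothetical.

def hiring_screen(years_experience: float, test_score: float) -> dict:
    """Infer a screening recommendation from applicant inputs."""
    # A fixed linear rule: even this simple inference step places the
    # system within the Convention's definition when its outputs
    # influence physical or virtual environments.
    score = 0.6 * test_score + 0.4 * min(years_experience, 10) / 10
    return {
        "recommendation": "interview" if score >= 0.5 else "reject",
        "score": round(score, 3),
    }

if __name__ == "__main__":
    print(hiring_screen(years_experience=3, test_score=0.7))
    # -> {'recommendation': 'interview', 'score': 0.54}
```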


Legal and Jurisdictional Scope

The Convention’s scope is delineated in Article 3 and is structured to apply broadly to activities within the entire lifecycle of AI systems—but with important distinctions regarding actors and exceptions.


Covered Activities:


The Convention applies to:

  • All activities within the AI lifecycle that could interfere with human rights, democracy, or the rule of law. This includes design, development, deployment, and decommissioning.

  • Public authorities or private entities acting on behalf of public bodies, ensuring state responsibility even when outsourcing occurs.

  • Private actors, to the extent specified in national declarations: each State must declare whether and how it will apply the Convention’s principles to the private sector, whether through horizontal regulation or other legal means.


Key Exemptions:


  • National security: States are not required to apply the Convention to activities conducted solely for national security purposes, provided they remain consistent with international law.

  • Research and development: The Convention does not apply to R&D phases unless the activities affect or could affect human rights or democratic structures—such as during testing with human subjects.

  • National defense: Explicitly excluded, placing military AI systems outside the Convention’s jurisdiction.


These limitations strike a balance between protecting fundamental rights and respecting state sovereignty and strategic interests.


Obligations Across Sectors

To operationalize the Convention’s broad scope, each Party must adopt tailored legislative, administrative, or other measures depending on the severity and likelihood of adverse AI impacts. This requirement ensures that:


  • Low-risk uses are not overregulated.

  • High-risk applications are subject to stringent controls, even if used in non-public sectors.


Such a differentiated approach promotes legal proportionality while reinforcing the Convention’s central objective: ensuring that AI technologies, regardless of their origin or purpose, are aligned with democratic and rights-based standards.
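As a rough, hypothetical illustration of this graduated logic, the sketch below maps assumed severity and likelihood ratings to tiers of regulatory measures. The Convention prescribes the proportionality principle, not these tiers or thresholds, which each Party would calibrate in its own law.

```python
# Hypothetical sketch of the Convention's graduated, risk-based approach:
# measures scale with the severity and likelihood of adverse impacts.
# Tier names and thresholds are illustrative assumptions, not treaty text.

SEVERITY = {"minimal": 0, "moderate": 1, "serious": 2}
LIKELIHOOD = {"unlikely": 0, "possible": 1, "probable": 2}

def required_measures(severity: str, likelihood: str) -> str:
    risk = SEVERITY[severity] + LIKELIHOOD[likelihood]
    if risk >= 3:
        return "stringent controls: pre-deployment testing, audits, redress"
    if risk == 2:
        return "targeted safeguards: transparency and documentation duties"
    return "light-touch oversight: baseline monitoring only"

print(required_measures("serious", "probable"))   # high-risk application
print(required_measures("minimal", "unlikely"))   # low-risk use
```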


Strategic Flexibility

Importantly, the Convention permits countries to submit formal declarations detailing their national implementation strategies—especially regarding the private sector. This mechanism provides legal transparency and allows for varied national models, as long as they are consistent with the Convention’s object and purpose.


In conclusion, the Convention’s definition of AI and its carefully constructed scope establish a legal bedrock for future interpretation, compliance, and enforcement. By addressing both the technical and institutional dimensions of AI governance, it ensures that legal certainty and adaptability can coexist within a unified human rights framework.


Foundational Rights and Obligations


The Council of Europe Convention on Artificial Intelligence lays down a core set of legal principles aimed at aligning AI development and deployment with Europe’s fundamental values. These foundational rights and obligations form the normative heart of the Convention, ensuring that artificial intelligence systems not only serve technological progress but also safeguard dignity, justice, and institutional integrity.


Protecting Human Rights and Rule of Law

Article 4 requires each State Party to adopt or maintain measures ensuring that activities throughout the AI lifecycle comply with international and domestic human rights obligations. These include rights enshrined in treaties such as the European Convention on Human Rights, the International Covenant on Civil and Political Rights, and other human rights instruments referenced in the preamble.


AI systems must not result in unlawful surveillance, arbitrary decision-making, or discrimination. The Convention emphasizes the duty of states to prevent such harms—not only to respond after they occur.


Article 5 reinforces this by obliging States to ensure that AI is not used to erode:


  • Democratic institutions, such as parliaments, courts, or electoral systems;

  • Access to justice, including fair trial rights;

  • Public discourse, protecting individuals’ ability to form and express opinions freely.


These provisions aim to counter the misuse of AI in ways that could undermine judicial independence, civic freedoms, or political participation.


Key Legal Principles for AI Systems

Articles 6 to 13 establish common principles that apply to all Parties and across all stages of the AI system lifecycle. These form a legal baseline for AI regulation under the Convention.


1. Human Dignity and Autonomy (Art. 7)

AI systems must not reduce individuals to passive data points. States are required to protect autonomy by ensuring informed interactions and safeguarding decision-making capacity.


2. Transparency and Oversight (Art. 8)

States must guarantee that AI systems, especially those used in sensitive areas, operate transparently. This includes the following duties (a minimal sketch of the first one follows the list):


  • Disclosing when AI-generated content is presented;

  • Establishing oversight bodies capable of auditing algorithmic decisions.
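A minimal sketch of the disclosure duty might look like the following; the wrapper and its field names are assumptions for illustration, not a format the Convention prescribes.

```python
# Hypothetical Article 8-style disclosure wrapper: content presented to the
# public carries a machine-readable marker that it is AI-generated.
import json

def label_ai_output(text: str, system_id: str) -> str:
    return json.dumps({
        "content": text,
        "ai_generated": True,        # disclosure duty
        "system_id": system_id,      # supports external audit
    })

print(label_ai_output("Your application is incomplete.", "chat-assistant-v1"))
```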


3. Accountability and Responsibility (Art. 9)

The Convention mandates that clear responsibility must be assigned when AI systems cause harm. This includes documenting decision chains and ensuring that those affected can challenge outcomes.
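What “documenting decision chains” could mean in practice is sketched below with a hypothetical record structure; the Convention mandates the outcome (traceability and contestability), not any particular schema.

```python
# Hypothetical decision-chain record supporting Article 9-style
# accountability: which system produced a decision, from which inputs,
# so that affected persons can challenge the outcome.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system_id: str            # which AI system produced the output
    model_version: str        # exact version, for reproducibility
    inputs: dict              # data the inference relied on
    output: str               # the decision or recommendation
    human_reviewer: str | None = None   # responsible official, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    system_id="benefits-eligibility-v2",
    model_version="2.3.1",
    inputs={"household_size": 4, "declared_income": 21000},
    output="claim flagged for manual review",
    human_reviewer="case_officer_114",
)
print(record)
```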


4. Equality and Non-Discrimination (Art. 10)

AI systems must respect equality and actively avoid reinforcing structural biases. This includes:


  • Prohibiting discrimination under international and national law;

  • Taking affirmative action to address digital inequalities, including gender-based disparities.


5. Privacy and Data Protection (Art. 11)

State Parties must ensure that AI systems comply with privacy standards, such as those outlined in Convention 108+ and GDPR. AI must not be used to bypass consent or infringe on personal data rights.


6. Reliability and Trust (Art. 12)

The Convention calls for technical and legal standards that promote the reliability of AI systems, including secure design, robustness against manipulation, and consistent performance across different conditions.


7. Safe Innovation (Art. 13)

States are encouraged to support innovation through supervised testing environments or "regulatory sandboxes"—but only under strict oversight to avoid endangering rights or democratic values.


Rights-Based AI Governance in Practice

These obligations are not abstract. They apply concretely to areas such as:


  • Facial recognition used in law enforcement;

  • Automated systems in judicial sentencing;

  • AI-driven profiling in hiring or social benefits;

  • Chatbots used for public services or customer interactions.


The Convention's structure obliges States to continuously assess how these technologies interact with the rights of vulnerable groups, including children, persons with disabilities, and marginalized communities.


Institutional Obligations

Each Party must also ensure that adequate institutional infrastructure exists to enforce these rights. This includes empowering independent regulators, enabling access to redress, and ensuring cross-sector coordination between AI developers, public authorities, and human rights institutions.


By codifying these foundational obligations, the Council of Europe Convention on Artificial Intelligence provides a robust legal framework to steer AI development in a way that protects people—not merely as data subjects or consumers, but as rights-holding individuals in democratic societies.


Risk Management and Accountability Mechanisms


The Council of Europe Convention on Artificial Intelligence introduces a structured and rights-based approach to the governance of risks posed by AI systems. Instead of adopting a purely technical or market-driven model, the Convention places human rights, democracy, and the rule of law at the center of its risk management strategy.


States are legally required to implement preventative, proportionate, and continuous mechanisms to monitor and mitigate adverse impacts across the AI lifecycle.


Risk and Impact Management Framework

Under Article 16, State Parties must adopt or maintain a comprehensive framework for identifying, assessing, preventing, and mitigating risks and harms arising from AI systems. This framework must be applied to both existing and potential impacts and operate across the system's entire lifecycle—design, deployment, operation, and decommissioning.


Key requirements include:

| Element | Obligation |
|---|---|
| Contextual relevance | Evaluate the intended use and setting of the AI system. |
| Severity and probability | Weigh the seriousness and likelihood of rights impacts. |
| Stakeholder input | Consider perspectives of those affected, especially at-risk groups. |
| Iterative monitoring | Apply risk assessments continuously, not only at launch. |
| Documentation | Maintain clear records of risks, decisions, and responses. |
| Pre-deployment testing | Require testing for high-impact systems before public use. |

This dynamic model goes beyond compliance checklists. It introduces continuous oversight and iterative review, responding to the evolving nature of AI technologies.
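The sketch below illustrates, with assumed field names, how such an iterative framework might be kept: each reassessment appends to the system’s risk history instead of replacing a one-time, pre-launch checklist.

```python
# Illustrative Article 16-style risk log: assessments are repeated across
# the lifecycle and appended, reflecting iterative monitoring rather than
# a single pre-launch checklist. All field names are assumptions.

risk_log: list[dict] = []

def assess(stage: str, context: str, severity: str, likelihood: str,
           stakeholders_consulted: list[str], mitigations: list[str]) -> None:
    risk_log.append({
        "stage": stage,                     # design / deployment / operation
        "context": context,                 # intended use and setting
        "severity": severity,               # seriousness of rights impacts
        "likelihood": likelihood,           # probability of those impacts
        "stakeholders": stakeholders_consulted,
        "mitigations": mitigations,
    })

assess("design", "welfare eligibility triage", "serious", "possible",
       ["claimant advocacy groups"], ["human review of all denials"])
assess("operation", "welfare eligibility triage", "serious", "probable",
       ["ombudsperson"], ["bias re-test after data drift detected"])

print(f"{len(risk_log)} assessments on file")  # documentation duty
```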


Legal Accountability and Redress

The Convention strengthens accountability through enforceable legal remedies. Articles 14 and 15 set out concrete obligations to ensure that individuals can access justice if they are harmed by AI-driven decisions or denied rights.


Article 14 – Right to Remedy

States must guarantee accessible and effective remedies when human rights are violated by AI systems. This includes:


  • Informing affected individuals about how AI influenced decisions;

  • Ensuring that information is sufficient for them to contest outcomes;

  • Providing clear pathways to file complaints with competent authorities.


Article 15 – Procedural Safeguards

States must uphold fair procedures when AI is used in areas that significantly affect rights. For example:


  • People must be able to challenge automated decisions in social welfare, immigration, or law enforcement.

  • Individuals must be notified when they are interacting with AI rather than a human, especially in public-facing services.


Together, these articles create a legal infrastructure that ensures AI does not operate in a regulatory vacuum. Instead, systems must be designed with contestability and due process in mind.
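As one hedged illustration of contestability by design, the following sketch assembles the notice elements Articles 14 and 15 point to: disclosure that AI was involved, an explanation sufficient to contest, and a pathway to challenge. The structure and wording are illustrative assumptions only.

```python
# Hypothetical notice combining Article 14 (remedy) and Article 15
# (procedural safeguards) elements: disclosure of AI involvement,
# an explanation sufficient to contest, and a complaint pathway.

def build_decision_notice(decision: str, ai_role: str,
                          key_factors: list[str],
                          appeal_body: str, appeal_deadline_days: int) -> str:
    factors = "\n".join(f"  - {f}" for f in key_factors)
    return (
        f"Decision: {decision}\n"
        f"An automated system was involved: {ai_role}\n"
        f"Main factors considered:\n{factors}\n"
        f"You may contest this decision before {appeal_body} "
        f"within {appeal_deadline_days} days."
    )

print(build_decision_notice(
    decision="visa application refused",
    ai_role="risk-scoring tool used to prioritise manual review",
    key_factors=["incomplete travel history", "sponsor mismatch"],
    appeal_body="the Immigration Appeals Board",
    appeal_deadline_days=30,
))
```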


Oversight and Enforcement Bodies

To ensure these obligations are meaningful, the Convention mandates the creation of independent oversight mechanisms. Under Article 26, each Party must:


  • Establish or designate authorities with the mandate to monitor AI compliance;

  • Guarantee that these bodies are independent, impartial, and adequately resourced;

  • Promote coordination between multiple regulatory or human rights institutions when needed.


These mechanisms act as gatekeepers, ensuring that AI practices remain lawful, transparent, and responsive to public concerns. They can include national human rights institutions, data protection authorities, or AI-specific regulators.


Flexibility for Bans and Moratoria

Recognizing the potential for irreversible harm, the Convention allows for preemptive regulatory action. Under Article 16(4), States are required to assess:

“The need for a moratorium or ban or other appropriate measures in respect of certain uses of artificial intelligence systems where it considers such uses incompatible with the respect for human rights, the functioning of democracy or the rule of law.”

This clause creates space for precautionary measures in areas like:

  • Predictive policing;

  • Mass biometric surveillance;

  • AI-based decision-making in asylum or social assistance cases.

Such flexibility empowers States to protect their populations even in rapidly evolving technological landscapes.


Summary Table: Risk & Accountability Requirements

| Convention Article | Key Focus Area | Mandatory Actions Required |
|---|---|---|
| Article 14 | Remedies | Access to redress, contestability, documentation |
| Article 15 | Procedural safeguards | Notification, fair procedures, appeal rights |
| Article 16 | Risk management | Context-aware, documented, stakeholder-inclusive assessment |
| Article 26 | Oversight mechanisms | Independent enforcement, regulatory coordination |
| Article 16(4) | Bans and moratoria | Review of AI uses incompatible with rights |

By embedding these risk and accountability provisions into the Convention’s binding legal structure, the Council of Europe has laid the groundwork for a human-centered model of AI governance—one where innovation must always be balanced by institutional responsibility and legal remedy.


Implementation, Global Cooperation, and Limitations


The Council of Europe Convention on Artificial Intelligence outlines detailed mechanisms to ensure that its principles are not only legally binding but effectively implemented at national and international levels. This final operational layer of the Convention addresses institutional readiness, multistakeholder engagement, transnational cooperation, and the legal boundaries of the instrument itself. The Convention is forward-looking, encouraging adaptability while safeguarding its core values.


National Implementation and Domestic Legal Alignment

To fulfill their obligations, Parties must integrate the Convention’s provisions into their domestic legal systems. This includes enacting or amending legislation, creating regulatory bodies, and training personnel involved in AI oversight. Several articles guide this process:


  • Article 17 prohibits any form of discrimination in implementing the Convention.

  • Article 18 requires States to consider the specific needs of children and persons with disabilities, reflecting a commitment to inclusivity and intersectional protection.

  • Article 20 emphasizes promoting digital literacy across all social groups, ensuring that individuals understand how AI systems function and how to exercise their rights in AI-mediated environments.


The Convention also encourages national authorities to use regulatory sandboxes (see Article 13) to test AI systems in controlled settings, balancing innovation with public safeguards.


Public Participation and Transparency

Effective governance of artificial intelligence cannot occur in isolation. The Convention calls for broad social dialogue and participation:


  • Article 19 requires each State to conduct public discussions and multistakeholder consultations on major AI-related issues. These consultations must consider social, legal, economic, ethical, and environmental dimensions.


This provision helps ground national AI strategies in democratic legitimacy and public trust, especially when decisions involve technologies with wide social impact, such as surveillance or automated welfare systems.


International Cooperation and Knowledge Sharing

Recognizing that AI governance must operate beyond national borders, the Convention fosters global cooperation through:


Article 25 – International Cooperation

  • Parties must cooperate in achieving the Convention’s goals.

  • Information on positive and negative impacts of AI, including risks from private sector or research activities, should be shared between States.

  • Non-member States are encouraged to align their national laws with the Convention and may later accede to it, enhancing its global reach.


Article 23 – Conference of the Parties

This follow-up body plays a central role in:

  • Reviewing implementation;

  • Addressing disputes;

  • Making recommendations for legal or technical amendments;

  • Engaging with civil society and technical experts through public hearings and structured dialogue.


Article 26 – Oversight Mechanisms

Each Party must designate independent mechanisms for AI oversight. These bodies must be:

  • Legally empowered;

  • Operationally independent;

  • Adequately funded and staffed.

When multiple authorities are involved (e.g., data protection agencies, competition regulators, human rights institutions), the Convention encourages coordination to avoid regulatory gaps or fragmentation.


Legal Flexibility and Constraints

To preserve the Convention’s universalist aspirations while respecting national legal systems, certain flexibility clauses are included:

| Article | Provision Summary |
|---|---|
| Article 27 | Allows coexistence with bilateral or regional treaties, as long as they do not contradict the Convention. |
| Article 31 | Permits accession by non-member States, subject to unanimous agreement of the existing Parties. |
| Article 34 | Limits reservations: only federal states may invoke specific opt-outs regarding the constitutional allocation of powers. |
| Article 35 | Allows States to denounce the Convention, with a three-month notice period. |

These mechanisms ensure the Convention remains adaptable while preventing misuse or dilution of its legal effect.


Limitations and Exclusions

Despite its comprehensive scope, the Convention contains explicit limitations to preserve clarity and avoid overreach:


  • National security activities are excluded (Art. 3.2), provided they comply with international law.

  • National defense is fully outside the Convention’s scope (Art. 3.4), exempting military applications of AI from the treaty.

  • Research and development involving systems not yet deployed is also excluded, except when testing may impact human rights or democratic structures (Art. 3.3).


These exclusions acknowledge the sensitive nature of defense and security matters while maintaining alignment with human rights standards.


Summary Table: Key Implementation and Cooperation Mechanisms

| Article | Focus Area | Main Requirements |
|---|---|---|
| 17–18 | Inclusive implementation | Protect against discrimination; address vulnerable groups |
| 19 | Public participation | Multistakeholder consultation for AI governance |
| 20 | Digital literacy | Promote understanding of AI risks and rights |
| 23 | Conference of the Parties | Monitoring, amendments, dispute resolution |
| 25 | International cooperation | Exchange of information; support for non-parties |
| 26 | Oversight mechanisms | Independent bodies with legal authority and resources |
| 31 | Accession | Open to non-member states under defined conditions |
| 34–35 | Legal flexibility | Controlled use of reservations and denunciation |
By combining binding obligations with flexible tools for implementation and cooperation, the Council of Europe Convention on Artificial Intelligence sets a precedent for responsible, human-centric AI regulation. It positions democratic governance, legal enforceability, and international solidarity as essential components in the global response to the challenges and opportunities posed by artificial intelligence.




Conclusion: Legal Significance and Future Implications


The Council of Europe Convention on Artificial Intelligence stands as a landmark in international legal governance. It is the first binding multilateral treaty to directly address the human rights, democratic, and rule of law implications of artificial intelligence. More than a policy document, the Convention imposes legal obligations on States, promoting responsible AI governance that is both future-ready and rooted in foundational legal principles.


Its legal significance lies in four core features:


  1. Binding Nature: Unlike soft law instruments (e.g. ethics guidelines or recommendations), this treaty requires ratifying countries to implement enforceable national laws and administrative measures.

  2. Comprehensive Scope: It regulates the full AI lifecycle—from design to decommissioning—and applies across public and private sectors, where applicable. The Convention covers a wide array of technologies and use cases without naming specific tools, ensuring long-term relevance.

  3. Rights-Centered Approach: The Convention goes beyond technical regulation by prioritizing dignity, equality, accountability, and access to justice. It places human rights at the center of all AI-related decision-making.

  4. Global Influence: While rooted in European legal traditions, the Convention invites participation from non-member states. Its cooperative mechanisms and open accession model aim to influence global AI standards—offering a counterbalance to more market-oriented or authoritarian regulatory frameworks.


Looking forward, the Convention’s full impact will depend on implementation fidelity, regulatory capacity, and political will. States must not only transpose the treaty into national law but ensure independent oversight, effective redress mechanisms, and public participation in AI governance. For countries with limited institutional capacity, international cooperation and technical assistance will be essential.


The Convention also leaves room for future expansion through supplementary protocols. These may address specific technologies (e.g. biometric surveillance), sectoral applications (e.g. criminal justice, education), or technical standards. It is likely that as new challenges emerge, the Convention will serve as a foundational legal baseline for regional and global efforts.


In sum, the Council of Europe Convention on Artificial Intelligence reflects a decisive shift in how societies approach technological governance—not as a matter of innovation alone, but as a legal and ethical obligation to protect democratic life and human dignity in the age of algorithms. It sets a precedent that others may follow, shaping the future of AI regulation in a way that puts humanity first.


References:


  1. Council of Europe (2024). Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225). Opened for signature in Vilnius on 5 September 2024. Available at: https://www.coe.int/en/web/artificial-intelligence

  2. Council of Europe (1950). European Convention on Human Rights.

  3. United Nations (1948). Universal Declaration of Human Rights.

  4. United Nations (1966). International Covenant on Civil and Political Rights.

  5. United Nations (1966). International Covenant on Economic, Social and Cultural Rights.

  6. UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence.

  7. European Union (2024). Artificial Intelligence Act, Regulation (EU) 2024/1689.

  8. OECD (2023). OECD Framework for the Classification of AI Systems.
