
Autonomous Weapons and International Law

  • Writer: Edmarverson A. Santos
  • Jun 19
  • 15 min read

Updated: Jun 24

I. Introduction: The Emergence of Autonomous Weapons Systems (AWS)


Autonomous weapons and international law are increasingly intersecting as military technology evolves rapidly. Autonomous weapons systems (AWS), defined as weapons capable of selecting and engaging targets without direct human intervention, represent a significant shift in how force is projected in armed conflict. Unlike traditional weapons, which require human operation at every step, AWS function based on complex algorithms and artificial intelligence that enable them to make lethal decisions independently. This development challenges the foundations of accountability, legality, and ethical warfare.


Several major powers—including the United States, Russia, China, Israel, and the United Kingdom—are investing heavily in autonomous military technologies. Current systems like the U.S. Army's Advanced Targeting and Lethality Automated System (ATLAS) and Russia's Vikhr ground combat vehicle demonstrate the strategic emphasis placed on speed, precision, and autonomy. While many of these systems still operate with a human in the loop, the trajectory is clearly moving toward full autonomy. According to SIPRI (Stockholm International Peace Research Institute), at least 30 countries are already deploying or testing weapons with significant autonomous capabilities.


The strategic appeal of AWS is undeniable. These systems promise faster decision-making, increased battlefield efficiency, and a reduced need for human soldiers in high-risk environments. Proponents argue that machines, unburdened by fear or fatigue, could potentially comply more rigorously with the laws of armed conflict than humans. They also cite cost-efficiency and reduced casualties among military personnel as strong justifications for their development.


However, the rise of AWS has triggered widespread concern among legal scholars, ethicists, and human rights organizations. Critics question how machines can reliably distinguish between combatants and civilians, or evaluate the proportionality of an attack. More fundamentally, they raise alarms about delegating life-and-death decisions to algorithms that lack moral agency or legal responsibility. The opacity of machine learning processes—commonly referred to as “black box” systems—further complicates the attribution of accountability in cases of unlawful harm.


International organizations, including the International Committee of the Red Cross (ICRC) and Human Rights Watch, have called for robust regulation or even outright bans on fully autonomous weapons. They argue that existing legal frameworks, particularly international humanitarian law (IHL) and international human rights law (IHRL), are ill-equipped to address the novel challenges posed by AWS. For example, the right to life enshrined in the International Covenant on Civil and Political Rights (ICCPR) may be jeopardized when lethal force is executed by an algorithm that cannot be held accountable.


This article analyzes the current state of autonomous weapons and international law. It explores the relevant legal regimes, the gaps in accountability, and the potential paths forward. The goal is to provide a clear and authoritative understanding of how international legal standards can or should evolve to govern AWS before their deployment becomes widespread and irreversible.


II. Legal Frameworks Governing AWS: IHL vs. IHRL


The emergence of autonomous weapons systems (AWS) demands a reassessment of how international legal regimes apply in armed conflict and beyond. Two primary legal frameworks are relevant: International Humanitarian Law (IHL), which regulates conduct during warfare, and International Human Rights Law (IHRL), which protects individuals at all times, including during armed conflict. Both systems offer distinct principles, and their interaction shapes the legal status of AWS. However, tensions arise when their provisions conflict, especially in determining accountability, legality, and the right to life.


International Humanitarian Law: Rules in Armed Conflict

International Humanitarian Law—also known as the law of armed conflict or the law of war—applies during international and non-international armed conflicts. Rooted in treaties such as the Geneva Conventions and their Additional Protocols, as well as customary international law, IHL regulates the means and methods of warfare and protects persons not or no longer participating in hostilities.


Key IHL principles relevant to AWS include:

| Principle | Definition |
| --- | --- |
| Distinction | Parties must distinguish between combatants and civilians. |
| Proportionality | Attacks must not cause excessive civilian harm relative to military advantage. |
| Necessity | Force must be necessary to achieve a legitimate military objective. |
| Humanity | Prohibits inflicting unnecessary suffering or superfluous injury. |

Autonomous weapons must be capable of applying these principles independently. This raises technical and ethical questions. Can AWS reliably differentiate combatants from civilians in dynamic environments? Can they assess proportionality, which often requires value-based human judgment? While advocates claim that AI may eventually perform such assessments better than humans, current technology falls short of this standard.


Moreover, IHL requires weapon reviews under Article 36 of Additional Protocol I. States must assess whether new weapons comply with their international obligations before deployment. Yet, many countries lack transparent or consistent review mechanisms, and these assessments rarely incorporate human rights considerations.


International Human Rights Law: Protections at All Times

International Human Rights Law applies continuously—in peacetime and during conflict. It is codified in instruments such as the International Covenant on Civil and Political Rights (ICCPR), the European Convention on Human Rights (ECHR), and other regional treaties. Article 6 of the ICCPR recognizes the right to life as a non-derogable right, even in public emergencies.


Key provisions include:


  • Right to life (ICCPR, Article 6): States must not arbitrarily deprive individuals of life.

  • Due process and accountability: Individuals must have access to legal remedies and state actions must be reviewable.


AWS raise significant concerns in this framework. Delegating lethal decisions to algorithms challenges the requirement that life may only be taken in non-arbitrary circumstances and after legal justification. Furthermore, the opacity of machine learning systems complicates the assignment of responsibility, thereby undermining the right to an effective remedy.


Human rights bodies, such as the Human Rights Committee, have clarified that the right to life includes obligations for transparency, accountability, and foreseeability—all of which AWS may fail to satisfy.


The Debate: Concurrent Application or Lex Specialis?

The relationship between IHL and IHRL is debated in legal scholarship and practice. Some argue that IHL, as lex specialis (the more specific law), displaces human rights law in armed conflict. Others advocate for the concurrent application of both regimes, with IHRL providing additional protections when feasible.


The International Court of Justice (ICJ) has addressed this issue in several opinions. In its 1996 Nuclear Weapons Advisory Opinion, the ICJ affirmed that IHRL continues to apply in wartime but must be interpreted in light of IHL. Similarly, in the Wall Advisory Opinion (2004) and the DRC v. Uganda case (2005), the Court upheld that IHRL and IHL are complementary.


The current consensus among international bodies, including the UN Human Rights Committee and the ICRC, supports the concurrent application model. This view emphasizes that the right to life, due process, and accountability do not vanish in armed conflict but must adapt to its conditions.


Legal Tensions in Practice

Despite theoretical clarity, tensions remain in practice:

| Issue | IHL Position | IHRL Position |
| --- | --- | --- |
| Lethal Force Use | Permitted if targeting principles are met | Must not be arbitrary; legal justification required |
| Civilian Harm | Incidental harm allowed if proportionate | Protection from arbitrary or disproportionate state violence |
| Legal Responsibility | Focus on state and command accountability | Individual remedy and access to justice are central |
| Review Mechanisms | Weapons review under Article 36 | Requires human rights impact assessment |

These divergences complicate legal assessments of AWS. For example, if an AWS mistakenly targets civilians due to biased training data, IHL might examine proportionality and foreseeability, while IHRL would emphasize the arbitrariness of the deprivation of life and the state’s duty to prevent it.


III. Human Rights Law and the Use of AWS


The deployment of autonomous weapons systems (AWS) poses critical challenges to the enforcement of international human rights law (IHRL), particularly the right to life and the right to legal accountability. As AWS evolve beyond human oversight in making targeting decisions, their use introduces legal uncertainty over how human rights norms apply in technologically mediated acts of violence. Even in armed conflict, IHRL continues to operate alongside international humanitarian law (IHL), and its application becomes crucial when lethal force is used outside traditional battlefield conditions or where IHL alone provides insufficient safeguards.


The Right to Life and Arbitrary Deprivation

The cornerstone of IHRL in this context is the prohibition against arbitrary deprivation of life, primarily articulated in Article 6 of the International Covenant on Civil and Political Rights (ICCPR). The Human Rights Committee has interpreted this article as imposing both negative and positive obligations on states: to refrain from unlawfully taking life and to establish effective legal and procedural safeguards to prevent such violations.


Autonomous systems inherently complicate this obligation. AWS operate through algorithms trained on large datasets, often opaque even to their developers. These systems may make lethal decisions without human intervention, based on parameters not easily interpretable in real time. This opacity challenges the very notion of what constitutes a lawful or “non-arbitrary” deprivation of life under international law.


To assess whether AWS comply with the ICCPR, the Human Rights Committee in General Comment No. 36 has emphasized the importance of:


  • Legality: The use of lethal force must be prescribed by law.

  • Necessity: Force must be used only when strictly necessary to protect life.

  • Proportionality: The harm inflicted must be proportionate to the threat.

  • Accountability: Victims must have access to effective remedies.


If an AWS kills without a clear legal framework, human supervision, or post-incident accountability, such actions likely violate the ICCPR—even if they technically comply with IHL.


Predictability and Human Oversight

Predictability is essential to legality under human rights law. Systems that cannot explain their behavior or provide traceable decision-making fail to meet international standards. Unlike conventional soldiers or even remote-controlled drones, AWS may act based on complex, self-evolving algorithms. The lack of transparency—commonly referred to as the “black box problem”—means no human operator can fully anticipate when, how, or why a strike will occur once the system is activated.


This unpredictability undermines due process protections and hinders investigations into unlawful killings. It also affects military commanders, who are responsible under both IHL and IHRL for the consequences of decisions made in their chain of command. If they cannot foresee or control the system’s behavior, attributing legal responsibility becomes nearly impossible.


Algorithmic Discrimination and Bias

Human rights law prohibits discrimination in the application of force, including the indirect effects of algorithmic design. Training datasets often reflect societal biases, and when used in military applications, such biases can lead to disproportionate targeting of certain groups. For instance, image recognition systems trained on flawed or unbalanced data may misclassify individuals as combatants based on race, gender, or geographic origin.


This raises serious concerns about violations of both the right to life and the right to non-discrimination under IHRL. Discriminatory targeting—intentional or not—may also violate the principle of equality before the law (ICCPR, Article 26), further eroding international legal standards.
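
To see how such disparities can arise without any explicit discriminatory intent, consider the following toy Python simulation. Everything in it is hypothetical (the feature, the group labels, the numbers); it merely shows that a decision threshold fitted to imbalanced training data can misclassify civilians from an underrepresented group at a much higher rate.

```python
# Toy simulation: a threshold "classifier" fitted to imbalanced data.
# All values are illustrative; "score" stands in for any learned feature.
import random

random.seed(42)

def sample(group, label, n):
    # Assumption for illustration: civilians in underrepresented group B
    # happen to score closer to the "combatant" range on this feature.
    base = 0.7 if label == "combatant" else (0.45 if group == "B" else 0.30)
    return [(group, label, random.gauss(base, 0.1)) for _ in range(n)]

# Imbalanced training set: group B is scarce, so the fitted threshold
# is driven almost entirely by group A's distributions.
train = (sample("A", "civilian", 1000) + sample("A", "combatant", 1000)
         + sample("B", "civilian", 50) + sample("B", "combatant", 50))

def errors(threshold):
    # Count training examples the threshold rule gets wrong.
    return sum((s >= threshold) != (lab == "combatant") for _, lab, s in train)

threshold = min((t / 100 for t in range(101)), key=errors)

# False positives: civilians flagged as combatants, per group.
for g in ("A", "B"):
    civilians = [s for grp, lab, s in sample(g, "civilian", 10000)]
    rate = sum(s >= threshold for s in civilians) / len(civilians)
    print(f"group {g}: civilians misclassified as combatants = {rate:.1%}")
```

The gap appears even though the rule itself is group-blind: the threshold simply optimizes overall accuracy on data that underrepresents group B, which is exactly the failure mode described above.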


Accountability, Transparency, and Remedies

A fundamental element of human rights law is the availability of legal remedies for victims of rights violations. Article 2 of the ICCPR obliges states to provide effective remedies for violations of protected rights. In the context of AWS, accountability becomes diffuse:


  • Who is responsible for an unlawful killing—the state, the commanding officer, the programmer, or the manufacturer?

  • What legal process exists for families of victims to seek justice?

  • Can a machine’s decision be scrutinized in a court of law?


Currently, no international framework provides clear answers. This legal vacuum risks enabling impunity and undermines both state obligations and individual rights. Without enforceable oversight mechanisms, AWS could operate in legal gray zones, particularly in transnational operations, counterterrorism missions, or in conflict zones with weak governance.


Table: Human Rights Risks of AWS Use

| IHRL Principle | Risk Posed by AWS |
| --- | --- |
| Right to life (ICCPR Art. 6) | Arbitrary or unaccountable use of lethal force |
| Due process and remedy (Art. 2) | Lack of redress mechanisms and legal traceability |
| Non-discrimination (Art. 26) | Algorithmic bias leading to disproportionate harm to specific groups |
| Transparency and oversight | Inability to explain or audit autonomous targeting decisions |
| State accountability | Legal ambiguity over responsibility for machine-made lethal decisions |

Emerging Legal Opinions and Institutional Responses

Several human rights institutions and legal scholars have advocated for the prohibition or strict regulation of AWS. The UN Special Rapporteur on extrajudicial, summary or arbitrary executions has warned that allowing machines to make life-and-death decisions crosses a red line for international law. Similarly, the European Parliament has passed resolutions calling for a ban on fully autonomous weapons lacking meaningful human control.


These positions emphasize that without robust legal standards and oversight mechanisms, AWS use contradicts the basic structure of international human rights law. States have an obligation to develop and enforce regulations that prevent arbitrary and discriminatory outcomes—and this duty is non-negotiable.


IV. Accountability and the Legal Vacuum


The growing integration of autonomous weapons systems (AWS) into military operations highlights a profound accountability crisis in international law. These systems introduce a new layer of legal complexity, where decisions to use lethal force are delegated to machines operating with minimal or no human oversight. When AWS cause unlawful harm—such as disproportionate civilian casualties or attacks on protected persons—the current legal framework struggles to answer the central question: who is legally responsible?


The Problem of Diffused Responsibility

Traditional rules of state responsibility, rooted in the International Law Commission’s Draft Articles on State Responsibility, hold that a state is liable for internationally wrongful acts committed by its organs or agents. In the case of AWS, the causal chain between the state and the action becomes increasingly opaque. Unlike a soldier or even a drone operator, an autonomous system may function independently once deployed. Its targeting decisions might derive from machine learning models trained on massive, often untraceable data sets.


As a result, the attribution of a wrongful act becomes difficult. Potentially liable actors include:


  • State authorities who authorized the deployment of AWS.

  • Commanders who activated the system.

  • Programmers who designed the targeting algorithms.

  • Manufacturers who built the physical platforms.

  • Private contractors who contributed components or datasets.


This distribution of responsibility raises the risk of impunity. Without a clear doctrine assigning legal liability for autonomous decisions, states may claim that no human acted unlawfully—shifting blame to the machine, which has no legal personality under international law.


Legal Accountability under International Humanitarian and Human Rights Law

Both IHL and IHRL require legal accountability when rights are violated. Under IHL, commanders are held responsible for the actions of their subordinates through the doctrine of command responsibility. However, this doctrine assumes human agency. When an AWS acts without direct orders and in ways not foreseeable by its operator, applying command responsibility becomes tenuous.


Under IHRL, Article 2 of the ICCPR imposes a duty on states to ensure effective remedies for violations, including arbitrary killings. The European Court of Human Rights and other regional tribunals have reiterated the importance of transparency and redress, particularly in cases involving the use of lethal force. But legal remedies depend on the ability to identify the perpetrator, assess the lawfulness of the action, and determine intent—all of which are difficult when the actor is a non-human system.


Challenges of Post-Harm Investigation

A central tenet of accountability is the ability to investigate harm after it occurs. AWS complicate this process in several ways:

| Challenge | Explanation |
| --- | --- |
| Opaque decision-making | Algorithms often operate as “black boxes,” making their reasoning inaccessible. |
| Dynamic learning behavior | Systems that adapt over time may act in unpredictable ways not foreseen by developers. |
| Lack of audit trails | Inadequate logging or data preservation hampers forensic review. |
| Jurisdictional gaps | Cross-border deployments blur legal lines of authority and control. |

When an AWS causes civilian casualties, the absence of explainable decision processes prevents victims or observers from knowing how the harm occurred. This undermines the right to truth and remedy under human rights law and impedes enforcement of IHL norms.
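
By way of illustration, the sketch below shows one way “adequate logging” could be made concrete: a hash-chained, append-only engagement log whose integrity investigators can verify after the fact. The event names and fields are hypothetical, not drawn from any fielded system.

```python
# Minimal sketch of a tamper-evident audit trail for autonomous engagements.
# Field names and events are hypothetical; a real system would also capture
# sensor data, model versions, and operator identities.
import hashlib
import json
import time

GENESIS = "0" * 64

class EngagementLog:
    """Append-only log: each entry embeds the hash of the previous entry,
    so any later deletion or alteration breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = GENESIS

    def record(self, event, detail):
        entry = {
            "timestamp": time.time(),
            "event": event,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = self._digest(entry)
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    @staticmethod
    def _digest(entry):
        # Hash everything except the entry's own hash field.
        body = {k: v for k, v in entry.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def verify(self):
        prev = GENESIS
        for entry in self.entries:
            if entry["prev_hash"] != prev or entry["hash"] != self._digest(entry):
                return False
            prev = entry["hash"]
        return True

log = EngagementLog()
log.record("target_identified", {"classifier_score": 0.91, "model": "v3.2"})
log.record("engagement_authorized", {"operator": "none", "mode": "autonomous"})
print("log intact:", log.verify())  # False if any entry is edited or removed
```

Such a record does not solve the opacity of the model itself, but it preserves what the system decided and when, which is a precondition for the right to truth and remedy discussed above.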


The Role of Article 36 Weapons Reviews

Article 36 of Additional Protocol I to the Geneva Conventions requires states to determine whether a new weapon complies with international law before its deployment. While this provision theoretically applies to AWS, the implementation across states is uneven. Many governments either do not conduct formal reviews or limit them to technical evaluations without broader legal or ethical scrutiny.


Additionally, Article 36 does not specify how to assess compliance with human rights law or address accountability structures. As such, current review mechanisms do not adequately mitigate the legal vacuum AWS pose.


Calls for Regulation and Clarification

Recognizing this accountability gap, scholars, human rights advocates, and intergovernmental bodies have called for international regulation of AWS. Key proposals include:


  • Mandating “meaningful human control” over all lethal targeting decisions.

  • Negotiating a new legally binding protocol under the Convention on Certain Conventional Weapons (CCW) that specifically addresses AWS regulation and accountability.

  • Reinforcing Article 36 reviews with mandatory human rights impact assessments.

  • Establishing liability norms that allocate responsibility across the entire AWS development chain, from designers to military commanders.


Despite widespread discussions, no binding international agreement currently addresses the accountability structure for AWS. The United Nations Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems has made progress in defining the problem but has not reached consensus on legal obligations.


Table: Current Accountability Gap in AWS Deployment

| Stage | Responsible Actor | Current Legal Clarity |
| --- | --- | --- |
| Development & Programming | Private developers, contractors | No specific legal liability framework |
| Deployment Decision | Military/political leaders | Covered by general state responsibility |
| Activation in Conflict | Commanders | Unclear under command responsibility |
| Autonomous Action | AWS (no legal personality) | No accountability framework |
| Post-Harm Investigation | State or international body | Often obstructed by opacity |



V. Toward Regulation: Future Paths and Policy Options


The rapid development of autonomous weapons systems (AWS) has outpaced international legal and institutional frameworks. While existing international humanitarian law (IHL) and international human rights law (IHRL) offer partial guidance, they were not designed to address the unique legal, ethical, and accountability challenges posed by lethal machines acting without direct human control. As a result, there is growing consensus among legal scholars, military experts, and civil society that international regulation is necessary—before the deployment of AWS becomes widespread and irreversible.


Regulatory Options under Existing Legal Mechanisms

One immediate path forward involves reinforcing and expanding existing mechanisms:


  • Strengthening Article 36 Reviews: All states developing or deploying new weapons must conduct legal reviews under Article 36 of Additional Protocol I to the Geneva Conventions. However, these reviews are often opaque and inconsistent.


    To ensure AWS compliance:

    • Reviews must explicitly assess human rights law.

    • Legal audits should evaluate accountability, discrimination risks, and explainability.

    • Reviews should be publicly documented to improve transparency and international confidence.

  • Clarifying State Obligations: States must affirm that the use of AWS falls under the jurisdiction of both IHL and IHRL, including peacetime operations and cross-border deployments. Codifying this dual application can help address legal uncertainty and reinforce human rights protections even during armed conflict.


Treaty-Based Regulation: Toward a Binding International Instrument

Growing concerns over AWS have triggered calls for a new legally binding treaty. In recent years, the UN Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems, operating under the framework of the Convention on Certain Conventional Weapons (CCW), has considered options for multilateral regulation. Despite years of discussion, however, no consensus has been reached.


A treaty could include:

| Treaty Element | Purpose |
| --- | --- |
| Definition of AWS | Establish a legally binding and universally accepted definition |
| Prohibition of Fully Autonomous Lethal Use | Ban systems without meaningful human control over lethal targeting |
| Accountability Mechanisms | Assign responsibility across development, deployment, and operation stages |
| Independent Review Body | Monitor compliance and investigate violations |
| Victim Remedies and Reparations | Guarantee the right to remedy under international law |

Such a treaty could be modeled after existing disarmament frameworks like the treaties banning landmines (Ottawa Treaty) and cluster munitions (Oslo Convention), both of which emerged from civil society pressure and international coalitions of like-minded states.


The “Meaningful Human Control” Standard

A widely supported policy option among experts and advocacy groups is the implementation of a “meaningful human control” standard. This principle insists that lethal decision-making must remain under active and informed human oversight at all critical stages:


  • Pre-mission planning and deployment

  • Real-time engagement and target selection

  • Post-operation review and accountability


This standard serves as a compromise between a complete ban and unrestricted development. It acknowledges the usefulness of automated support systems while preventing fully autonomous use of lethal force.


Implementing meaningful human control requires technical guidelines and policy commitments. Key operational principles might include the following, illustrated in the sketch after the table:

| Requirement | Explanation |
| --- | --- |
| Human judgment in target selection | Humans must verify targets based on real-time context and rules of engagement |
| Ability to override or deactivate AWS | Operators must retain technical and legal authority to cancel attacks |
| Transparent algorithmic behavior | Systems must be explainable and auditable |
| Contextual awareness | Decisions must consider proportionality, distinction, and necessity |
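
To make the first two requirements concrete, here is a minimal Python sketch of a human-in-the-loop engagement gate. The names and structure are hypothetical; the point is that, by construction, no engagement can proceed without an explicit human confirmation, and the operator can deactivate the system at any time.

```python
# Hypothetical sketch of a "meaningful human control" gate: the system may
# propose targets, but lethal engagement requires explicit operator
# confirmation and is blocked outright if the operator has deactivated it.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    CONFIRM = "confirm"
    ABORT = "abort"

@dataclass
class TargetProposal:
    track_id: str
    classification: str  # what the system believes the target is
    confidence: float    # model confidence, displayed to the operator
    context: str         # human-readable summary supporting informed judgment

def engage(proposal, operator_decision, system_active):
    # Hard preconditions: a deactivated system can never fire, and the
    # absence of an explicit confirmation defaults to no engagement.
    if not system_active:
        return "blocked: system deactivated by operator"
    if operator_decision is not Decision.CONFIRM:
        return "aborted by operator"
    return f"engagement of track {proposal.track_id} authorized by human operator"

proposal = TargetProposal("T-017", "combatant", 0.87,
                          "isolated vehicle, no civilians detected nearby")
print(engage(proposal, Decision.ABORT, system_active=True))
print(engage(proposal, Decision.CONFIRM, system_active=False))
print(engage(proposal, Decision.CONFIRM, system_active=True))
```

The design choice matters legally as well as technically: because confirmation is a precondition in the control flow rather than an optional check, responsibility for each engagement traces back to an identifiable human decision.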

National and Regional Initiatives

While global consensus remains elusive, individual states and regional blocs have taken proactive steps:


  • The European Parliament has called for an international ban on AWS lacking meaningful human control and urged the EU to adopt a strong common position.

  • Germany, Austria, and Chile, among others, support preventive regulation to ensure compliance with humanitarian and human rights law.

  • The Netherlands and France promote transparency and ethical use of AI in military applications, without committing to a full ban.


Such leadership can help establish customary international law and catalyze multilateral agreements.


The Role of Civil Society and the Scientific Community

Organizations like Human Rights Watch and the Campaign to Stop Killer Robots have led advocacy efforts, raising public awareness and pushing for preemptive regulation. Their work has pressured governments to address ethical concerns and helped frame AWS not merely as technical tools but as systems that carry profound legal and moral consequences.


Additionally, the scientific community plays a vital role. AI developers and researchers have signed open letters calling for bans on lethal autonomous systems and urging responsible innovation in line with humanitarian principles.


Chart: Summary of Policy Paths

| Policy Option | Advantages | Challenges |
| --- | --- | --- |
| Strengthened Article 36 Reviews | Builds on existing obligations | Lacks standardization and transparency |
| Binding International Treaty | Comprehensive and enforceable regulation | Difficult multilateral negotiations |
| Meaningful Human Control Requirement | Technically feasible and ethically grounded | Needs precise operational definition |
| National and Regional Legislation | Enables leadership and precedent-setting | Risk of fragmented legal standards |
| Civil Society and Expert Advocacy | Builds public support and awareness | Limited direct policymaking power |

The legal and ethical challenges posed by autonomous weapons systems are not hypothetical—they are already reshaping the landscape of modern warfare. Without clear, enforceable regulation, the deployment of AWS risks undermining fundamental principles of international law, including accountability, human dignity, and the protection of life. Future policy must integrate legal precision, technological awareness, and moral clarity. The establishment of robust international norms—preferably through a legally binding treaty and reinforced by transparent state practice—is essential to prevent a future in which lethal force is exercised without human judgment or legal consequence.


References

  1. Ma, E. H. (2020). Autonomous Weapons Systems Under International Law. New York University Law Review, 95(5), 1435–1474.

  2. Human Rights Watch. (2014). Shaking the Foundations: The Human Rights Implications of Killer Robots. https://www.hrw.org/report/2014/05/12/shaking-foundations/human-rights-implications-killer-robots

  3. Human Rights Watch. (2012). Losing Humanity: The Case Against Killer Robots. https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots

  4. United Nations Human Rights Committee. (2018). General Comment No. 36 on Article 6 of the International Covenant on Civil and Political Rights, on the Right to Life. U.N. Doc. CCPR/C/GC/36. https://undocs.org/CCPR/C/GC/36

  5. Crootof, R. (2015). The Killer Robots Are Here: Legal and Policy Implications. Cardozo Law Review, 36(5), 1837–1915. https://cardozolawreview.com/the-killer-robots-are-here-legal-and-policy-implications/

  6. Boulanin, V., & Verbruggen, M. (2017). Mapping the Development of Autonomy in Weapon Systems. Stockholm International Peace Research Institute (SIPRI). https://www.sipri.org/publications/2017/other-publications/mapping-development-autonomy-weapon-systems

  7. United Nations Secretary-General António Guterres. (2018). Address to the 73rd Session of the UN General Assembly, New York, 25 September 2018. https://www.un.org/sg/en/content/sg/speeches/2018-09-25/address-73rd-general-assembly

  8. Etzioni, A., & Etzioni, O. (2017). Pros and Cons of Autonomous Weapons Systems. Military Review, May–June 2017, 72–74. https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/pros-and-cons-of-autonomous-weapons-systems.pdf

  9. Asaro, P. (2012). On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making. International Review of the Red Cross, 94(886), 687–709. https://international-review.icrc.org/articles/banning-autonomous-weapon-systems-human-rights-automation-and-dehumanization-lethal
