Deepfakes in International Law: Legal Status and Gaps
Edmarverson A. Santos

1. Introduction: Deepfakes as a Legal Problem in International Law
Deepfakes in international law represent one of the most complex regulatory challenges created by contemporary artificial intelligence technologies. The rapid evolution of synthetic media capable of convincingly replicating human appearance, voice, and behavior has destabilized long-standing legal assumptions about truth, authenticity, evidence, and responsibility. Unlike earlier forms of digital manipulation, deepfakes are not merely altered content; they are algorithmically generated or transformed representations that can be indistinguishable from genuine audiovisual material. This qualitative leap forces international law to confront a phenomenon that operates across borders, jurisdictions, and legal regimes, often without clear attribution or accountability.
At its core, the legal difficulty posed by deepfakes lies not in their technical novelty but in their systemic impact on foundational principles of international law. International legal frameworks rely on shared understandings of factual reality to regulate armed conflict, protect human rights, assign state responsibility, and adjudicate violations. Deepfake technology directly undermines this reliance by eroding epistemic trust in visual and auditory evidence, a form of evidence that has traditionally carried strong probative value in diplomatic practice, international criminal proceedings, and humanitarian investigations. When images, videos, and audio recordings can no longer be presumed authentic, the operational capacity of international law is weakened.
The problem is further compounded by the transnational nature of deepfakes. Synthetic media can be produced in one jurisdiction, disseminated through global platforms, and cause harm in multiple states simultaneously. This spatial dislocation challenges territorially grounded legal systems and exposes the limits of domestic regulation. While several states have introduced national laws addressing specific uses of deepfakes, such as election interference or non-consensual intimate imagery, these fragmented approaches do not resolve the broader question of how deepfakes should be classified, constrained, and sanctioned under international law.
Deepfakes in international law also intersect with multiple legal domains, each governed by distinct normative logics. In the context of armed conflict, deepfakes raise questions under international humanitarian law concerning deception, perfidy, civilian protection, and psychological operations. In peacetime, they implicate international human rights law through violations of privacy, dignity, reputation, and mental autonomy. At the level of international peace and security, deepfakes function as tools of disinformation and influence operations capable of destabilizing societies without the use of force. This cross-cutting character makes deepfakes difficult to regulate within any single doctrinal framework.
A central complication is the absence of a universally accepted legal definition of deepfakes. Technical descriptions emphasize machine learning architectures and generative models, but international law requires functional classifications grounded in harm, intent, and effect. Without a shared legal understanding of what constitutes a deepfake, states and international institutions struggle to articulate consistent standards of legality. This definitional gap allows malicious uses of synthetic media to fall between existing legal categories, creating regulatory blind spots that can be exploited by both state and non-state actors.
Attribution presents another structural challenge. International law depends on the ability to identify responsible actors, particularly when assessing state responsibility or violations of peremptory norms. Deepfakes complicate attribution by enabling anonymity, deniability, and proxy dissemination. Even when a deepfake causes demonstrable harm, linking its creation or distribution to a specific state, organization, or individual may be technically and legally infeasible. This difficulty weakens enforcement mechanisms and incentivizes the strategic use of deepfakes in gray-zone activities that remain below traditional thresholds of armed attack or coercive intervention.
The growing use of deepfakes in contemporary conflicts illustrates the urgency of this legal problem. Synthetic videos impersonating political leaders, fabricated surrender announcements, and manipulated audiovisual evidence have already appeared in real-world military and geopolitical contexts. Even when such content is quickly debunked, its circulation can generate confusion, fear, and mistrust, producing tangible humanitarian and political consequences. International law, however, was not designed with algorithmic deception in mind, and its existing rules address deepfakes only indirectly through analogies to older forms of propaganda and deception.
This article examines the legal status of deepfakes in international law by asking a central question: how far can existing international legal frameworks accommodate the challenges posed by synthetic media, and where do they fall short? Rather than treating deepfakes as an entirely unprecedented threat, the analysis situates them within established doctrines of international humanitarian law, human rights law, state responsibility, and international peace and security. At the same time, it acknowledges that the scale, speed, and realism of deepfakes introduce novel risks that strain traditional legal concepts.
The objective of this study is not to advocate for technological determinism or sweeping prohibitions, but to clarify how international law currently engages with deepfakes and to identify the normative gaps that require further development. By grounding the analysis in recent conflicts, regulatory practice, and scholarly research, the article aims to contribute to a more coherent and principled understanding of deepfakes in international law. Such clarity is essential if international law is to remain capable of protecting truth, accountability, and human dignity in an era of synthetic reality.
2. Conceptualizing Deepfakes in International Legal Terms
Deepfakes pose a conceptual challenge for international law because they do not fit neatly within existing legal categories. International legal norms were developed to regulate conduct, actors, and material effects in a world where falsification required significant resources and was relatively easy to detect. Deepfakes disrupt this assumption by enabling low-cost, scalable, and highly persuasive forms of deception that can be deployed by states, non-state actors, or individuals with minimal infrastructure. As a result, conceptual clarity becomes a prerequisite for any meaningful legal analysis.
At the most basic level, deepfakes can be understood as synthetic or manipulated digital representations created through artificial intelligence systems, designed to imitate real persons, events, or communications with a high degree of realism. This technical description, however, is insufficient for international legal purposes. International law does not regulate technologies as such; it regulates conduct, harm, intent, and responsibility. The legal relevance of deepfakes therefore depends on how they function in practice rather than on how they are generated.
From an international legal perspective, deepfakes can be conceptualized through three overlapping lenses: as deceptive information artifacts, as instruments of influence or coercion, and as vectors of harm to legally protected interests. Each lens highlights different normative concerns and activates different bodies of international law.
First, deepfakes operate as deceptive information artifacts. They are not neutral forms of expression but are typically designed to induce belief in a false reality. In this sense, deepfakes resemble forged documents, falsified communications, or fabricated evidence. International law has long recognized the legal significance of deception, particularly in diplomatic relations, armed conflict, and judicial proceedings. The novelty introduced by deepfakes lies in their capacity to exploit cognitive biases associated with audiovisual media. Images and voices are often perceived as inherently credible, which amplifies the deceptive effect and distinguishes deepfakes from traditional misinformation based on text alone.
Second, deepfakes can be understood as instruments of influence. In international relations, influence operations seek to shape perceptions, decisions, or behavior without direct physical coercion. Deepfakes enhance such operations by allowing actors to impersonate authoritative figures, simulate official statements, or fabricate events that appear to carry institutional legitimacy. When deployed strategically, deepfakes blur the line between persuasion and coercion, especially when they target civilian populations during crises or conflicts. This functional role places deepfakes within the broader category of information operations, a domain that international law has historically regulated only indirectly.
Third, deepfakes function as vectors of harm to protected legal interests. These harms may be individual or collective. At the individual level, deepfakes can violate privacy, dignity, reputation, and psychological integrity. At the collective level, they can undermine democratic processes, destabilize public order, and erode trust in institutions. International law traditionally responds to harm through specific regimes, such as human rights law, international humanitarian law, or the law of state responsibility. The challenge posed by deepfakes is that a single synthetic artifact may simultaneously trigger multiple forms of harm across different legal regimes.
A further conceptual difficulty arises from the distinction between benign and malicious uses of deepfake technology. Not all synthetic media is inherently unlawful. Educational simulations, artistic productions, satire, and historical reconstructions may rely on similar techniques without generating legal concern. International law must therefore avoid technology-based prohibitions and instead focus on context, intent, and effect. This mirrors established approaches in other areas of international law, where the legality of conduct depends on circumstances rather than on the tools employed.
The absence of a shared legal definition of deepfakes exacerbates these challenges. Without a common conceptual framework, states and international institutions risk adopting inconsistent or overly narrow approaches that fail to address cross-border harms. A functional definition grounded in deception, impersonation, and harmful impact would better align with international legal reasoning than purely technical descriptions.
To clarify this functional approach, the table below summarizes how deepfakes can be mapped onto existing international legal concepts:
| Functional Characteristic of Deepfakes | Relevant International Legal Concept |
| --- | --- |
| Intentional creation of false reality | Deception and misrepresentation |
| Impersonation of leaders or officials | Abuse of authority and perfidy (in conflict contexts) |
| Manipulation of civilian perception | Psychological operations and civilian protection |
| Undermining trust in evidence | Integrity of judicial and fact-finding processes |
| Cross-border dissemination | Transnational harm and state responsibility |
This conceptual mapping illustrates that deepfakes are not legally invisible. Instead, they intersect with well-established legal principles that were not designed with algorithmic media in mind. The task for international law is therefore one of adaptation rather than reinvention. By understanding deepfakes in functional and normative terms, international law can begin to articulate clearer standards for responsibility, prohibition, and protection.
Conceptualizing deepfakes in international legal terms is a necessary step toward resolving the broader regulatory puzzle. Without this analytical foundation, legal responses risk remaining reactive, fragmented, and ineffective. A coherent conceptual framework allows international law to address deepfakes not as isolated technological anomalies, but as a new modality of conduct capable of producing legally significant consequences on a global scale.
3. Deepfakes and International Humanitarian Law (IHL)
International humanitarian law regulates conduct during armed conflict with the primary objective of limiting human suffering. Although the treaties and customary rules of IHL predate artificial intelligence and synthetic media, their normative structure is sufficiently flexible to encompass new methods and means of warfare. Deepfakes therefore do not exist outside IHL; rather, they challenge how existing principles are interpreted and applied in contemporary conflicts.
The relevance of deepfakes to IHL emerges from their use as tools of deception and psychological influence during hostilities. Armed conflict has always involved deception, including camouflage, feints, misinformation, and strategic concealment. IHL recognizes this reality and permits certain forms of deception as lawful ruses of war. At the same time, it draws a firm legal boundary where deception exploits the protections afforded by the law itself. Deepfakes operate precisely at this boundary, making their legal assessment highly context-dependent.
IHL draws a central distinction between lawful ruses of war and prohibited perfidy. Ruses of war are acts intended to mislead an adversary or induce them to act recklessly, provided such acts do not violate any rule of international law. Examples include the use of decoys, misinformation about troop movements, or simulated military exercises. Deepfakes used to fabricate misleading but non-protected information, such as false depictions of equipment or troop deployments, may fall within this lawful category if they do not exploit legal protections or target civilians.
Perfidy, by contrast, is expressly prohibited. It involves inviting the confidence of an adversary by invoking the protection of international law, with the intent to betray that confidence. Classic examples include feigning surrender, misuse of protected emblems, or pretending to be a civilian in order to attack. Deepfakes that impersonate military commanders issuing false surrender orders, fabricate ceasefire announcements, or simulate communications from protected humanitarian actors would likely qualify as perfidious acts. In such cases, the illegality stems not from the use of synthetic media itself, but from the exploitation of legal protections to cause harm.
Deepfakes also raise serious concerns regarding the principle of distinction, one of the cornerstones of IHL. Parties to a conflict must at all times distinguish between combatants and civilians, as well as between military objectives and civilian objects. When deepfakes are disseminated through public channels and target civilian populations, they risk blurring this distinction. Fabricated audiovisual messages that incite panic, induce displacement, or manipulate civilians into unsafe actions may constitute unlawful psychological operations. The deliberate spread of fear or confusion among civilians can violate prohibitions on acts or threats of violence whose primary purpose is to spread terror.
The obligation to take constant care to spare the civilian population further constrains the use of deepfakes in armed conflict. Even in cases where deepfakes are directed at enemy forces, foreseeable spillover effects on civilians must be taken into account. Synthetic content released into open information environments is difficult to contain, and its circulation may extend far beyond its intended audience. This raises questions about proportionality and precaution, particularly when the potential harm includes mass panic, misinformation-driven displacement, or interference with humanitarian access.
Another critical dimension concerns the use of deepfakes in relation to evidence and accountability for war crimes. Audiovisual material plays a crucial role in documenting violations of IHL, especially in conflicts where access for investigators is limited. The proliferation of deepfakes undermines trust in such material and creates opportunities for denial and obfuscation. This phenomenon weakens enforcement mechanisms and complicates the work of international courts, commissions of inquiry, and humanitarian fact-finding bodies. While this issue does not constitute a direct violation of IHL, it has significant implications for the effectiveness of the legal regime as a whole.
Deepfakes may also intersect with the prohibition of improper methods of warfare. IHL requires that means and methods of warfare not cause superfluous injury or unnecessary suffering and that they comply with fundamental humanitarian principles. While deepfakes do not cause physical harm in a traditional sense, their capacity to manipulate perception and decision-making introduces a form of cognitive harm that existing legal categories struggle to address. The growing recognition of psychological integrity as a component of civilian protection suggests that future interpretations of IHL may need to account for such non-kinetic effects more explicitly.
The absence of explicit treaty provisions on deepfakes does not imply a regulatory vacuum. Instead, it places a greater burden on interpretation and clarification by states, international organizations, and expert bodies. Authoritative guidance, such as interpretive commentaries or manuals on the application of IHL to information operations, could help delineate lawful and unlawful uses of synthetic media in conflict. Without such clarification, the risk remains that deepfakes will be normalized as acceptable tools of warfare, despite their capacity to undermine core humanitarian values.
In sum, international humanitarian law already provides a framework capable of addressing many uses of deepfakes in armed conflict. The decisive factors are intent, context, and effect rather than the technological medium itself. Deepfakes that function as lawful ruses may be permissible, while those that exploit legal protections, target civilians, or spread terror are incompatible with IHL. The challenge lies not in the absence of law, but in ensuring that existing norms are interpreted and enforced in a manner that preserves the protective purpose of humanitarian law in an era of synthetic reality.
4. Deepfakes, Disinformation, and International Peace and Security
Deepfakes intensify long-standing concerns about disinformation in international relations by introducing a level of realism that fundamentally alters how false narratives are created, disseminated, and believed. In the domain of international peace and security, the significance of deepfakes lies not in their capacity to cause immediate physical destruction, but in their ability to destabilize societies, manipulate political decision-making, and escalate tensions without the overt use of force. This characteristic places deepfakes squarely within the evolving landscape of non-kinetic threats to international stability.
International peace and security, as reflected in the framework of the United Nations Charter, is premised on the maintenance of order through the prevention of armed conflict and coercive interference in the affairs of states. Deepfakes complicate this framework by enabling influence operations that operate below traditional thresholds of aggression. Synthetic media can be deployed to impersonate political leaders, fabricate official announcements, or simulate crises, creating confusion and mistrust that weaken state institutions from within. These activities challenge the assumption that threats to peace are necessarily linked to physical violence or territorial incursions.
The strategic use of deepfakes aligns with broader practices of information warfare and hybrid conflict. States and non-state actors increasingly rely on a combination of cyber operations, psychological manipulation, and disinformation to achieve strategic objectives without triggering conventional military responses. Deepfakes enhance these strategies by providing highly persuasive content capable of shaping public opinion rapidly and at scale. When false audiovisual messages circulate in moments of political tension or crisis, they can influence electoral outcomes, provoke civil unrest, or undermine confidence in diplomatic processes.
From the perspective of international law, a critical question is whether deepfake-driven disinformation can amount to a violation of the prohibition on intervention in the internal affairs of states. The principle of non-intervention prohibits coercive actions that interfere with a state’s sovereign choices, particularly in matters such as political governance. Deepfakes that are designed to manipulate electoral processes, incite unrest, or delegitimize public authorities may approach this threshold when they exert a coercive effect on the political will of a population. The challenge lies in distinguishing unlawful intervention from mere influence, a distinction that international law has historically struggled to articulate with precision.
Deepfakes also raise concerns in relation to the concept of threats to international peace and security. While isolated instances of synthetic disinformation may not justify collective security responses, large-scale or coordinated deepfake campaigns could contribute to destabilization with transboundary effects. Fabricated videos depicting military mobilization, false surrender, or imminent attacks may provoke panic, miscalculation, or even preemptive responses by states. In high-tension environments, such misinformation increases the risk of escalation based on false premises, undermining crisis management and diplomatic communication.
The role of deepfakes in undermining trust is particularly significant. International peace and security depend on a baseline level of confidence in official communications, media reporting, and evidentiary materials. Deepfakes erode this confidence by creating an environment in which all information can be plausibly denied. This erosion benefits actors who seek to obscure responsibility, discredit legitimate reporting, or delay collective responses to emerging threats. Over time, such dynamics weaken multilateral institutions and complicate efforts to build consensus around security challenges.
Another dimension concerns the impact of deepfakes on early warning and preventive diplomacy. International organizations rely on information flows to detect emerging conflicts and respond before violence escalates. Synthetic media that distort or flood information environments can obscure warning signals and overwhelm verification mechanisms. This informational noise reduces the effectiveness of preventive tools and increases reliance on reactive measures, contrary to the preventive orientation of the international peace and security system.
Despite these risks, international law lacks a dedicated framework for addressing disinformation as a threat to peace. Existing norms focus on use of force, armed attack, and coercion, leaving a gray zone in which deepfake-driven operations can flourish. Responses to such threats have largely been ad hoc, relying on political condemnation, sanctions, or counter-narratives rather than legal accountability. This gap highlights the need for clearer normative guidance on how disinformation, including deepfakes, should be assessed within the collective security architecture.
Deepfakes do not render the existing legal framework obsolete, but they expose its limitations. The challenge for international peace and security lies in adapting legal and institutional tools to address threats that operate through perception and belief rather than physical violence. As deepfakes become more sophisticated and widespread, their capacity to undermine stability will grow unless international law develops clearer standards for attribution, responsibility, and response.
In this context, deepfakes should be understood as force multipliers within disinformation strategies rather than as isolated technological anomalies. Their legal significance derives from their potential effects on international stability, state sovereignty, and collective security. Addressing these challenges requires a reassessment of how international law conceptualizes threats to peace in an era where the manipulation of reality itself has become a strategic instrument.
5. Human Rights Law and the Impact of Deepfakes
International human rights law provides one of the most immediate and normatively grounded frameworks for assessing the harms caused by deepfakes. While international humanitarian law addresses conduct during armed conflict, human rights law applies at all times and places, with primary obligations on states to respect, protect, and fulfill the rights of individuals. Deepfakes engage this framework directly because their most pervasive effects are felt at the level of personal dignity, autonomy, and psychological integrity.
A central right implicated by deepfakes is the right to privacy. International human rights instruments protect individuals against arbitrary or unlawful interference with their private life, family, honor, and reputation. Deepfakes frequently rely on the unauthorized use of a person’s likeness, voice, or biometric data, transforming personal attributes into tools of deception. When an individual’s identity is digitally reconstructed and placed into fabricated scenarios, the intrusion goes beyond traditional privacy violations. The harm is not limited to exposure or surveillance; it involves the active manipulation of identity in a manner that strips individuals of control over how they appear and are perceived in the world.
Closely connected to privacy is the protection of human dignity. Deepfakes that depict individuals engaging in actions they never performed, particularly in degrading or sexualized contexts, constitute profound affronts to dignity. These harms are amplified by the permanence and global reach of digital platforms, where synthetic content can circulate indefinitely and resurface long after initial dissemination. International human rights law recognizes dignity as an underlying value informing the interpretation of all rights, making deepfake abuses especially problematic when they reduce individuals to instruments of humiliation or coercion.
Freedom of expression presents a more complex challenge. International law protects the right to seek, receive, and impart information and ideas, including creative and artistic expression. At the same time, this freedom is not absolute and may be subject to restrictions necessary to protect the rights and reputations of others, national security, or public order. Deepfakes complicate this balance because they occupy an ambiguous space between expression and deception. Synthetic media presented as authentic speech or conduct is qualitatively different from satire, parody, or fictional representation. When deepfakes are intended to mislead, impersonate, or harm, they exceed the protective scope of freedom of expression and enter the realm of rights violations.
The psychological impact of deepfakes has become increasingly relevant to human rights analysis. Deepfakes exploit cognitive vulnerabilities by leveraging the persuasive power of audiovisual media. Victims often experience anxiety, distress, loss of reputation, and long-term emotional harm. These effects raise questions about the protection of mental integrity, a concept implicit in the rights to dignity, privacy, and security of the person. Emerging scholarship has begun to frame such harms under the notion of cognitive liberty, understood as the right of individuals to form beliefs and perceptions free from manipulative interference. While not yet codified as a standalone right, cognitive liberty reflects growing concern that deepfakes undermine personal autonomy at a fundamental level.
Deepfakes also have a pronounced gendered dimension. Empirical patterns show that women are disproportionately targeted by non-consensual sexual deepfakes, often created using publicly available images. These practices reinforce structural inequalities and expose gaps in existing human rights protections. States have positive obligations to protect individuals from such abuses, including by regulating private actors, providing effective remedies, and ensuring access to justice. Failure to address deepfake harms may therefore constitute a breach of state duties under international human rights law.
Another critical issue concerns discrimination and equal protection. Deepfakes can be weaponized to target specific groups based on gender, ethnicity, religion, or political affiliation. Synthetic content used to incite hatred, stigmatization, or social exclusion may engage prohibitions against discrimination and advocacy of hatred. In fragile societies, such practices can deepen divisions and expose vulnerable populations to heightened risk of violence or marginalization.
The table below summarizes key human rights affected by deepfakes and the typical forms of harm associated with them:
| Human Right Implicated | Typical Deepfake-Related Harm |
| --- | --- |
| Right to privacy | Unauthorized use of likeness and voice |
| Human dignity | Degrading or humiliating fabricated content |
| Freedom of expression | Impersonation and deceptive speech |
| Security of the person | Psychological distress and harassment |
| Equality and non-discrimination | Gendered and targeted abuse |
A further challenge lies in enforcement and remedies. Human rights law requires that victims have access to effective remedies, including investigation, accountability, and reparation. Deepfakes complicate these requirements by obscuring authorship and enabling cross-border dissemination. Victims may struggle to identify responsible parties or to secure redress across jurisdictions. These practical obstacles do not negate human rights obligations but underscore the need for international cooperation and procedural innovation.
In sum, human rights law offers a robust normative lens through which to assess the harms caused by deepfakes. The core issue is not technological innovation, but the protection of individuals against manipulative practices that distort identity, autonomy, and dignity. Deepfakes expose existing vulnerabilities in the human rights system, particularly in relation to private actor conduct and digital environments. Addressing these challenges requires states to interpret established rights dynamically and to adapt protective mechanisms to a context in which reality itself can be convincingly fabricated.
6. Deepfakes as Evidence in International and Domestic Proceedings
Deepfakes pose a structural challenge to the use of evidence in both international and domestic legal proceedings. Courts, tribunals, and investigative bodies have long relied on audiovisual material as a powerful means of establishing facts, corroborating testimony, and documenting violations of law. The increasing sophistication of synthetic media undermines this reliance by destabilizing assumptions about authenticity, reliability, and probative value. As a result, deepfakes do not merely introduce evidentiary difficulties; they threaten to erode confidence in the adjudicative process itself.
In international proceedings, audiovisual evidence plays a particularly significant role. International criminal tribunals, human rights courts, and fact-finding missions often operate in contexts where direct access to crime scenes is limited or impossible. Videos, photographs, and audio recordings frequently serve as substitutes for physical presence, allowing judges and investigators to reconstruct events occurring in conflict zones or repressive environments. Deepfakes disrupt this evidentiary model by introducing plausible doubt about the origin and integrity of such materials, even when they document genuine atrocities.
A key concern is the emergence of what has been described as the “liar’s dividend.” When synthetic media becomes widely known, accused parties may dismiss authentic evidence as fabricated, exploiting uncertainty to evade accountability. This dynamic is particularly dangerous in proceedings involving war crimes, crimes against humanity, or serious human rights violations, where evidentiary thresholds are already difficult to meet. Deepfakes thus create an asymmetry in which false content can be produced cheaply and rapidly, while verification requires extensive technical expertise and resources.
International courts generally apply flexible evidentiary standards, emphasizing relevance and probative value rather than rigid rules of admissibility. This flexibility, while advantageous in complex cases, becomes a vulnerability in an environment saturated with synthetic media. Judges must increasingly assess not only what evidence shows, but also whether it is authentic. The burden of authentication may shift toward parties presenting audiovisual material, requiring additional corroboration, metadata analysis, or expert testimony. These demands risk prolonging proceedings and increasing costs, potentially limiting access to justice.
Domestic legal systems face similar challenges, although their responses vary depending on procedural traditions. Criminal courts often rely on audiovisual recordings for surveillance, witness corroboration, and confessions. Civil courts may use such material to establish defamation, fraud, or contractual disputes. Deepfakes complicate these uses by raising questions about chain of custody, source integrity, and intentional manipulation. In jurisdictions with strict evidentiary rules, synthetic media may lead to heightened exclusion of digital evidence, even when it is genuine.
Another critical issue concerns the role of deepfakes in witness testimony. Synthetic media can be used to fabricate confessions, simulate admissions of guilt, or manipulate recorded statements. In such cases, the line between evidence and fabrication becomes dangerously thin. The risk is not limited to the admission of false evidence; it extends to the intimidation of witnesses and the distortion of public perception surrounding legal proceedings. When fabricated audiovisual material circulates widely, it can prejudice jurors, influence public opinion, and undermine the perceived legitimacy of judicial outcomes.
The challenges posed by deepfakes also extend to investigative processes. Human rights documentation increasingly relies on open-source intelligence, including user-generated videos and images shared online. While these methods have expanded the evidentiary base available to investigators, they are particularly vulnerable to synthetic manipulation. Verification techniques such as geolocation, temporal analysis, and source triangulation remain essential, but they are becoming more resource-intensive as deepfake technology improves. Investigative bodies must therefore balance the need for timely reporting with the risk of relying on manipulated material.
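To make this verification burden concrete, the sketch below illustrates what a first-pass integrity check on open-source material might look like. It is an illustrative example only: it assumes a hypothetical trusted archive that publishes reference SHA-256 digests (here a file named trusted_digests.json), and it relies solely on the Python standard library. A matching digest shows only that a file has not been altered since it was archived; genuine authentication still requires the geolocation, temporal analysis, and source triangulation described above.

```python
# Illustrative sketch only: a first-pass integrity check for open-source media,
# assuming investigators hold reference SHA-256 digests published by a trusted
# archive (a hypothetical "trusted_digests.json"). This does not detect deepfakes;
# it only confirms a file matches a previously archived copy.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_trusted_reference(media: Path, reference_file: Path) -> bool:
    """Return True if the file's digest equals the archived reference digest."""
    references = json.loads(reference_file.read_text())
    expected = references.get(media.name)
    if expected is None:
        # No trusted reference exists: authenticity must be assessed by other means.
        return False
    return sha256_of(media) == expected


if __name__ == "__main__":
    media_file = Path("incident_video.mp4")      # hypothetical filename
    reference = Path("trusted_digests.json")     # hypothetical trusted archive export
    print("digest matches trusted reference:", matches_trusted_reference(media_file, reference))
```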
The table below outlines key evidentiary risks associated with deepfakes and their legal implications:
| Evidentiary Issue | Legal Implication |
| --- | --- |
| Fabricated audiovisual material | Admission of false evidence |
| Plausible denial of genuine evidence | Obstruction of accountability |
| Increased verification burden | Delays and higher litigation costs |
| Manipulated witness statements | Compromised fair trial rights |
| Public circulation of deepfakes | Prejudicial impact on proceedings |
Despite these challenges, deepfakes do not render audiovisual evidence unusable. Instead, they necessitate a recalibration of evidentiary assessment. Courts and tribunals must increasingly rely on contextual corroboration, technical expertise, and consistency with other forms of evidence. The legal system has historically adapted to new forms of evidentiary risk, including forged documents and altered photographs. Deepfakes represent a more advanced iteration of this problem, but they do not invalidate the core principles of evidentiary evaluation.
Ultimately, the impact of deepfakes on legal proceedings highlights a broader tension between technological change and legal epistemology. Law depends on the ability to establish facts with sufficient certainty to render judgment. As synthetic media blurs the boundary between real and fabricated evidence, legal institutions must strengthen their capacity to assess authenticity without abandoning the probative value of digital material altogether. Failure to adapt risks allowing deepfakes to undermine not only individual cases, but the credibility of justice systems at both the domestic and international levels.
7. State Responsibility and Attribution in Deepfake Operations
State responsibility is a cornerstone of international law, providing the legal framework through which wrongful conduct is attributed to states and legal consequences are triggered. Deepfakes complicate this framework by introducing unprecedented challenges of attribution, intent, and control. While the law of state responsibility is technologically neutral, its effective application depends on the ability to identify actors and link conduct to state authority. Deepfake operations test these requirements in ways that strain existing doctrinal tools.
Attribution is the primary obstacle. Deepfakes can be created and disseminated anonymously, routed through multiple jurisdictions, and amplified by automated systems. This technical opacity makes it difficult to establish who created the synthetic content, who controlled its dissemination, and for what purpose. In traditional contexts, attribution relies on physical presence, command structures, or identifiable state agents. Deepfake operations, by contrast, can be conducted remotely and covertly, often by actors operating in legal and informational gray zones.
Under international law, a state may be held responsible for conduct attributable to it under established criteria, including acts carried out by state organs, entities exercising governmental authority, or persons acting under the state’s direction or control. Deepfake operations may fall within these categories when state agencies directly produce or coordinate synthetic media campaigns. In such cases, the wrongful nature of the act depends on whether the deepfake violates an international obligation, such as the prohibition of intervention, respect for human rights, or compliance with international humanitarian law.
More complex scenarios arise when deepfakes are created by non-state actors. States may seek to distance themselves from such activities, invoking plausible deniability. International law, however, recognizes that state responsibility may arise not only from direct action but also from indirect involvement. If a state exercises effective control over non-state actors engaged in deepfake operations, attribution may still be established. The evidentiary challenge lies in demonstrating such control in an environment where digital operations leave limited traceable indicators.
Beyond direct attribution, deepfakes raise questions about due diligence obligations. States have a duty to ensure that activities within their jurisdiction or control do not cause significant harm to other states. This obligation extends to preventing and responding to harmful cyber and information activities, even when conducted by private actors. When deepfake operations originating from a state’s territory cause transboundary harm, failure to take reasonable preventive measures may engage state responsibility. This dimension is particularly relevant in cases involving disinformation campaigns that target foreign populations or institutions.
Intent also plays a crucial role in assessing responsibility. Not all deepfake-related activities constitute internationally wrongful acts. The legality of a deepfake operation depends on its purpose, context, and effects. Synthetic media used for artistic or educational purposes falls outside the scope of international responsibility. By contrast, deepfakes deployed to manipulate political processes, incite violence, or undermine civilian protection may violate international obligations. Establishing intent in digital environments is inherently difficult, yet it remains essential for distinguishing lawful conduct from wrongful interference.
The standard of proof presents another challenge. International tribunals generally require a high threshold of evidence to attribute wrongful conduct to a state, particularly in sensitive matters involving security and sovereignty. Deepfake operations exploit this caution by creating uncertainty and fragmentation of evidence. States accused of deploying deepfakes may contest attribution by pointing to technical ambiguities, alternative explanations, or lack of conclusive proof. This dynamic risks creating an accountability gap in which harmful conduct remains legally unaddressed.
The table below summarizes key attribution scenarios and their implications for state responsibility:
| Scenario | Potential Legal Consequence |
| --- | --- |
| Deepfake created by state organ | Direct state responsibility |
| Deepfake coordinated through proxies | Attribution based on control |
| Deepfake by private actor within state territory | Due diligence obligations |
| Failure to prevent harmful dissemination | Responsibility for omission |
| Ambiguous attribution | Accountability gap |
Deepfakes also interact with countermeasures and responses under international law. States affected by malicious deepfake operations may seek to respond through diplomatic protests, sanctions, or countermeasures. However, the legality of such responses depends on establishing attribution and proportionality. Uncertainty surrounding the origin of deepfakes complicates the lawful invocation of countermeasures and increases the risk of escalation based on misattribution.
In this context, deepfakes highlight the limits of existing attribution models in an era of algorithmic manipulation. While international law does not require absolute certainty, it does require a sufficient evidentiary basis to justify responsibility and response. Enhancing international cooperation on technical attribution, information sharing, and investigative standards may help reduce uncertainty, but it cannot eliminate it entirely.
Ultimately, deepfakes do not invalidate the law of state responsibility, but they expose its vulnerability to technologies designed to obscure authorship and intent. Addressing this challenge requires a combination of legal interpretation, evidentiary innovation, and political will. Without such adaptation, deepfake operations risk becoming a tool through which states and non-state actors can inflict harm while remaining shielded from legal accountability.
8. Comparative International Regulatory Approaches
Comparative analysis reveals that regulatory responses to deepfakes remain fragmented, uneven, and largely domestic in orientation. States have approached deepfakes through existing legal categories such as data protection, criminal law, election integrity, and platform regulation rather than through a unified international legal framework. This diversity reflects differing constitutional traditions, threat perceptions, and regulatory philosophies, but it also exposes significant gaps when deepfakes operate across borders. As a result, deepfakes in international law are currently governed indirectly through a patchwork of national and regional measures whose collective effect remains limited.
The European Union has adopted the most comprehensive regulatory strategy to date. Its approach treats deepfakes as a governance problem linked to systemic risks created by artificial intelligence and digital platforms. Rather than criminalizing deepfakes as a category, the EU emphasizes transparency, risk mitigation, and accountability. Synthetic media is addressed through obligations imposed on developers and platforms, including disclosure requirements, risk assessments, and safeguards against manipulation. This regulatory logic reflects a preventive model that seeks to reduce harm before it materializes, particularly in relation to democratic processes and fundamental rights. However, while the EU framework has extraterritorial reach in practice, it remains regional in scope and does not resolve attribution or enforcement challenges beyond its jurisdiction.
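The logic of such disclosure obligations can be illustrated with a deliberately simplified example. The sketch below writes a machine-readable "sidecar" manifest declaring a media file to be AI-generated; the field names and structure are invented for illustration and do not reproduce the AI Act, the Digital Services Act, or any existing provenance standard.

```python
# Hypothetical illustration of a machine-readable disclosure label for synthetic
# media, written as a JSON "sidecar" manifest alongside the file. All field names
# are invented for this sketch and do not reflect any format required by EU law.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def write_disclosure_manifest(media_path: Path, generator: str) -> Path:
    """Create <media>.disclosure.json declaring the file as AI-generated."""
    manifest = {
        "file": media_path.name,
        "sha256": hashlib.sha256(media_path.read_bytes()).hexdigest(),
        "synthetic": True,                  # explicit machine-readable flag
        "generator": generator,             # e.g. name/version of the model used
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }
    out = media_path.parent / (media_path.name + ".disclosure.json")
    out.write_text(json.dumps(manifest, indent=2))
    return out
```

A platform could, in principle, label or restrict uploads that lack such a declaration, which reflects the preventive, accountability-oriented posture of the EU model rather than ex post criminalization.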
The United States has adopted a markedly different approach characterized by decentralization and sector-specific regulation. There is no comprehensive federal framework governing deepfakes. Instead, regulatory responses have emerged through state legislation and targeted federal initiatives. States have focused on narrow contexts such as election interference, fraud, and non-consensual intimate imagery. This approach prioritizes freedom of expression and constitutional safeguards, resulting in cautious and incremental regulation. While this model allows flexibility, it produces uneven protection and legal uncertainty, particularly in cross-border cases. From an international perspective, the absence of a unified federal framework limits the United States’ ability to contribute to coherent global standards.
China represents a contrasting regulatory philosophy grounded in centralized control and social stability. Chinese regulations treat deepfakes as a matter of information governance and public order. Obligations are placed on platforms and service providers to prevent misuse, ensure traceability, and label synthetic content. The regulatory emphasis is less on individual rights and more on maintaining informational integrity and political stability. This model demonstrates that strong regulatory control is technically feasible, but it raises concerns regarding transparency, proportionality, and compatibility with international human rights norms. Its relevance to international law lies less in its normative appeal and more in its illustration of regulatory capacity.
Other jurisdictions, including the United Kingdom, Taiwan, and several European states outside the EU framework, have opted for targeted legal amendments. These measures typically integrate deepfakes into existing criminal or civil law regimes addressing fraud, impersonation, harassment, or electoral integrity. Such approaches rely on adapting familiar legal tools rather than creating new regulatory categories. While pragmatic, this strategy struggles to address the scale and speed of synthetic media dissemination, particularly when content originates abroad.
Despite these varied approaches, several common limitations emerge. First, most regulatory frameworks are reactive rather than anticipatory. They address harms after dissemination rather than preventing the creation and spread of deepfakes at scale. Second, domestic regulations struggle to cope with transnational dissemination, platform-based amplification, and jurisdictional fragmentation. Third, enforcement mechanisms remain weak when responsible actors operate outside the regulating state’s territory.
From the perspective of deepfakes in international law, the most significant shortcoming is the absence of coordination. National and regional regulations do not translate into shared international standards, leaving gaps in accountability and protection. Divergent definitions, thresholds of harm, and enforcement priorities create opportunities for regulatory arbitrage, allowing malicious actors to exploit the least restrictive environments.
Comparative regulatory practice nonetheless offers valuable insights. It demonstrates that deepfakes can be addressed through a combination of transparency obligations, platform accountability, and targeted prohibitions without resorting to blanket bans. It also shows that regulation must be sensitive to freedom of expression and innovation concerns while remaining capable of addressing serious harm.
Ultimately, comparative analysis underscores a central conclusion: domestic and regional regulation, while necessary, is insufficient on its own. The cross-border nature of deepfakes demands greater convergence at the international level. Without shared principles and cooperative mechanisms, existing regulatory approaches will continue to operate in parallel rather than in concert, limiting their effectiveness against a technology designed to transcend legal boundaries.
9. Normative Gaps in International Law
Despite the partial applicability of existing legal regimes, deepfakes expose significant normative gaps in international law. These gaps do not arise because international law is silent on deception, harm, or responsibility, but because its rules were developed for analog or earlier digital contexts that did not anticipate the scale, speed, and epistemic impact of synthetic media. As a result, deepfakes in international law remain regulated indirectly and inconsistently, leaving critical areas of uncertainty that undermine legal predictability and accountability.
One major gap concerns the absence of an explicit international legal definition of deepfakes. International law relies on shared concepts to enable uniform interpretation and application. In the absence of a functional definition grounded in deception, impersonation, and harm, states and institutions apply divergent standards. This definitional ambiguity weakens cooperation, complicates attribution, and allows harmful conduct to evade classification as an internationally wrongful act. Technical definitions focused on artificial intelligence models are ill-suited to legal analysis, yet no widely accepted functional alternative has emerged at the international level.
A second gap lies in the regulation of disinformation as a threat to international peace and security. While international law prohibits the use of force and coercive intervention, it offers limited guidance on non-kinetic operations that manipulate perception rather than territory or physical assets. Deepfakes amplify this problem by enabling disinformation that can destabilize societies, influence political outcomes, or provoke escalation without crossing traditional legal thresholds. The absence of clear criteria for assessing when such conduct becomes unlawful creates a gray zone in which deepfake operations can proliferate with minimal legal consequence.
International humanitarian law also reveals normative limitations when confronted with deepfakes. Although the principles of distinction, perfidy, and civilian protection can be applied through interpretation, there is no authoritative guidance addressing synthetic media as a method of warfare. This lack of specificity risks inconsistent application and normalization of harmful practices. Without clearer interpretive standards, parties to a conflict may justify increasingly aggressive information operations that erode humanitarian protections while claiming compliance with existing rules.
Another critical gap concerns the protection of cognitive and psychological integrity. International law traditionally prioritizes physical harm and material damage. Deepfakes, however, operate primarily through manipulation of belief, perception, and trust. While human rights law protects dignity, privacy, and mental well-being, it lacks explicit recognition of cognitive manipulation as a distinct form of harm. The growing discussion of cognitive liberty reflects awareness of this deficiency, but the concept remains underdeveloped and lacks formal legal status at the international level.
Attribution and enforcement represent further normative weaknesses. The law of state responsibility presumes that wrongful acts can be linked to identifiable actors through evidence of control or direction. Deepfakes are designed to obscure authorship, fragment evidence, and enable plausible deniability. International law does not currently offer adapted standards of proof or attribution tailored to synthetic media operations. This gap creates an accountability deficit, particularly when deepfakes are deployed through proxies or private actors operating across jurisdictions.
The evidentiary dimension also exposes systemic vulnerabilities. International courts and investigative mechanisms depend on audiovisual material to establish facts. Deepfakes undermine confidence in such evidence without providing clear procedural alternatives. There are no internationally agreed standards for authentication of digital media or for handling contested synthetic evidence. This absence threatens the effectiveness of accountability mechanisms and risks allowing serious violations to go unpunished.
The table below summarizes key normative gaps and their implications:
| Normative Gap | Legal Consequence |
| --- | --- |
| No legal definition of deepfakes | Fragmented interpretation |
| Limited regulation of disinformation | Gray-zone operations |
| Lack of IHL-specific guidance | Inconsistent application |
| Weak protection of cognitive integrity | Unaddressed forms of harm |
| Attribution difficulties | Accountability deficit |
| Evidentiary uncertainty | Erosion of judicial effectiveness |
These gaps are not merely theoretical. They have practical consequences for victims, states, and international institutions. Individuals harmed by deepfakes may lack effective remedies. States targeted by deepfake operations may struggle to respond lawfully and proportionately. International bodies may find their mandates constrained by uncertainty and contested facts. In each case, the absence of clear norms benefits those who exploit ambiguity.
The existence of normative gaps does not imply that international law is incapable of addressing deepfakes. Rather, it highlights the need for clarification, adaptation, and possibly normative development. The challenge is to close these gaps without undermining core principles such as freedom of expression, state sovereignty, and technological innovation. Achieving this balance is difficult, but failure to address these deficiencies risks allowing deepfakes to undermine the credibility and effectiveness of international law itself.
Recognizing normative gaps is therefore a necessary step toward more coherent regulation. It provides the analytical foundation for identifying where interpretive guidance, soft law instruments, or new legal standards may be required. Without such recognition, responses to deepfakes will remain reactive, fragmented, and insufficient in the face of a technology that systematically exploits legal uncertainty.
10. Toward an International Legal Framework on Deepfakes
The identification of normative gaps leads inevitably to the question of how international law should evolve to address deepfakes in a coherent and effective manner. The challenge is not to construct an entirely new legal order, but to adapt existing principles and institutional mechanisms to a technological environment in which reality itself can be convincingly fabricated. Any international legal framework on deepfakes must therefore balance legal certainty, protection of fundamental rights, and respect for legitimate uses of synthetic media.
A first step toward such a framework is conceptual consolidation. International law would benefit from a functional definition of deepfakes grounded in legal relevance rather than technical architecture. Such a definition should focus on intentional deception, impersonation, and the capacity to cause legally cognizable harm. This approach would allow international norms to remain technologically neutral while capturing the distinctive risks posed by synthetic media. Conceptual clarity is essential to avoid inconsistent interpretation across legal regimes and jurisdictions.
A second pillar of an international framework lies in interpretive development rather than treaty proliferation. Existing bodies of law already regulate deception, harm, and responsibility, but they require authoritative clarification in light of deepfake technology. In the context of armed conflict, interpretive guidance could clarify how international humanitarian law applies to synthetic media used in psychological operations, surrender manipulation, or civilian targeting. Such guidance would reinforce humanitarian protections without altering the fundamental structure of IHL.
In the field of international human rights law, states could be encouraged to recognize deepfake-related harms as violations of established rights, including privacy, dignity, and mental integrity. The articulation of state obligations to prevent, investigate, and remedy deepfake abuses would strengthen protection without necessitating new rights instruments. Over time, this interpretive practice could contribute to the gradual recognition of cognitive integrity as a protected legal interest, embedded within existing rights rather than standing apart from them.
An international framework must also address procedural and institutional dimensions. One of the most pressing needs is the development of shared standards for authenticity and verification of digital media. While international law does not traditionally regulate evidentiary techniques, the credibility of legal processes increasingly depends on the ability to distinguish genuine material from synthetic fabrication. International cooperation on technical standards, forensic methodologies, and capacity-building for investigators and courts would strengthen accountability mechanisms without compromising judicial independence.
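To make the evidentiary dimension more concrete, the sketch below illustrates one small building block on which shared verification standards commonly rest: recording a cryptographic hash of audiovisual material at the moment of collection, so that any later alteration or substitution, including replacement by synthetic content, becomes detectable. This is a minimal, hypothetical illustration in Python using only standard-library functions; the file name, record fields, and workflow are assumptions made for demonstration and do not reproduce any adopted international standard or forensic protocol.

```python
# Hypothetical sketch: hash-based chain-of-custody record for digital evidence.
# Illustrative only; not an existing international standard or forensic tool.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_collection(path: Path, collector: str) -> dict:
    """Create a chain-of-custody record at the time of collection."""
    return {
        "file": path.name,
        "sha256": sha256_of_file(path),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_integrity(path: Path, record: dict) -> bool:
    """Check that the file still matches the hash recorded at collection."""
    return sha256_of_file(path) == record["sha256"]


if __name__ == "__main__":
    evidence = Path("incident_video.mp4")  # hypothetical file name
    record = record_collection(evidence, collector="OSINT analyst")
    print(json.dumps(record, indent=2))
    print("Integrity verified:", verify_integrity(evidence, record))
```

In practice, provenance and content-credential initiatives layer digital signatures, secure timestamps, and capture-device attestation on top of this kind of hashing; questions of admissibility and probative weight nonetheless remain matters for courts and investigators.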
Attribution and responsibility require particular attention. While the law of state responsibility remains applicable, its effective operation in deepfake contexts depends on enhanced cooperation and information sharing. States could be encouraged to adopt due diligence standards tailored to synthetic media, including obligations to prevent large-scale malicious dissemination originating from their territory. Such standards would not impose strict liability, but they would clarify expectations regarding reasonable preventive measures in an interconnected information environment.
Soft law instruments offer a pragmatic pathway toward international coordination. Declarations, principles, or guidelines adopted by international organizations can provide normative direction without the rigidity of binding treaties. These instruments can articulate common standards, promote best practices, and facilitate convergence among diverse legal systems. In the context of deepfakes, soft law may be particularly well-suited to addressing a rapidly evolving technology while preserving regulatory flexibility.
The role of non-state actors must also be acknowledged. Digital platforms, developers, and intermediaries play a central role in the creation and dissemination of deepfakes. While international law traditionally addresses state conduct, modern regulatory frameworks increasingly recognize the influence of private actors on public interests. An international legal framework on deepfakes would therefore benefit from articulating expectations regarding corporate responsibility, transparency, and cooperation with lawful investigations, consistent with existing approaches to business and human rights.
Any movement toward an international framework must also confront political and structural constraints. Geopolitical rivalry, divergent regulatory philosophies, and concerns over sovereignty limit the feasibility of comprehensive binding agreements. Nonetheless, the absence of perfect consensus does not preclude incremental progress. International law has historically evolved through gradual clarification, practice, and normative alignment rather than through sweeping codification.
Ultimately, the goal of an international legal framework on deepfakes is not to eliminate synthetic media, but to ensure that its use does not undermine the legal order itself. Deepfakes challenge the capacity of international law to establish truth, assign responsibility, and protect individuals from harm. Responding to this challenge requires a measured approach that reinforces existing principles while adapting them to new realities. Without such adaptation, deepfakes risk becoming a structural vulnerability within the international legal system, exploited precisely because the law has not yet caught up with the manipulation of reality.
11. Conclusion: The Legal Status of Deepfakes in International Law
Deepfakes in international law occupy a paradoxical position. They are neither legally invisible nor comprehensively regulated. Existing international legal frameworks already apply to many of the harms caused by synthetic media, yet they do so indirectly, inconsistently, and often only after significant damage has occurred. This article has demonstrated that the central challenge posed by deepfakes is not their technological novelty, but their capacity to exploit structural vulnerabilities in the international legal order.
International humanitarian law, international human rights law, the law of state responsibility, and the collective security framework of the United Nations all provide tools that can be applied to deepfakes. Deceptive uses of synthetic media during armed conflict may violate prohibitions on perfidy, endanger civilians, and undermine humanitarian protections. In peacetime, deepfakes can infringe rights to privacy, dignity, and psychological integrity, particularly when they involve impersonation or non-consensual manipulation of identity. Large-scale disinformation campaigns amplified by deepfakes can destabilize societies and approach unlawful intervention, even when they fall below the threshold of armed attack.
At the same time, the analysis has revealed persistent normative gaps. International law lacks a shared functional definition of deepfakes, clear standards for attribution in synthetic media operations, and adapted evidentiary frameworks capable of preserving accountability in an environment of plausible fabrication. The absence of explicit guidance within international humanitarian law and the underdeveloped recognition of cognitive harm further weaken the legal response. These gaps do not render international law obsolete, but they limit its effectiveness in addressing a technology designed to manipulate perception rather than territory or force.
The legal status of deepfakes in international law is therefore best understood as conditionally regulated. Deepfakes are lawful or unlawful not by virtue of the technology itself, but according to their purpose, context, and effects. This functional approach aligns with the broader logic of international law, which evaluates conduct based on harm and obligation rather than tools. Synthetic media used for artistic, educational, or clearly fictional purposes falls outside the scope of international legal concern. Deepfakes deployed to deceive, coerce, or harm engage existing legal prohibitions, even in the absence of deepfake-specific rules.
A key insight emerging from this study is that uncertainty benefits those who exploit deepfakes strategically. Ambiguity in attribution, evidentiary reliability, and normative thresholds creates space for plausible deniability and weakens enforcement. The resulting accountability deficit risks normalizing practices that undermine trust, human dignity, and international stability. Addressing this problem requires not radical legal reinvention, but deliberate clarification and adaptation.
Progress toward a more coherent legal framework is both possible and necessary. Interpretive guidance, soft law instruments, and convergence around core principles can strengthen the application of existing norms without sacrificing flexibility or freedom of expression. International cooperation on verification standards, due diligence expectations, and protection of individuals against synthetic manipulation would enhance the resilience of the legal system. Over time, these developments may contribute to the recognition of new protected interests, such as cognitive integrity, grounded in established human rights doctrine.
Ultimately, deepfakes test the capacity of international law to function in an environment where reality itself can be engineered. Law depends on shared understandings of fact, responsibility, and harm. When these foundations are destabilized, legal norms risk losing their practical force. The challenge posed by deepfakes is therefore not marginal or technical, but structural. Responding effectively will determine whether international law can continue to serve its core purpose: regulating power, protecting individuals, and preserving order in an increasingly artificial information environment.
References
Chesney, R., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1820.
Shirish, A., & Komal, S. (2024). A socio-legal inquiry on deepfakes. California Western International Law Journal, 54(2), 517–553.
Kuźnicka-Błaszkowska, D., & Kostyuk, N. (2025). Emerging need to regulate deepfakes in international law: The Russo–Ukrainian war as an example. Journal of Cybersecurity, 11(1).
International Committee of the Red Cross. (2016). Commentary on the First Geneva Convention. ICRC.
International Committee of the Red Cross. (2016). Commentary on the Additional Protocols of 1977. ICRC.
United Nations. (1945). Charter of the United Nations.
United Nations General Assembly. (1966). International Covenant on Civil and Political Rights.
United Nations General Assembly. (2005). World Summit Outcome Document.
Kazaz, J. (2024). Regulating deepfakes: Global approaches to combatting AI-driven manipulation. Centre for Democracy & Resilience Policy Paper.
European Union. (2024). Artificial Intelligence Act.
European Union. (2022). Digital Services Act.
Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52.
Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2022). Deep learning for deepfakes creation and detection: A survey. Computer Vision and Image Understanding, 223.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
Hancock, J. T., & Bailenson, J. N. (2021). The social impact of deepfakes. Cyberpsychology, Behavior, and Social Networking, 24(3), 149–152.
United Nations International Law Commission. (2001). Articles on Responsibility of States for Internationally Wrongful Acts.
European Parliament. (2021). Artificial intelligence in the digital age: Implications for democracy and fundamental rights.
Lieber Institute for Law and Warfare. (2023). Information operations and the law of armed conflict.