On the Applicable Legal Frameworks and Regulatory Gaps: International Humanitarian Law and International Human Rights Law

The entanglement of civilian and military data ecosystems blurs legal boundaries, creating critical gaps in privacy and accountability protections.


Legal regimes governing situations of armed conflict and peacetime have traditionally been clearly defined, leaving little doubt as to which regime applies to which situation. However, the Militarisation of Tech challenges this tidy distinction. Instead, we are seeing the blurring of lines between actors, technologies, and the areas of deployment, financing, export, and regulation of certain technologies. This growing overlap between the on-the-ground and the remote, between war and peace, complicates the understanding and application of existing legal frameworks.

Traditionally, the distinction between the two legal regimes has been reinforced and justified by the different rationales they embody. International humanitarian law (IHL) applies in situations of armed conflict. Through treaty and customary law, it aims to protect individuals who are not, or are no longer, participating in hostilities, and to restrict the means and methods of warfare. It asserts that even wars have rules. International human rights law (IHRL), on the other hand, applies not only in peacetime but in all contexts. It obliges states and other actors to respect, protect and fulfil the human rights of persons under their jurisdiction. It is widely accepted that a regime of complementarity governs the relationship between these two bodies of law.

One of the central challenges posed by the Militarisation of Tech is states’ increasing reliance on companies whose advanced technologies, developed for civilian uses, also enable military applications. Civilian tech firms are often drawn into national security agendas, expected to contribute their expertise under the justification of protecting or advancing state interests. Most of these technologies are data-driven, relying on vast quantities of data to function “efficiently”. This blurring of roles raises two concerns. Firstly, civilian data, at times sensitive civilian data, is used to power military systems. In simple terms, our data helps to build military tools without our knowledge or consent. Secondly, technologies developed for military purposes are repurposed for use in civilian contexts, where they are often used to increase surveillance, suppress dissent, and harm democracy.

What Is the Problem?

The intersection of data, technology and armed conflict presents significant legal and ethical challenges, particularly when it comes to privacy and data protection. While these rights are more crucial than ever, especially in regulating data-intensive military systems, they are largely absent from the legal frameworks governing armed conflict. Meanwhile, data collection has become central to modern warfare, not just for targeting adversaries but as a driver of both military and civilian technological development, often without clear legal oversight.

As civilian tech companies, defence-tech companies and militaries invest in national security and defence - often at the expense of transparency, oversight, and accountability - it becomes more challenging to determine which actor’s conduct is regulated by the laws of armed conflict. We no longer know where the battlefield starts and where it ends. Private companies operating in both civilian and conflict zones face unpredictable legal risks and responsibilities, with some potentially losing civilian protections if their roles are deemed direct participation in hostilities.

Finally, the legal frameworks surrounding emerging technologies such as autonomous weapons and military decision-making support systems fail to adequately address how training data, often drawn from both civilian and military sources, is acquired or governed. This leaves critical questions of accountability and oversight unresolved.

Data-intensive technologies developed in civilian spaces are thus being transferred to the warzone, and vice versa, without due consideration of transparency and accountability. What legal issues arise in this transfer, and what are the consequences?

  1. Rights to Privacy and Data Protection in the Laws of War?

It is contested whether and how the rights to privacy and data protection apply in armed conflict. Yet these rights are needed more than ever to regulate and limit the adverse effects of data-intensive systems that exploit data in times of peace and war.

The rights to privacy and data protection do not receive a single explicit mention in the treaties governing the laws of war, nor in the customary rules identified by the International Committee of the Red Cross’s (ICRC) customary law study. This leads to contested and ambiguous legal conclusions, while actors involved in armed conflicts are able to operate in grey zones, developing, deploying and transferring their technologies from one context to another. Progressive interpretations are needed to identify a meaningful protection framework for the right to privacy in armed conflicts.

Consider the unresolved question in international humanitarian law of whether personal data qualifies as property. As a result, it is unclear whether the rules and restrictions under IHL on the protection, destruction, and seizure of property apply to personal data. Similarly, it is unclear whether data qualifies as an object under IHL, leaving ambiguity as to whether the rules governing the conduct of hostilities apply, especially the fundamental principle of distinction between military objectives and civilian objects.

  2. Unlimited Surveillance Powers in War?

Data collection and analysis have always been part of armed conflicts. However, in modern military operations, this process occurs at an unprecedented pace. This is largely because parties to a conflict increasingly rely on data to inform or justify their decisions.

Data collection has also become the primary resource driving both military and civilian technological development. Armies, intelligence agencies, and private companies may all be among the actors collecting and analysing personal and mass data. Such activities may remain unregulated, as conduct during armed conflict is governed by IHL, which is silent on data protection.

This begs the question: do actors in armed conflicts enjoy unrestricted freedom to collect and process data in ways that can later inform technologies employed in civic spaces?

In the absence of a comprehensive data protection framework applicable in armed conflict, the question remains open and rights-based interpretations are emerging. Some argue that all informational operations necessary to support military activities should be governed by the duty of constant care under IHL to spare the civilian population, civilians, and civilian objects. It is proposed that this duty “may serve as a temporary gap-filler to the lacuna that exists around data protection in IHL”. Accordingly, the duty of constant care in the context of digital-age warfare extends to all informational operations supporting military activity related to the armed conflict, such as “intelligence gathering, data collection, and management activities, regardless of the actor involved (private contractors or civilian intelligence agencies), and as long as these activities are intended to advance combat.”

  3. Legal Uncertainty for Businesses in Conflict Zones

Businesses operating in contexts of both peace and conflict face unpredictable legal duties, because their role and involvement in the conflict can vary. Private technology and Information and Communication Technology (ICT) companies may, through their involvement in activities related to armed conflict, bring harm to the civilian populations and objects under their influence. Furthermore, such companies and/or their staff may, depending on their functions, lose their protection under international humanitarian law and become directly targetable by parties to the armed conflict by virtue of directly participating in hostilities.

For instance, when a private actor provides satellite services to armed forces or armed groups, such as internet access or intelligence gathering used to plan or carry out a specific attack, this may be considered direct participation in hostilities. For as long as the supporting activity continues, those carrying it out may lose the protection granted to civilians. In other words, they risk becoming lawful targets of military force.

Consider how facial recognition technologies are transferred between peace and war, as in the case of Tel Aviv-based Corsight AI. The company provides facial recognition solutions for a wide range of uses, from public safety and tourism to airports and critical infrastructure, but also for warfare. According to The New York Times: “Israeli soldiers entering Gaza were given cameras equipped with the technology. Soldiers also set up checkpoints along major roads that Palestinians were using to flee areas of heavy fighting, with cameras that scanned faces.” This is one example of a data-intensive technology developed and enhanced using data from an oppressed society living in a context of systematic international law violations. Yet this is exactly what makes such systems attractive when they are marketed elsewhere as having been tested in real-life conditions.

Another facial recognition company, Clearview AI, has notoriously scraped facial images of individuals off the internet indiscriminately to build its system. Data protection authorities in Italy, Greece, France, Germany and Austria, among others, have found the company’s practices unlawful. Clearview AI nevertheless offered its services to the Ukrainian Defence Ministry for free, boasting that its database held 2 billion images from the Russian social media platform VKontakte.

  4. Data Governance in Emerging Military Technologies

Lastly, the legal governance framework falls short when it comes to emerging data-intensive technologies, such as autonomous weapons systems or decision support systems, including automated target profiling systems. In these technologies, civilian data is used to train algorithms for military purposes, and vice versa, raising complex legal and ethical questions around data governance.

Take Microsoft, for example. It has provided cloud technology and artificial intelligence systems to the Israeli military, including contracts for thousands of hours of technical support. This collaboration helped meet the technological needs of the Israeli military during the most intense phase of its offensive on Gaza.

Similarly, it remains unclear whether the existing legal frameworks around the study, development, acquisition or adoption of new weapons under international humanitarian law, as well as export control regimes, are sufficient to address these concerns. For instance, despite a wide range of concerns raised by expert organisations, the ongoing consultations to regulate autonomous weapons do not sufficiently address how the data that builds and trains these systems is acquired and protected. In short, there is little clarity on whose data helps build these systems.

It Is Time to Bridge This Gap

Data-intensive technologies are increasingly deployed in both civic spaces and military operations. The actors behind these technologies assume roles in both domains, and possibly also in the grey zone in between. In turn, civilian personal data feeds military technologies, and technologies built on military data are deployed in civic spaces. The applicable legal frameworks in this complex environment become ever more difficult to identify. This creates protection and accountability gaps, not least for the rights to privacy and data protection.